Saturday, December 31, 2016

Caltech Flat Lens can be Integrated onto Image Sensor

Engineers at Caltech have developed a system of flat optical lenses that are said to be easily mass-produced and integrated with image sensors, paving the way for cheaper and lighter cameras. The technology relies on stacking two metasurfaces. The metasurfaces are dotted with 600nm tall silicon cylinders that alter the way light passes through them.

The paper "Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations" by Amir Arbabi, Ehsan Arbabi, Seyedeh Mahsa Kamali, Yu Horie, Seunghoon Han & Andrei Faraon has been published in Nature Communications on Nov. 28, 2016.

Friday, December 30, 2016

AR/VR News Roundup

AR glasses maker Osterhout Design Group (ODG) has raised $58M at a valuation of $258M in a Series A financing round. Shenzhen O-film Tech Co., Vanfund Urban Investment & Development Co., and several individual investors participated in the round alongside principal outside investor and strategic partner 21st Century Fox.

“For eight years, we’ve taken a very systematic approach to designing and refining our smartglasses for specific applications from the US government, industry and enterprise customers in a wide variety of markets, and that will not change,” said Ralph Osterhout, CEO of ODG. “We carefully picked investment partners who not only share ODG’s product vision and growth strategy, but also have the reputation and reach to expand ODG’s global presence and market recognition."

The company is already showing its R-9 consumer AR glasses, which are to be officially announced at CES in a week:


James Mackie posted a video review of the R-9 prototype:



Apparently, Facebook's Oculus and Google are looking to add eye tracking technology to their VR products. Techcrunch and Forbes report that Oculus has acquired The Eye Tribe, a Copenhagen, Denmark-based company. The Eye Tribe was founded in 2011 to develop affordable, consumer-oriented eye-tracking technology:



Techcrunch, Bloomberg: In October 2016, Google acquired Eyefluence, an eye-tracking startup founded in 2013:

Thursday, December 29, 2016

Hitachi Lensless Sensor Principles

StackExchange hosts an interesting discussion on how the Hitachi lensless camera works. Steve Byrnes posts a well-reasoned guess that can explain it.

Wednesday, December 28, 2016

Trinamix Presents Hertzstück PbS Photodetectors

On the way to commercializing BASF's 3D imaging solution, its wholly owned subsidiary Trinamix presents "the new first bondable bare chip infrared PbS sensor: Hertzstück."

"For Hertzstück PbS detectors we have developed a new, patent pending encapsulation technique: Hertzstück is the first bare chip lead salt photoconductor that can be wire-bonded to printed circuit boards."

Rockchip Prepares Visual Processing Solution

Rockchip, one of the largest Chinese application processor manufacturers, presents demos of its soon-to-be-announced RV1108 vision and image processing SoC: "The comparison of drive test results verifies the great image processing performance between Rockchip visual processing solution RV1108 and other solution. The car DVR, based on RV1108, shows the excellent image quality in day shot, night shot and backlighted shot as well as the superior sharpness, hence highlighting the outstanding advantages of WDR of RV1108 image processing."

WDR demo
Sharpness demo
Low light demo

Reportedly, the Rockchip RV1108 visual SoC uses a CEVA-XM4 vision co-processor inside.

Tuesday, December 27, 2016

Sony Announces World's Smallest 1.06MP Camera Module

Sony announces the IU233N2-Z (color) and IU233N5-Z (monochrome) camera modules for the wearable and small mobile device market. Sony says they are the world's smallest 1MP camera modules as of Nov. 2016. The module size is 2.6 mm (W) × 3.3 mm (D) × 2.32 mm (H), while its image sensor size is 2 mm × 2 mm and the super-small low-profile lens head size is 2.6 mm × 2.6 mm. The module is also equipped with a low power consumption mode that reduces power consumption by approximately 10% to 60% compared to existing image sensors with an equivalent number of pixels for mobile applications.

The IU233N2-Z and IU233N5-Z will help to reduce the size and weight of mobile devices such as head-mounted displays and smart watches, and to develop new markets such as IoT and drones. The head part alone weighs just 0.02 g and the weight including the flexible printed circuit board is 0.1 g or less. The power consumption during all-pixel output (1.06M-pixel) is approximately 55 mW at 60fps, 22 mW at 15fps, and 8 mW at 3fps.

Himax Announces 5.5MP UltraSenseIR BSI Sensor

GlobeNewsWire: Himax Imaging announces the HM5530 UltraSenseIR low power and low noise 5.5MP BSI CMOS sensor with more than 40% NIR QE for the next generation of computer vision applications such as 3D cameras, autonomous navigation, event and pattern recognition, and human-machine interaction.

“Computer and machine vision devices operate unnoticed to human perception by using Near Infrared light sources and sensors because NIR is not visible to the human eye. Silicon based image sensor, which is the dominant technology for visual imaging applications due to advantages in cost, power and scalability, is not sensitive to the Near Infrared spectrum and needs to be compensated by using relatively large pixel sizes to confine the resolution of the sensor that can fit in an embedded device,” said Amit Mittra, CTO of Himax Imaging. “With our breakthrough UltraSenseIR technology, Himax combines the advantages of our highly integrated, low power and low noise BSI sensor technologies with NIR and visible sensitivity in a very small 2.0-micron pixel size to enable high resolution computer vision and NIR imaging applications.”

The HM5530 consumes less than 140mW at 5.5MP and 30fps over an industry-compliant MIPI CSI-2 interface. The sensor timing supports frame synchronization to an external LED or laser diode light source.

Monday, December 26, 2016

Lensless Future?

Nikkei publishes an article "The future of photography looks lensless," mostly about the Hitachi lensless camera prototype. A few quotes:

"In Japan, the leader is Hitachi, which recently announced the development of the country's first lensless camera. Over in the U.S., Rice University and semiconductor developer Rambus are leading the way. At this point, Hitachi's technology stands out because of its speed.

Hitachi's system processes images as much as 300 times faster than other lensless cameras developed to date. What is more, the focus can be adjusted after the image has been captured by the sensor.

A number of technical issues still need to be resolved -- particularly the hazy look of the photos -- but Hitachi hopes to have a practical model ready in two years.

Japanese companies have established themselves as leaders in the imaging business, but now they face the challenge of securing slices of the nascent lensless market.
"

Hitachi camera prototype

Yole on Thinned Wafers Demand

The Yole Developpement report on thinned wafers has an interesting comparison of image sensor wafer thickness vs. other wafer types:

Sunday, December 25, 2016

TSMC ToF Pixel Patent Application

TSMC patent application US20160358955 "Depth sensing pixel, composite pixel image sensor and method of making the composite pixel image sensor" by Calvin Yi-ping Chao, Kuo-yu Chou, and Chih-min Liu proposes a global shutter pixel with dual storage diodes optimized for ToF imaging:


Then, TSMC proposes different ways to combine the ToF pixel with regular 4T pixels in non-stacked and stacked versions:
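
For context, the reason a ToF pixel wants two storage nodes is that indirect ToF estimates distance from the ratio of charge collected in two consecutive time-gated windows. Below is a minimal sketch of the generic two-tap pulsed iToF depth calculation; it illustrates the principle only and is not taken from the TSMC application.

```python
# Minimal sketch of the generic two-tap pulsed indirect ToF depth estimate.
# Q1 is the charge integrated during the light pulse window, Q2 the charge
# integrated in the window immediately after it; t_pulse is the pulse width.
# This shows the principle only, not TSMC's specific pixel circuit.

C = 299792458.0  # speed of light, m/s

def itof_depth(q1: float, q2: float, t_pulse: float) -> float:
    """Depth from the charge ratio of the two storage nodes (pulsed iToF)."""
    total = q1 + q2
    if total <= 0:
        raise ValueError("no signal collected")
    # The fraction of the returning pulse that spills into the second window
    # is proportional to the round-trip delay.
    return 0.5 * C * t_pulse * (q2 / total)

# Example: 30 ns pulse, 40% of the echo lands in the second window -> ~1.8 m
print(f"{itof_depth(600.0, 400.0, 30e-9):.2f} m")
```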

Saturday, December 24, 2016

Almalence Interview

Forbes publishes an interview with Eugene Panich, Almalence CEO and founder. A few interesting quotes:

"Almalence's imaging solutions are licensed by top smartphone manufacturers, shipping more than 30 million high-end devices annually.

SuperSensor uses multiple input frames to produce a single image, thus collecting more light and rendering a higher resolution, producing images that could only otherwise be taken with a camera with a higher resolution sensor and bigger lens. Also, by utilizing the vision processing cores available in high-end chipsets, SuperSensor improves the quality of not only still images, but video as well. So far, none of our competitors have shown any improvements close to that.
"

Friday, December 23, 2016

Toshiba Presents 0.7 x 0.7mm Endoscopic Camera

Toshiba introduces the IK-CT2, an ultra-small, chip-on-tip video camera system. The 0.7 x 0.7mm BSI CMOS sensor features 220 x 220 pixels, a 60fps frame rate, and an integrated 120-deg FOV F4.5 glass lens.

Custom Image Sensor Design Houses News

A new custom design company, Imagica Technology, has just been formed by Rob Hannebauer in Vancouver, Canada. Its first designs are a couple of custom CMOS sensors for an unidentified company, but it will also offer (through Alternative Vision) a series of CMOS line scan sensors designed to be pin- and signal-compatible with the discontinued Sony types widely used in spectrometers and other instruments.

Imagica is currently fully occupied with its funded design backlog, probably for about a year, but after that it will be able to take on new design programs.

Thanks to DG for the news and links on that!

Caeleste publishes a presentation by Jan Vermeiren on some of its past and ongoing projects. The presentation includes sales and profit charts, quite unusual for a privately held company:


Renato Turchetta has formed a new custom image sensor design company, Wegapixel, based in the UK. From the company website: "WegaPixel is a one stop shop for CMOS image sensors. In addition to conventional imagers, the company brings together unique expertise in applications including high speed imaging, electron microscopy, X-ray digital imaging as well as UV and IR imaging."

As LaserFocusWorld and Photonics report, Wegapixel has financial backing from the camera company Specialised Imaging.

Thursday, December 22, 2016

Hamamatsu Image Sensor Lineup

Hamamatsu publishes an updated Dec. 2016 book of its image sensors featuring nice insets with popular explanations of image sensor technology:

LeddarTech Explains 2D, 3D and MEMS Scanning Automotive LiDAR Operation

LeddarTech publishes two Youtube videos explaining 2D and 3D flash LiDAR and MEMS scanning LiDAR principles:



Update: Since LeddarTech has removed its Flash LiDAR video, here is a Quanergy lecture on its own version, delivered by Louay Eldada, the company's CEO and founder:

Tuesday, December 20, 2016

Tessera FotoNation and VeriSilicon Partner on Vision Processor

BusinessWire: FotoNation and Shanghai, China-based VeriSilicon are to jointly develop an image processing platform that is promised to offer best-in-class programmability, power, performance and area for computer vision (CV), computational imaging (CI) and deep learning. The new IP platform, named IPU 2.0, will be available for customer license and design in the first quarter of 2017. IPU 2.0 is to offer a unified programming environment and pre-integrated imaging features for a wide range of applications across surveillance, automotive, mobile, IoT and more.

“VeriSilicon’s proven and scalable vision processor core family has been adopted by world’s leading automotive and surveillance suppliers over the years,” said Weijin Dai, EVP, chief strategy officer and GM of the IP Division, VeriSilicon. “These highly power efficient, scalable processors with deep learning CNN technology in a unified programing environment unleashes great potential in an increasingly connected world with big data analytics and artificial intelligence. Working closely with FotoNation, we will jointly provide the total solution to address the challenges in intelligent devices.”

ESPROS Announces New Products

The ESPROS December 2016 newsletter says that its ToF chips are "being used in more and more volume products... In the consecutive months we will extend our ToF product line with a 160x60 and a fast 8x8 pixel array. With these new products we respond to your requirements and to many technical discussions."

"We will begin the new year with the launch of a new spectrometer chip product at Photonics West in San Francisco.

For quite some time we have been searching for micro-patterned spectral filter arrays to combine them with our imager technology. We found the perfect solution and a great cooperation partner at VIAVI Solutions. Their 8x8 filter arrays in the visible or near infrared regime are a perfect match for our sensitive chip. Filter and imager combine to a miniature size spectral sensor with 64 channel resolution with only 2.7 x 2.7 mm footprint and 1.1 mm height.
"

Monday, December 19, 2016

3D Imaging for AR Devices

Vance Vids publishes a series of videos on Magic Leap AR glasses concepts. While the first two videos are quite speculative, the third one has a nice explanation of why a 3D camera is so important in AR devices:

Smartphone Features 30MP (?) Selfie Camera

Gizmochina reports that the Chinese company 8848 Phone has launched a high-end smartphone featuring a 21MP main camera at the rear and a 30MP front-facing selfie camera. The phone has a number of other high-end features and is priced at $2,880.


Update: As noted in the comments, there appears to be an error in the Gizmochina report, and the actual front camera resolution is 3MP.

Sunday, December 18, 2016

GS Sensors at IEDM

IEEE Spectrum publishes a review of two global shutter papers presented at IEDM 2016. Tohoku University, Japan, presents a 1M fps image sensor with a 480-frame trench-capacitor-based analog memory. The sensor resolution is 96 x 128 pixels:


The Canon GS sensor has a 4046 x 2496 pixel resolution and features one charge-based memory per pixel supporting multiple accumulation cycles. It can operate at up to 120fps. With 4 accumulations, the Canon sensor can achieve 92dB DR at 30fps:
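
The quoted 92dB figure is consistent with the usual definition of dynamic range as 20·log10 of the maximum signal over the read noise, where extra accumulation cycles multiply the maximum signal. The sketch below uses assumed full-well and noise values, not Canon's published numbers:

```python
# Back-of-the-envelope DR calculation: DR_dB = 20*log10(max_signal / read_noise).
# Full well and read noise values here are assumed for illustration only.
import math

def dr_db(full_well_e: float, read_noise_e: float, n_accum: int = 1) -> float:
    """Dynamic range in dB when n_accum charge accumulations extend the max signal."""
    return 20 * math.log10(n_accum * full_well_e / read_noise_e)

full_well, read_noise = 30000.0, 3.0            # hypothetical values, e-
print(f"1 accumulation : {dr_db(full_well, read_noise, 1):.1f} dB")
print(f"4 accumulations: {dr_db(full_well, read_noise, 4):.1f} dB")
# Each doubling of accumulations adds ~6 dB, so 4x adds ~12 dB.
```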

Friday, December 16, 2016

SNR Explanation

Albert Theuwissen publishes an excellent discussion of the SNR definition, including some rarely mentioned details:

"in the case the sensor is used for video applications, very often the photon shot noise is omitted in the total noise Nout, and actually the SNR listed in the data sheets is much higher than what the reality will bring. If the sensor is used for still applications, mostly the photon shot noise is included in the total noise Nout."

Canon Applies for RGBW12 CFA Patent

CanonWatch quotes the Japanese-language Egami site reporting that Canon has applied for an RGB-W CFA pattern patent. The US version of this patent is available here and appears to describe an image processing pipeline for the RGBW12 pattern proposed in the company's 2011 patent application:

◯, Δ, and x are assigned in descending quality order.

Incidentally, SonyAlphaRumors reports, under its "impossible rumors" heading, that the next generation Sony full-frame ILC might feature an RGBW CFA:

"The Sony A7S III that will be announced next spring will be the first interchangeable lens camera featuring a RGBW sensor. This will result in an sensitivity increase of 84%. Additional improvements and the cooper wired backlight sensor will result in a total of over 1 fstop of noise improvement."

IDC on AR/VR Market Prospects

In spite of reports of lower than expected sales of VR headsets, IDC keeps an optimistic view of the AR/VR market:

"Worldwide AR and VR headset shipments are expected to see a compound annual growth rate (CAGR) of 108.3% over 2015-2020 forecast period, reaching 76.0 million units by 2020. The more affordable VR devices will continue to lead the market in terms of volume. However, IDC expects AR headsets to pick up momentum over the forecast as more affordable technologies and more OEMs enter the market."

"2016 has been a defining year for AR as millions of consumers were introduced to Pokemon Go and, on the commercial side, developers and businesses finally got their hands on coveted headsets like Microsoft's HoloLens," said Jitesh Ubrani senior research analyst for IDC Mobile Device Trackers. "AR may just be on track to create a shift in computing significant enough to rival the smartphone. However, the technology is still in its infancy and has a long runway ahead before reaching mass adoption."

"Augmented reality represents the larger long-term opportunity, but for the near term virtual reality will capture the lion's share of shipments and media attention," said Tom Mainelli, program VP, Devices & AR/VR. "This year we saw major VR product launches from key players such as Oculus, HTC, Sony, Samsung, and Google. In the next 12 months, we’ll see a growing number of hardware vendors enter the space with products that cover the gamut from simple screenless viewers to tethered HMDs to standalone HMDs. The AR/VR headset market promises to be an exciting space to watch."


Update: Canalys and TrendForce also forecast a bright VR market future.

Thursday, December 15, 2016

Sony IEDM Paper

Nikkei reviews the Sony IEDM 2016 paper on a polarization-sensitive image sensor.

Regular polarization sensor (left) and Sony sensor (right)
Polarizer grids (left). Top AR layer prevents image ghosts.

Wednesday, December 14, 2016

e2v Emerald 2.8um Global Shutter Sensors Presentation

This post has been removed by e2v request.

Anteryon Introduces Modular SpectroBlocks

Anteryon presents an interesting concept of a modular architecture for fit-for-use spectroscopy: SpectroBlocks - miniature spectrum analyzers that can be assembled by simply clicking them together:


The units on the picture illustrate the SpectroBlocks roadmap:
  • a dual channel VIS-NIR broadband module (400nm – 950nm, resolution < 3nm) with USB3.
  • a broadband VIS-SWIR module (400nm – 1700nm), resolution: 2 nm (VIS), 6 nm (SWIR).
  • a cubic inch UV-NIR module (200nm- 1000nm), resolution < 1nm and USB3.
  • a 1 cc dual-channel VIS-VIS/NIR module, resolution

The modules utilize 2D CMOS image sensors for the VIS-IR spectral range and miniaturized InGaAs sensors for SWIR. The detector-grating unit, the optical heart of the micro-spectrometer, exploits Anteryon’s proprietary miniaturized optics based on replication technology.

Compared to competitive products, such as those of Hamamatsu, Fringoe, Consumer Physics, and nanoLambda, the SpectroBlocks are said to show better resolution and a wider spectral range, making them better suited for food and tissue analysis. For example, the Consumer Physics SCiO device has a limited spectral range of 700nm – 1100nm in the NIR, while the SpectroBlocks modules cover the full VIS+NIR range for more reliable results. The unique feature of the VIS-SWIR module is that it measures from 400nm up to 1700nm in one compact device, shown in the picture.

In addition to the grating-based modules, Anteryon is developing also SpectroBlocks with tunable optical filters based on piezo-driven Fabry-Pérot mirrors, for multispectral imaging in the visible, NIR and SWIR spectral ranges.

Tuesday, December 13, 2016

Synaptics Partners with OXi Technology on Optical Fingerprint Sensors

GlobeNewswire: Synaptics announces an exclusive partnership with OXi Technology, a Shanghai-based developer of optical fingerprint technology. The collaboration includes combining technology from each company in the development of new proprietary optical sensing solutions for smartphones, tablets and PCs.

Synaptics has signed an agreement to make a minority investment in OXi and expects the investment to close within a few weeks. Further details of the agreement are currently confidential.

GlobeNewswire: Synaptics announces an industry-first family of Natural ID biometric authentication solutions that leverage optical-based fingerprint sensors for smartphones and tablets. The new Synaptics FS9100 optical fingerprint sensor family is capable of high-resolution scanning through 1mm of full cover glass and enables clean, button-free industrial designs. Under cover glass biometrics eliminates button cut-outs and glass thinning processes required by capacitive under-glass sensors, leading to glass yield improvements. The optical solution also excels with wet finger performance, and being protected by glass, is durable, scratchproof, waterproof, and eliminates ESD concerns.

The FS9100 sensor leverages unique Synaptics optical technology developed for mobile devices and breaks through key technical barriers with an extremely thin form factor and minimal power consumption. The new sensors are scheduled to sample in early CYQ1 with mass production in CYQ2.


OXi Technology provides some more info on its optical fingerprint sensors:

T.O.T Technology Principle:

TOT is a glass-based Thin Optical Touch technology. The fingerprint image is captured by light-sensitive pixels: light reflected from the finger through the cover glass generates photoelectrons, which are digitized by a specially designed high-speed, low-noise 16-bit A/D circuit. The data is then processed by OXi's image processing technology into a sharp, clear image.
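
As a generic illustration of the signal chain described above (not OXi's actual pipeline; the ridge pattern, ADC scaling, and contrast stretch below are assumptions for illustration), a low-contrast reflection image is digitized by a 16-bit ADC and then contrast-enhanced:

```python
# Generic sketch of the described chain: light-sensitive pixels -> 16-bit ADC ->
# contrast enhancement. The ridge pattern and processing are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
h, w = 120, 120
y, x = np.mgrid[0:h, 0:w]
# Simulated low-contrast fingerprint-like ridge pattern seen through cover glass.
analog = 0.50 + 0.02 * np.sin(0.4 * x + 0.1 * y) + rng.normal(0, 0.002, (h, w))

# 16-bit A/D conversion of the pixel signal (full scale = 1.0).
codes = np.clip(np.round(analog * 65535), 0, 65535).astype(np.uint16)

# Simple contrast stretch to turn the faint ridge signal into a clear image.
lo, hi = np.percentile(codes, (1, 99))
enhanced = np.clip((codes.astype(float) - lo) / (hi - lo), 0, 1)

print(f"raw code range: {codes.min()}..{codes.max()} of 65535")
print(f"enhanced range: {enhanced.min():.2f}..{enhanced.max():.2f}")
```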

T.O.T Highlights:
  • Optical, but thin.
  • 1mm thickness
  • 1:1 ratio of capture size to fingerprint image
  • High durability, 9H hardness
  • High ESD protection (above ±20kV)
  • Thin, but not expensive. (Glass-based sensor)

Forza on Stacked Image Sensor Options for the Fabless

BusinessWire: Forza Silicon president Barmak Mansoorian will present a talk titled “3D Integrated Image Sensor: Options for the Fabless” at the 2016 3D Architectures for Semiconductor Integration and Packaging (ASIP) Conference in San Francisco on December 14. The presentation discusses the benefits, challenges, and opportunities of stacked CMOS sensors. Stacked sensor design, once a new concept, has been proven viable in mass-market consumer electronics over the last two years.

The most obvious benefit of stacking is a smaller footprint for the CIS and processor combo. However, stacking still faces some specific challenges starting with higher NRE costs and limited technology processes available to select customers. The added complexities during characterization, qualification, and production will require customers to carefully consider design choices. The presentation will describe how Forza is addressing the challenges by developing custom stacking flows involving intelligent design choices and good product engineering.

“Forza Silicon has been working on stacked CMOS image sensors since 2011 as part of the DARPA SCENICC Program and other DoD initiatives,” said Mansoorian. “Stacked sensor design is something we believe will dominate advances in the CIS marketplace for the next 3 to 5 years, and Forza will continue to improve image sensor design processes to help our customers take advantage of enhanced technology options.”

Monday, December 12, 2016

NIT Logarithmic Sensor for Smart Vision Cameras

SpectroNet publishes a video presentation by Yang Ni of New Imaging Technologies (NIT) on a logarithmic sensor for smart vision cameras:

Pixpolar Technology in PhD Thesis

Lappeenranta University of Technology (LUT), Finland, publishes the PhD thesis "Novel Solutions for Improving Solid-State Photon Detector Performance and Manufacturing" by Vladislav Marochkin. The thesis discusses the Pixpolar MIG pixel, silicon drift detectors, and 3D radiation detectors. The doctoral dissertation defense is scheduled for December 15, 2016. The discussion of the photon detectors' operational principles starts with an interesting quote on page 17:

"When a photon interacts with silicon it creates electron-hole pairs, the amount of which is dependent on photon energy. “The energy required to create an electron-hole pair in silicon is 3.6 eV. The band gap of silicon is 1.12 eV at room temperature and the average energy needed to create a single pair, called the radiation ionization energy, is empirically found to be about three times the band gap energy of the semiconductor. This is due to the phonon excitation, which is required for momentum conservation” (Luukka, 2006)."

Teledyne to Acquire e2v for $789M

BusinessWire: Teledyne and e2v jointly announce that they have reached agreement on the terms of a cash acquisition to be made by Teledyne for e2v. The aggregate enterprise value for the transaction is expected to be approximately £627 million (or approximately $789 million) taking into account e2v stock options and net debt. For the year ended March 31, 2016, e2v had sales of approximately £236 million. Excluding transaction-related expenses, Teledyne management expects the transaction to be accretive to earnings per share. It is expected that the acquisition will be completed in the first half of calendar 2017.

"We have followed e2v for more than a decade. Over time, as both Teledyne and e2v evolved, our businesses have become increasingly aligned. In fact, every business within e2v is highly complementary to Teledyne. As important, there is minimal product overlap,” said Robert Mehrabian, Chairman, President and CEO of Teledyne.

“For example, we are both leaders in space and astronomy imaging, but Teledyne largely provides infrared detectors and e2v provides visible light sensors... Teledyne serves the healthcare market with specialized X-ray sensors. In machine vision applications, e2v’s advanced capabilities in proprietary CMOS sensor design add to Teledyne’s strengths in cameras and vision systems.”

Sunday, December 11, 2016

IR on the Recent Image Sensor News

Imaging Resource publishes its Editor-in-Chief Dave Etchells' analysis of the recent image sensor news: Nikon's 2-layer PDAF image sensor patent application, Canon's curved sensor application, and the Tamron HDR sensor.

CAOS-CMOS Camera Promises 1000x Dynamic Range Improvement

LaserFocusWorld: Nabeel A. Riza, University College Cork, Ireland, and colleagues say they have demonstrated the Coded Access Optical Sensor (CAOS) CMOS camera, or CAOS-CMOS, with a three-orders-of-magnitude improvement in camera DR when compared to a conventional CMOS camera.


"Light from an external object is directed by a lens (L1) onto the agile pixels plane of a programmable digital micromirror device (DMD). To initiate the imaging operation, the DMD micromirrors are set to spatially route the incident light to the CMOS sensor to create an initial target-scene irradiance map. Based on this initial image intelligence, the DMD is programmed in its CAOS mode to create specifically located agile pixels that sample image zones of interest.

This agile-pixel programming capability via the DMD allows the agile pixels to operate with different time-frequency coding methods such as frequency/code/time division multiple access (FDMA/CDMA/TDMA) schemes common in cell-phone radio-frequency (RF) communications.

Experiments demonstrate a CAOS-CMOS camera dynamic range of 82.06 dB, which can be improved upon by further optimization of the camera hardware and image processing.
“The CAOS camera platform, when used in unison with current multipixel sensor camera technology, is envisioned to enable users to make a smart extreme-dynamic-range camera, opening up a world of the yet unseen,” says Nabeel Riza.
"

Nabeel Riza publishes a Youtube video explaining the CAOS-CMOS camera principles:



Update: A new paper "Demonstration of 136 dB dynamic range capability for a simultaneous dual optical band CAOS camera" by Nabeel A. Riza and Pablo La Torre has been published in Optics Express. "...the experimental camera demonstrates an agile pixel extreme dynamic range of 136 dB, which is a 56 dB improvement over the previous CAOS-imaging demonstrations."

Saturday, December 10, 2016

CCD vs CMOS - Zoom Needed to See CCD Market Share

The Vision Show, held in Nov. 2016 in Stuttgart, Germany, publishes presentations from the different companies at the show. ON Semi's presentation "AND not OR CCD & CMOS Technologies in Industrial Markets" by Michael DeLuca quotes the latest CCD vs CMOS market data from TSR. Serious magnification is needed to see CCDs on the chart:


Still, CCDs keep a major market share in machine vision applications, according to AIA. Possibly, AIA means an installed base here, rather than yearly sales:


Framos presents its research on machine vision components, including image sensors: