Friday, February 24, 2017

InVisage Explains its 1.1um Pixel Global Shutter Operation

InVisage presented a paper "Device design for global shutter operation in a 1.1-μm pixel image sensor and its application to near infrared sensing" by Zach M. Beiley, Robin Cheung, Erin F. Hanelt, Emanuele Mandelli, Jet Meitzner, Jae Park, Andras Pattantyus-Abraham, and Edward H. Sargent at the SPIE Physics and Simulation of Optoelectronic Devices XXV conference, held on Jan. 30-Feb. 2 in San Francisco.

The global shutter operation is based on the dependence of the QuantumFilm's QE on the bias voltage:
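The idea, in rough terms: with the bias on, the film converts photons to charge at full QE; with the bias off, conversion drops to near zero, so every pixel's exposure ends at the same instant. A minimal sketch of that behavior, with made-up QE values and timings rather than InVisage's actual device parameters:

```python
import numpy as np

# Illustrative model only: the film's QE switches between a high value
# (bias on) and near zero (bias off); the numbers are made up.
QE_ON, QE_OFF = 0.6, 0.01

def expose(photon_flux, t_total, t_shutter, dt=1e-4):
    """Integrate pixel signal; the global bias drops at t_shutter."""
    t = np.arange(0.0, t_total, dt)
    qe = np.where(t < t_shutter, QE_ON, QE_OFF)  # one bias waveform for all pixels
    # Every pixel sees the same QE(t), so integration effectively
    # stops everywhere at the same instant -- a global shutter.
    return (photon_flux[..., None] * qe * dt).sum(axis=-1)

flux = np.random.uniform(1e4, 1e5, size=(4, 4))      # photons/s per pixel
signal = expose(flux, t_total=0.02, t_shutter=0.01)  # collected charge per pixel
```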

Waymo Accuses Uber-Acquired Otto of Stealing LiDAR Secrets

Bloomberg: It took Alphabet’s Waymo seven years to design and build a LiDAR for its self-driving cars. The Uber-acquired startup Otto allegedly did it in nine months. Waymo claims in a lawsuit that its former employees stole the designs and technology and started a new company. Waymo says:

"One of the most powerful parts of our self-driving technology is our custom-built LiDAR — or “Light Detection and Ranging.” LiDAR is critical to detecting and measuring the shape, speed and movement of objects like cyclists, vehicles and pedestrians.

Hundreds of Waymo engineers have spent thousands of hours, and our company has invested millions of dollars to design a highly specialized and unique LiDAR system. Waymo engineers have driven down the cost of LiDAR dramatically even as we’ve improved the quality and reliability of its performance. The configuration and specifications of our LiDAR sensors are unique to Waymo. Misappropriating this technology is akin to stealing a secret recipe from a beverage company.

In 2016, Uber bought a six-month old startup called Otto and appointed its founder (a former employee on our self-driving car project) as its head of self-driving technology. At the time, it was reported that Otto’s LiDAR sensor was one of the key reasons Uber acquired the company.

Recently, we received an unexpected email. One of our suppliers specializing in LiDAR components sent us an attachment (apparently inadvertently) of machine drawings of what was purported to be Uber’s LiDAR circuit board — except its design bore a striking resemblance to Waymo’s unique LiDAR design.

We found that six weeks before his resignation this former employee, Anthony Levandowski, downloaded over 14,000 highly confidential and proprietary design files for Waymo’s various hardware systems, including designs of Waymo’s LiDAR and circuit board. To gain access to Waymo’s design server, Mr. Levandowski searched for and installed specialized software onto his company-issued laptop. Once inside, he downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation. Then he connected an external drive to the laptop. Mr. Levandowski then wiped and reformatted the laptop in an attempt to erase forensic fingerprints.

Beyond Mr. Levandowski’s actions, we discovered that other former Waymo employees, now at Otto and Uber, downloaded additional highly confidential information pertaining to our custom-built LiDAR including supplier lists, manufacturing details and statements of work with highly technical information.
"

No Lens is Needed to See Simple Images

University of Utah (Salt Lake City, USA) researchers show that simple 32x32 images can be seen by a bare image sensor with no lens. The open-access paper "Lensless Photography with only an image sensor" by Ganghun Kim, Kyle Isaacson, Racheal Palmer, and Rajesh Menon has been published on arXiv.org. From the abstract:

"Photography usually requires optics in conjunction with a recording device (an image sensor). Eliminating the optics could lead to new form factors for cameras. Here, we report a simple demonstration of imaging using a bare CMOS sensor that utilizes computation. The technique relies on the space variant point-spread functions resulting from the interaction of a point source in the field of view with the image sensor. These space-variant point-spread functions are combined with a reconstruction algorithm in order to image simple objects displayed on a discrete LED array as well as on an LCD screen. We extended the approach to video imaging at the native frame rate of the sensor. Finally, we performed experiments to analyze the parametric impact of the object distance. Improving the sensor designs and reconstruction algorithms can lead to useful cameras without optics."

Example images taken with the VGA sensor. The left column shows the objects displayed on the LED matrix. The 2nd column shows the raw sensor images. The 3rd column shows the reconstructed images before any processing. The right column shows the reconstructed images after binary thresholding.
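The reconstruction can be viewed as a linear inverse problem: calibrate the sensor's response to each object pixel (the space-variant PSFs), stack those responses as columns of a matrix A, then solve for the object from a raw frame b. Below is a minimal sketch of that idea using regularized least squares; the matrix sizes, noise level and regularization weight are illustrative, and the authors' actual algorithm may differ:

```python
import numpy as np

# Minimal sketch of the linear-inverse-problem view of lensless imaging.
# Column j of A is the sensor's response (space-variant PSF) to LED j of
# the 32x32 array; here A is a random stand-in for a measured calibration.
n_obj = 32 * 32                      # unknown object pixels
n_meas = 64 * 64                     # sensor pixels used (assumed)
A = np.random.rand(n_meas, n_obj)

x_true = np.zeros(n_obj)
x_true[100:110] = 1.0                # a few lit LEDs
b = A @ x_true + 0.01 * np.random.randn(n_meas)   # noisy raw sensor frame

# Tikhonov-regularized least squares: argmin ||Ax - b||^2 + lam * ||x||^2
lam = 1e-2
x_rec = np.linalg.solve(A.T @ A + lam * np.eye(n_obj), A.T @ b)
image = x_rec.reshape(32, 32)        # reconstructed object
```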

Panasonic to Commercialize Heart Rate Extraction from Video

Nikkei: Panasonic exhibits a technology to accurately measure the heart rate of a person on video. The company's Contactless Vital Sensing utilizes the change in the skin's light reflectance caused by blood. The reflectance changes with the contraction of blood vessels caused by the heartbeat.

Panasonic aims to commercialize the technology in 2018 in such applications as sports, checking the stress of employees in call centers, preventing car drivers from falling asleep, etc.
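Panasonic has not disclosed its processing pipeline, but the underlying remote-photoplethysmography recipe is standard: average the skin pixels in each frame and look for the dominant frequency in the physiologically plausible band. A minimal sketch, assuming a pre-cropped skin region:

```python
import numpy as np

def heart_rate_bpm(frames, fps):
    """Estimate pulse from a stack of skin-ROI frames (T, H, W).

    Basic remote-PPG recipe, not Panasonic's method: the tiny
    reflectance modulation from blood volume shows up in the mean
    ROI brightness; its dominant frequency in the 0.7-4 Hz band
    (42-240 bpm) is taken as the heart rate.
    """
    trace = frames.reshape(len(frames), -1).mean(axis=1)
    trace = trace - trace.mean()                 # remove the DC level
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)         # plausible pulse band
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# e.g. 10 s of video at 30 fps of a cheek region:
# bpm = heart_rate_bpm(frames, fps=30)
```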

Thursday, February 23, 2017

Samsung Mobile Processor Supports 4K 120fps Video, 28MP Cameras

Samsung announces its latest application processor (AP), the Exynos 9 Series 8895, manufactured in a 10nm FinFET process. Its imaging, video, and machine vision features are rather impressive:

"It supports recording and playback of video contents at maximum resolution of 4K UHD at 120fps with the latest video codec including HEVC(H.265), H.264 and VP9.

[Exynos 8895] ISP supports high resolution up to 28MP for each rear and front camera with advanced features such as Smart WDR and PDAF. Exynos 8895 features dual ISP that consists of one ISP dedicated for high quality and the other for low power. Thus, it enables various combination of dual camera scenario for DSLR-like photography experience while consuming very low power.

Exynos 8895 features VPU (Vision Processing Unit) which is designed for machine vision technology. This technology improves the recognition of an item or its movements by analyzing the visual information coming through the camera. Furthermore, it enables advanced features such as corner detection that is frequently used in motion detection, image registration, video tracking and object recognition.
"

Tokyo University and Sony Vision Chip Demo

Tokyo University Ishikawa Watanabe Lab publishes a second demo of its vision chip based on Sony stacked sensor technology:

Qualcomm Snapdragon 835 VR Development Kit Includes 4 Cameras

PRNewswire: Qualcomm introduces a new VR development kit (VRDK) based on the Snapdragon 835 mobile platform. The kit includes 4 cameras:

  • Six degrees of freedom (6DoF) Motion Tracking: Two monochromatic stereo 1MP (1280x800) cameras with fish-eye lenses
  • Eye Tracking: Two monochromatic VGA global shutter cameras with active depth sensing

Older, Snapdragon 820-based VRDK

Wednesday, February 22, 2017

Tessera Becomes Xperi

BusinessWire: Tessera is changing its name to Xperi Corporation (“Xperi”) and its Nasdaq ticker symbol to XPER, effective tomorrow, February 23. This change, which also includes a new corporate logo and brand platform, is a reflection of the company’s expanded capabilities, continued technological innovation and refined vision.

“Changing our name to Xperi is an incredible moment in our history,” said Tom Lacey, CEO of Tessera Holding Corporation. “Xperi represents the combination of DTS, FotoNation, Invensas and Tessera – world-class companies dedicated to creating solutions that enable extraordinary experiences for people around the world. Our new logo and brand identity convey the unlimited possibilities of what our team of approximately 700 employees can create to truly impact the human experience. We are constantly inspired by how people use our technologies in their lives, and that drives us to continue generating ideas and innovation. We cannot wait to show the world what’s next.

OmniVision and Corephotonics Dual-Camera Zoom Reference Design for Smartphones

PRNewswire: OmniVision announces its collaboration with dual-camera technology company Corephotonics in producing a new dual-camera zoom reference design for mobile devices. Combining OmniVision's OV12A10 and OV13880 image sensors with Corephotonics' proprietary zoom and Bokeh algorithms, the reference design brings optical zoom capabilities to smartphone camera applications.

The Corephotonics algorithms fuse the images from the wide-angle and telephoto cameras to deliver optical zoom, increase resolution, improve SNR and enable smoother video transitions (a toy sketch of the wide/tele hand-off follows the feature list below). OmniVision's OV12A10 and OV13880 sensors' benefits include:
  • Slim form factor enabled by reduced module height
  • PureCel Plus 1.25um large pixel and 1.0um small pixel
  • Dual-camera sync: master/slave capability to switch between sensors during zoom operation
  • Low power using unique toggle mode to extend phone battery life
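Corephotonics' fusion algorithms are proprietary, but the basic wide/tele hand-off during zoom can be sketched as follows; the TELE_RATIO value is an assumption, and a real pipeline also blends and fuses the two images around the switch point rather than hard-switching:

```python
import cv2

TELE_RATIO = 2.0   # assumed tele/wide focal-length ratio for this sketch

def digital_zoom(frame, z):
    """Center-crop by factor z and resize back to full resolution."""
    h, w = frame.shape[:2]
    ch, cw = int(h / z), int(w / z)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)

def zoom_frame(wide, tele, z):
    """Toy wide/tele hand-off, not Corephotonics' fusion algorithm."""
    if z >= TELE_RATIO:
        # The tele module natively covers FOV/TELE_RATIO; crop further
        # for zoom factors beyond the optical ratio.
        return digital_zoom(tele, z / TELE_RATIO)
    return digital_zoom(wide, z)
```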

"Corephotonics is widely regarded as a world leader in dual-camera technology," said Will Foote, senior partnership manager, OmniVision. "The rapid expansion of the dual-camera smartphone market gives us the perfect opportunity to combine our companies' expertise. We are pleased to introduce our first joint reference design, and are excited about many more future collaborations based on OmniVision sensors and Corephotonics' algorithm IP."

More about Mobileye 7.4MP Automotive Camera

SeekingAlpha: Mobileye Q4 2016 earnings call transcript has an interesting part about the company's 7.4MP autonomous driving camera platform:

Itay Michaeli - Citi analyst

...how important is the 2019 launch of the ultra-high resolution camera to automakers to be able to perform those Level 2 plus functions? And then if you could also comment on the hardware cost to the automaker of that camera relative to mono and trifocal today?

Amnon Shashua - CTO & Chairman

So, regarding the ultrahigh-definition camera, this is a 7.4 megapixel camera. 2019, we have programs that have a 1.7 megapixel camera. We have programs that have 2 megapixel cameras. These are parking cameras that are used also for autonomous driving, not only for parking. And we have the 7.4 megapixel, in many cases, it replaces the trifocal. So, we have a single mono camera with a 120-degree field of view replacing three cameras. So, this particular camera is going to be more expensive. It’s not the imager that is so much more expensive, it is the lens. Lens is more expensive. So, it’s few – I can’t say exactly how much more expensive, but it is more expensive and this allows – so basically the ultrahigh-definition camera allows to replace the trifocal with a single monocular camera.
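A back-of-envelope check shows why a single high-resolution wide camera can stand in for a trifocal: what matters is angular sampling density, pixels per degree. The horizontal pixel counts below are assumptions for illustration (the transcript gives only 7.4MP and 120 degrees):

```python
# Rough angular sampling density near the image center, in pixels/degree.
def px_per_degree(h_pixels, hfov_deg):
    return h_pixels / hfov_deg

wide_74mp = px_per_degree(3840, 120)   # ~32 px/deg, if ~3840 px across
narrow    = px_per_degree(1280, 40)    # ~32 px/deg for a 40-deg camera
# Comparable sampling density is the rough argument for replacing a
# trifocal's narrower cameras with one high-resolution 120-deg camera.
```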

Thanks to DS for the link!