ROHM’s New High Power 120W Laser Diode for LiDAR: Increasing Measurement Range by Reducing Wavelength Temperature Dependence by 66%
ROHM has developed a high-power laser diode, the RLD90QZW8, ideal for industrial equipment and consumer applications requiring distance measurement and spatial recognition.

In recent years, LiDAR has been increasingly adopted in a wide range of applications that require automation, including AGVs (Automated Guided Vehicles), robot vacuums, and autonomous vehicles, where it is necessary to accurately measure distance and recognize space. In this context, there is a need to improve the performance and output of the laser diodes used as light sources in order to increase detection distance and accuracy.

To meet this demand, ROHM established original patented technology that achieves a narrower emission width, contributing to longer range and higher accuracy in LiDAR applications. In 2019, ROHM released the 25W laser diode RLD90QZW5, followed by the 75W laser diode RLD90QZW3 in 2021. In response to growing market demand for even higher output, ROHM has now developed a new 120W laser diode.

The RLD90QZW8 is a 120W infrared high-output laser diode developed for LiDAR used in distance measurement and spatial recognition in 3D ToF systems. Original device development technology allows ROHM to reduce the temperature dependence of the laser wavelength by 66% compared with general products, to just Δ11.6nm (Ave. 0.10nm/°C). This makes it possible to narrow the bandpass filter while extending the detection range of LiDAR. At the same time, a uniform light intensity of 97% is achieved over the industry's smallest class* of emission width at 270µm, representing a range of 264µm that contributes to higher resolution. Additional features, including high power-to-light conversion efficiency (PCE), enable efficient optical output that contributes to lower power consumption in LiDAR applications.

A variety of design support materials needed to integrate and evaluate the new product is available free of charge on ROHM’s website, facilitating market introduction. To drive laser diodes at the nanosecond-order speeds required for LiDAR applications, ROHM has developed a reference design, available now, that combines ROHM’s 150V EcoGaN™ HEMT and gate drivers.

ROHM has also acquired certification under the IATF 16949 automotive quality management standard for both front-end and back-end processes at its manufacturing facilities. As a result, product development of laser diodes for automotive applications (AEC-Q102 compliant) is underway, with commercialization planned by the end of 2024.

Application Examples
Consumer: robot vacuums, laser rangefinders
Industrial: AGVs (Automated Guided Vehicles), service robots, 3D monitoring systems (sensors for human/object detection), and more

Support Page
A broad range of design data is available on ROHM’s website free of charge, including simulation (SPICE) models, board development data, and application notes on drive circuit design necessary for integration and evaluation, supporting quick market introduction.

Reference Designs
Reference designs for LiDAR incorporating the new product together with ROHM’s 150V EcoGaN™ and high-speed gate driver (BD2311NVX series) are now available on ROHM’s website.

Reference Design Part Nos.
・REFLD002-1 (120W high power laser diode [RLD90QZW8] built in)
・REFLD002-2 (75W high power laser diode [RLD90QZW3] built in)

EcoGaN™ is a trademark or registered trademark of ROHM Co., Ltd.
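The arithmetic behind the Δ11.6nm figure is easy to reproduce. Below is a minimal Python sketch; the 116°C temperature span, the 905nm center wavelength, and the filter margin are illustrative assumptions rather than ROHM's published test conditions.

```python
# Back-of-the-envelope check of the wavelength-drift figures above.
# The 0.10 nm/°C coefficient and the Δ11.6 nm total are from the release;
# the temperature endpoints, center wavelength, and margin are assumptions.
center_nm = 905.0               # typical near-infrared LiDAR wavelength (assumed)
drift_nm_per_c = 0.10           # average temperature coefficient (from the release)
t_min_c, t_max_c = -40.0, 76.0  # illustrative operating range: a 116 °C span

total_drift_nm = drift_nm_per_c * (t_max_c - t_min_c)
print(f"Total wavelength drift: {total_drift_nm:.1f} nm")   # ~11.6 nm

# A receiver bandpass filter must cover the full drift plus some margin, so a
# smaller drift lets the filter be narrower and reject more sunlight noise.
margin_nm = 2.0                 # assumed alignment/tolerance margin per side
print(f"Implied minimum filter width: {total_drift_nm + 2 * margin_nm:.1f} nm")
```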
Online Sales Information
Sales Launch Date: September 2023
Pricing: $30.0/unit (samples, excluding tax)
Online Distributors: DigiKey, Mouser, and Farnell
The product will be offered at other online distributors as they become available.
Target Product: RLD90QZW8-00A

Terminology
LiDAR
Short for Light Detection and Ranging, a sensing method that uses a ToF (Time of Flight) system (comprised of a light source and a ToF or image sensor) to sense ambient conditions.

3D ToF System
ToF is an abbreviation for Time of Flight, a spatial measurement method which, as its name implies, measures the flight time of light from a source. A 3D ToF system uses ToF to perform 3D spatial recognition and distance measurement.

Bandpass Filter
A filter that allows only signals in a specific light wavelength band to pass through. In optical devices, a narrow bandpass filter allows for efficient extraction of light close to the peak wavelength. This minimizes the effects of disturbance light noise such as sunlight, enabling lower power consumption at the same distance or longer range at the same optical output.

IATF 16949
IATF is short for International Automotive Task Force, and IATF 16949 is a quality management standard for the automotive industry. Based on the international standard ISO 9001 with additional sector-specific requirements, compliance with IATF 16949 enables automakers and suppliers to meet international quality standards.

AEC-Q102
AEC stands for Automotive Electronics Council, an organization (comprised of major automotive manufacturers and US electronic component makers) responsible for establishing reliability standards for automotive electronics. Q102 is the standard specifically intended for optical devices.
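The LiDAR and 3D ToF entries above both reduce to the same relation: distance equals the speed of light times the measured round-trip time, divided by two. A minimal sketch; the 100ns round-trip value is purely illustrative.

```python
# Time-of-flight ranging as described in the terminology above:
# distance = (speed of light x round-trip time) / 2.
C = 299_792_458.0          # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to a target given the measured round-trip time."""
    return C * round_trip_s / 2.0

# Example: a 100 ns round trip (illustrative value) corresponds to ~15 m.
print(f"{tof_distance_m(100e-9):.2f} m")   # 14.99 m
```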
Release date: 2023-12-05
EPC GaN eToF Laser Driver IC Enables Higher Density Lidar Systems
Efficient Power Conversion (EPC) has launched the EPC21701, a laser driver that monolithically integrates an 80V, 15A FET with gate driver and 3.3V logic-level input into a single chip for time-of-flight lidar systems used in robotics, surveillance systems, and vacuum cleaners. It is tailored to lidar systems for gesture recognition, time-of-flight (ToF) measurement, robotic vision, and industrial safety.

The EPC21701 uses a 5V supply voltage and is controlled using 3.3V logic. It is capable of very high frequencies, greater than 50MHz, and super-short pulses down to 2ns to modulate laser driving currents up to 15A. Voltage switching time is less than 1ns, and delay time from input to output is less than 3.6ns. The EPC21701 is a single-chip driver plus GaN FET built with EPC’s proprietary GaN IC technology in a chip-scale BGA form factor that measures only 1.7-by-1-by-0.68mm. The wafer-level packaging is small and low-inductance, and it lays out well alongside the laser in the system. With this small form factor and the integration of several functions, the overall solution is 36% smaller on the printed circuit board (PCB) than an equivalent multi-chip discrete implementation.

The 80V EPC21701 complements the ToF driver IC family in chip-scale packages (CSP) that also includes the 40V, 15A EPC21601 and the 40V, 10A EPC21603.

Integrated single-chip devices are easier to design, lay out, and assemble; they save space on the PCB, increase efficiency, and reduce cost. This family of products will enable faster adoption and increased ubiquity of ToF solutions across a wider array of end-user applications.

“This new family of GaN integrated circuits dramatically improves the performance while reducing size and cost for time-of-flight lidar systems,” said Alex Lidow, CEO and co-founder of EPC. “Integrating a GaN FET with driver on one chip generates an extremely powerful and fast IC and reduces size and cost for wider adoption in consumer and industrial applications. With EPC21701 we expand the family to 80V and 15A and will soon extend the family further to 100V and 125A.”

The EPC9172 development board features the EPC21701 eToF laser driver IC and is primarily intended to drive laser diodes with short, high-current pulses. Capabilities include minimum pulse widths of <2ns, 15A peak currents, and a bus voltage rating of 40V.

Designers interested in replacing their silicon MOSFETs with a GaN solution can use the EPC GaN Power Bench’s cross-reference tool to find a suggested replacement based on their unique operating conditions.
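Those nanosecond figures map directly onto ranging precision, since one nanosecond of round-trip time corresponds to about 15cm of range. A small sketch of the conversion; the 2ns pulse width is taken from the article, the rest is basic geometry.

```python
# Why nanosecond-order switching matters for ToF lidar: each nanosecond of
# round-trip time corresponds to ~15 cm of range.
C = 299_792_458.0   # speed of light, m/s

def range_per_ns_cm() -> float:
    """Range equivalent of 1 ns of round-trip time, in centimeters."""
    return C * 1e-9 / 2.0 * 100.0

print(f"1 ns of round trip = {range_per_ns_cm():.1f} cm of range")  # ~15.0 cm

# A 2 ns pulse (the EPC21701's minimum, per the article) therefore spans
# roughly 30 cm of range, which bounds the resolution of a single pulse.
print(f"2 ns pulse ~ {2 * range_per_ns_cm():.0f} cm range extent")
```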
Release date: 2023-01-28
2019 AV Sensors: Vision, Radar, Lidar, iDAR
Today, there’s no shortage of questions for executives and engineers at tech and auto companies grappling with the technology and business roadmap of automated vehicles (AVs). Three big unanswered questions, however, stand out.

Egil Juliussen, director of research for infotainment and advanced driver-assistance systems (ADAS) for automotive at IHS Markit, laid out the following as the “unanswered questions” that will dog the auto industry in 2019:

・Do we really need lidars?
・Are tech/auto companies really ready to collaborate in pursuit of a “network effect” for advancements in driving software?
・Will the industry solve the L2 to L3 handover problems?

Industry observers certainly see a new round of AV partnerships percolating among tech companies, tier ones, and car OEMs. And several companies are trying out new technologies, such as ADAM, on the L2 to L3 handover quandary.

Speaking of that unenviable dilemma for human drivers when machines suddenly give control back to them, “expect the resurgence of interest in driver monitoring systems among tier ones and OEMs at the 2019 Consumer Electronics Show in Las Vegas next month,” Colin Barnden, Semicast Research lead analyst, told us.

But will ADAS cars and robocars really need lidars? Juliussen told us, “We are beginning to hear this a lot.” The issue follows the emergence of digital imaging radars “that can do a lot more than they used to,” he explained.

AEye to fuse camera/lidar data
Against this backdrop, a startup called AEye, based in Pleasanton, Calif., announced last week its first commercial product, “iDAR,” a solid-state lidar fused with an HD camera, for the ADAS/AV market.

The idea of autonomous vehicles without lidar has been floating around the tech community for almost a year. The proposition is tantalizing because many car OEMs regard lidars as too costly, and they agree that the lidar technology landscape is far from settled.

Although nobody is saying that a “lidar-free future” is imminent, many imaging radar technology developers discuss it as one of their potential goals. Lars Reger, NXP Semiconductors’ CTO, for example, told us in November that the company hopes to prove it’s possible.

AEye, however, enters the is-lidar-necessary debate from another angle. The startup believes that car OEMs are reluctant to use current-generation lidars because today's solutions depend on an array of independent sensors that collectively produce a tremendous amount of data. “This requires lengthy processing time and massive computing power to collect and assemble data sets by aligning, analyzing, correcting, down-sampling, and translating them into actionable information that can be used to safely guide the vehicle,” explained AEye.

But what if AEye used artificial intelligence to collect only the data that matters to an AV’s path planning, instead of assigning every pixel the same priority? This starting point inspired AEye to develop iDAR, Stephen Lambright, AEye’s vice president of marketing, explained to EE Times.

Indeed, AEye’s iDAR is “deeply rooted in the technologies originally developed for the defense industry,” according to Lambright. The startup’s CEO, Luis Dussan, previously worked on designing surveillance, reconnaissance, and defense systems for fighter jets.
He formed AEye “to deliver military-grade performance in autonomous cars.”

Driving AEye’s iDAR development were three principles Dussan learned from the perception systems on military aircraft, according to Lambright: 1) never miss anything; 2) understand that objects are not created equal and require different attention; and 3) do everything in real time.

In short, the goal of iDAR was to develop a sensor fusion system with “no need to waste computing cycles,” said Aravind Ratnam, AEye’s vice president of products.

Building blocks of iDAR include a 1550nm solid-state MEMS lidar, a low-light HD camera, and embedded AI. The system is designed to combine the 2D camera’s pixels (RGB) and the 3D lidar’s data voxels (XYZ) to provide “a new real-time sensor data type” that delivers more accurate, longer-range, and more intelligent information faster to an AV’s path-planning system, according to Ratnam.

Notably, what AEye’s iDAR offers is not post-scan fusion of a separate camera and lidar system. By developing an intelligent artificial perception system that physically fuses a solid-state lidar with a high-resolution camera, AEye explains, its iDAR “creates a new data type called dynamic vixels.” By capturing x, y, z, r, g, b data, AEye says, dynamic vixels “biomimic” the data structure of the human visual cortex.

‘Combiner’ SoC
The new iDAR system, called AE110 and announced last week, is a fourth-generation prototype. Included in the system, according to Ratnam, is a “combiner” SoC based on Xilinx’s Zynq SoC. Zynq integrates an ARM-based processor with an FPGA. It’s designed to enable key analytics and hardware acceleration while integrating CPU, DSP, ASSP, and mixed-signal functionality on one device. In 2019, AEye is planning to design its own ASIC for the combiner SoC, he added.

‘Vision + radar’ or ‘vision + lidar’?
AEye is promoting its combined vision/lidar sensor system while a few developers of high-precision mmWave radar chips advocate vision/radar solutions.

Mike Demler, senior analyst at the Linley Group, called AEye’s camera-lidar fusion “an interesting approach.” Acknowledging that AEye’s implementation “may have some unique features,” Demler cautioned that AEye “isn’t the only company doing that.” He noted that Continental also sells a camera-lidar combo unit. Presumably, though, Continental is combining data from two separate sensors after pre-processing.

As Demler sees it, the advantage of AEye’s approach would be “in the sensor-fusion software.” In essence, “treating the camera/lidar image sensors as an integrated unit could speed identification of regions of interest, as they claim,” he noted. “But beyond that, all the strengths/weaknesses of the two sensors still apply.”

Demler noted that AEye is using a MEMS lidar but doesn’t appear to disclose its spatial resolution. That could be a weakness compared to a scanning lidar like Velodyne's, he speculated. “The camera sensor has the highest resolution, but it can’t handle extremely bright or dark scenes, and it’s still limited by dirt and precipitation that can block the lens. So you can’t rely on that for your spatial resolution.
Likewise, the lidar doesn’t function as well in precipitation as radar, so you can’t rely on that for object detection, and most lidars don’t measure velocity.”

Asked about AEye, Phil Magney, VSI Labs founder, disclosed that his firm was hired to validate the lidar's performance for distance and scan rate.

Magney stressed, “The iDAR sensor is unique in that it couples a camera with a lidar and fuses the data before the combined values are ingested by the central computer.” In his opinion, “this is really edge fusion, as the device is fusing the raw lidar data with the camera data before any classification occurs.” Magney added, “We also know that the device has the capacity to drill down to a subject of interest, meaning it would not need to process an entire point-cloud scene.”

Magney acknowledged that AEye’s iDAR device has “the potential to better classify because you have the fused camera data to work with.” He noted, “iDAR is developing classification algorithms that apply to the fused data set.”

AEye’s so-called dynamic vixels create content that is, in theory, “much richer than either cameras or lidars can produce by themselves,” said Magney. But he cautioned: “Basically, every pixel has a point and every point has a pixel, but keep in mind the resolution of the camera is much higher than the lidar, so your ratio of pixels to points is not one to one.”

Magney acknowledged, “When comparing iDAR to radar, it is possible to eliminate the need for radar, because lidar and radar are both ranging instruments.” He noted, “If you could have enough confidence in the lidar’s ability to give you proper depth perception, and it can track the velocity of targets, then this is possible. It should be mentioned that iDAR has twice the scan rate (100 Hz) of most commercial lidar products, another advantage of their device.”

On the other hand, because more ADAS-featured cars are poised to roll out well before fully autonomous vehicles, radar appears to hold the advantage over lidars (or iDAR) in the ADAS market.

“Radar is going to work better in inclement weather and is thus best suited for ADAS, where you need safety systems running even when the conditions do not suit autonomous driving,” noted Magney. “But radar by itself is still limited in terms of what it can classify. This is determined by the firmware in the radar device. We understand that radar is getting better at classification, and the companies are purporting some richer capabilities. There are some radar startups that have some pretty impressive claims.”

VSI recently validated the test and methodology in a performance test of the AEye iDAR sensor. Magney said the firm validated that the lidar was able to detect a truck on the road at a distance of 1 kilometer. VSI also confirmed the 100 Hz scan rate, Magney said. “We did not validate that this sensor will lead to better performance or safety, but we did validate that it had enough intelligence to identify an object at 1,000 meters,” he added.

Asked about radars vs. lidars, Demler summed it up: for high-level autonomous vehicles, no company at this point is claiming lidar isn’t necessary. He said, “Sure, you can build a self-driving car without it, but that doesn’t mean it functions as well under all conditions or is as safe as a camera/lidar/radar system.”

In Demler’s opinion, AEye's iDAR doesn’t replace radar. And TI's mmWave imaging radar doesn’t replace lidar, Demler explained.
“Most AV developers are using all three, and in fact they are using other sensors as well," Demler said. "Ultrasonic sensors have their place, as do infrared sensors.” He said: “Safety and redundancy demand backups, and multiple sensor types are required because no one type works best under all conditions.”

Who will make iDARs?
Last week, AEye also disclosed the second close of its Series B financing, which will take the company’s total funding beyond $60 million. AEye said that its Series B investors include automotive OEMs, tier ones, and tier twos, as well as strategic investors Hella Ventures, Subaru-SBI Innovation Fund, LG Electronics, and SK Hynix.

AEye’s Lambright pointed out the significance of Hella Ventures and LG Electronics joining the Series B round. AEye is counting on tier one partners like these to bring up the volume of iDAR production and lower the unit cost. Lambright estimated that the initial cost of iDAR in 2021 will be lower than $1,000 per unit.
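AEye has not published the internal layout of a dynamic vixel, but the description above, each point carrying lidar geometry (x, y, z) plus camera color (r, g, b), suggests a simple fused record. A hypothetical sketch follows; the type name, fields, and fusion helper are illustrative and are not AEye's actual format or API.

```python
# Hypothetical sketch of a fused camera/lidar point along the lines of
# AEye's "dynamic vixel" (x, y, z geometry plus r, g, b color).
# Field names and the fusion step are illustrative, not AEye's actual format.
from dataclasses import dataclass

@dataclass
class Vixel:
    x: float  # lidar position, meters
    y: float
    z: float
    r: int    # camera color, 0-255
    g: int
    b: int

def fuse(lidar_point, camera_pixel) -> Vixel:
    """Attach the color of the camera pixel that a lidar return projects onto.
    A real system needs calibrated extrinsics for this projection; and, as the
    article notes, camera resolution exceeds lidar resolution, so many pixels
    map to each lidar point."""
    (x, y, z), (r, g, b) = lidar_point, camera_pixel
    return Vixel(x, y, z, r, g, b)

print(fuse((12.0, -1.5, 0.8), (200, 180, 40)))
```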
Release date: 2018-12-28
BMW to Use Innoviz Lidar for Autonomous Cars
Innoviz Technologies of Israel is to supply its solid-state lidar sensing to BMW Group for its autonomous vehicle production platforms. This is one of the first serial production contracts for solid-state lidar, according to a press statement from Canadian automotive supplier Magna, a collaborator with and strategic investor in Innoviz.

Magna has been working with Innoviz Technologies to integrate automotive-grade, solid-state lidar (light detection and ranging) into its autonomous driving platform to support up to L4 and L5 self-driving systems across multiple vehicle platforms. The solid-state, high-resolution lidar technology generates a 3D point cloud of the vehicle’s surroundings in real time, even in challenging settings such as direct sunlight, varying weather conditions, and multi-lidar environments. In addition, the solution provides a complete computer vision software stack and algorithms to turn 3D vision into critical driving insights.

"BMW is setting a high standard in autonomous vehicle development, and their vote of confidence in our lidar demonstrates how advanced our technology is," said Omer Keilaf, co-founder and CEO of Innoviz.

Lidar Fills a Necessary Gap for Higher-Level Autonomous Sensing
ABI Research forecasts that 8 million consumer vehicles shipping in 2025 will feature SAE Level 3 and 4 technologies, where drivers will still be necessary but are able to completely shift safety-critical functions to the vehicle under certain conditions, or SAE Level 5 technology, where no driver will be required at all. This, in turn, will help drive shipments of the vital lidar sensors that underpin the technology. As many as 36 million lidar units are expected to ship in 2025, corresponding to a market value of $7.2 billion.

"With the rapid development and deployment of various advanced driver-assistance system (ADAS) packages by OEMs, higher-level automation represents the next suitable step," said Shiv Patel, a research analyst at ABI Research. "The primary functional sensor gap between today’s ADAS and higher-level autonomous vehicles will be filled with the addition of lidar, which will help to provide reliable obstacle detection and simultaneous location and mapping (SLAM)."

ABI says that for conditional and high-level automation applications within the consumer market (SAE Level 3 and Level 4), solid-state lidar solutions from companies such as Innoviz and LeddarTech have emerged as the lidar form factor that will not only help enable robust sensing on autonomous vehicles but also, more importantly, satisfy the stringent pricing requirements set by OEMs. These units are expected to reach price points of $200 and $750 per unit by 2020 for low- and high-end solutions, respectively. At these prices, even with multiple sensors around the car, solid-state lidar represents a highly feasible option for OEMs on premium models.

In fully autonomous applications (SAE Level 5) such as autonomous ridesharing, where the aim is to eliminate the driver completely, much more expensive, traditional mechanical lidar solutions, with their higher resolution for robust sensing, remain the go-to option.

Players targeting the "robo-taxi" use case aren’t too concerned with vehicle ASPs; their short-term "land-grab" objective is to maximize their share in the smart mobility market as it emerges. In these market conditions, it is purely a race to be the first to eliminate the driver, who represents the single biggest cost for these companies.
Although the performance of solid-state lidar continues to improve, mechanical lidar, as part of a broader suite of other sensor types, is currently seen as the only short-term option to enable full automation as soon as possible for these aggressive implementers.
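As a side note, the ABI forecast above is internally consistent: 36 million units against a $7.2 billion market value implies a $200 average selling price, the same figure quoted as the 2020 low-end price point. A quick check:

```python
# Sanity check on the ABI Research forecast quoted above.
units_2025 = 36_000_000          # forecast lidar unit shipments in 2025
market_usd = 7_200_000_000       # forecast market value in 2025

implied_asp = market_usd / units_2025
print(f"Implied average selling price: ${implied_asp:.0f}/unit")  # $200
# This matches the $200 low-end solid-state price point the article cites.
```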
Release date: 2018-05-02
Who’s the Lidar IP Leader?
Among the host of sensors nowadays loaded into autonomous vehicles, lidar (light detection and ranging) projects as both critical and lucrative. As the automotive industry girds for a wave of autonomous car rollouts, Pierre Cambou, activity leader for imaging and sensors at market-research firm Yole Développement (Lyon, France), said he can’t imagine a robotic vehicle without lidars. “You need a lidar,” he noted.

Yole forecasts that revenue generated by lidars will reach $1.6 billion in 2022 and balloon to $31.5 billion by 2032.

However, the technologies that drive lidars are still in flux, with new developments still in the pipeline. As Akhilesh Kona, senior analyst for automotive electronics and semiconductors at IHS Markit, previously told EE Times, lidar technology suppliers continue to improve durability, size, and cost by developing a variety of beam-steering technologies that range from mechanical to MEMS and solid-state.

As the race for better lidar heats up, the inevitable question is: who’s the lidar leader? One way to find out is to look at the lidar-related patents filed. Knowmade, a Yole group company specializing in IP analysis and patent assessment, recently examined lidar devices and systems for automotive. Knowmade identified more than 6,480 lidar-related patent families for automotive.

Although this patent activity began as early as the late 1960s, the number of patent publications has exploded in the last several years. In particular, between 2007 and 2017, lidar patents grew at a compound annual rate of 21 percent.

In the early days, companies such as Bosch, Denso, and Valeo dominated patent filings related to automotive lidars. Paul Leclaire, technology and patent analyst at Knowmade, describes these as “historical IP players.” Their patents are mostly related to “ADAS applications, based on incremental technologies, and with limited amount of white spaces,” he observed. He explained that an area with a limited amount of “white space” is one in which it is difficult to file a patent with claims that do not overlap other patents’ claims: “Thus, new patent applications have less chance to be granted.”

However, those historical IP players’ activities alone cannot explain the recent escalation in lidar IP, Leclaire said. The newcomers in lidar break down into several categories.

Semiconductor companies’ lidar IP
The first group consists of semiconductor companies such as Qualcomm, LG Innotek, Ricoh, and Texas Instruments. Their contributions are “reducing the size of lidars” and “increasing the speed with high pulse rate” by using non-scanning technologies, Leclaire explained. These players’ patents offer hints as to how beam steering has become the new preferred mode, and how the market is beginning to see the emergence of compounds (detector, laser) specifically dedicated to lidars, he added.

Pure IP players
Another set of newcomers includes Quanergy, Velodyne, Luminar, and LeddarTech. Leclaire calls them “pure IP players” dedicated to lidar development. Their patent publications focus on highly specific patented technology that leads to product assertion and its application.

Notable is the emergence of lidar IP players in China, including LeiShen, Robosense, Hesai, and Bowei Sensor Tech. “The main Chinese industrial players are IP newcomers that have entered the IP landscape only two or three years ago. The vast majority of their patent applications are still pending and have not been extended to countries other than China,” Leclaire told us.
“Their IP portfolios are, however, related to their lidar products that are currently on the market.”

Asked if there are other Chinese lidar IP players, he said that the four mentioned are the most notable; the other Chinese players are mostly academic.

Robotic vehicle vendors
The last group of newcomers to lidar is the autonomous carmakers themselves. They are using lidar as “tools to provide complex embedded sensor systems,” said Leclaire. Active in the IP landscape are Google, Waymo, Uber, Zoox, and Faraday Future. Chinese giants such as Baidu and Chery also hold lidar IP.

For most robotic vehicle vendors, lidar is a central component of their patented sensing solutions. Many of their patents are related to methods and processes of “computing,” according to Knowmade.

Asked about promising technologies likely to accelerate the development of lidars, Leclaire pointed out two: new light sources such as the laser and the VCSEL (vertical-cavity surface-emitting laser), and beamforming technologies. The use of the VCSEL as a novel type of laser source has advantages that include “small angular divergence, VCSEL array, high output power,” he noted. Similarly, methods to perform beamforming and beam-steering operations are more and more often described in patents related to solid-state lidar, Leclaire observed.

Asked if Knowmade can identify who dominates in IP related to new light sources or beamforming, Leclaire said he can’t; his team’s analysis focuses only on lidar devices and systems for automotive. “We have analyzed neither the IP landscape of lidar components like laser, VCSEL, photodetectors, SPAD (single-photon avalanche detector), APD (avalanche photodiodes), nor the IP landscape related to beamforming technologies.”
Release date: 2018-04-24
LiDAR Goes Back To The Future
LiDAR is emerging as an increasingly important piece of the enabling technology in autonomous driving, along with advanced computer vision and radar sensor chips. But LiDAR systems are also finding their way into a variety of other applications, including industrial automation and robotics, and unmanned aerial vehicles.

Advanced mapping is another rapidly growing market for LiDAR, which is not entirely surprising considering this was the original application for the technology. But work is progressing in this area as mapping grows more powerful and sophisticated thanks to new developments in LiDAR technology. In fact, LiDAR was used to locate a vast Mayan metropolis in the jungles of Guatemala, which included thousands of structures and extensive causeways for commerce that were hidden from view for centuries.

These expanding use cases, along with continued form-factor improvements in the technology, bode well for the LiDAR market. ABI Research forecasts the market for automotive LiDAR alone will be worth almost $13 billion by 2027.

And with LiDAR growth comes growth in related markets. Yole Développement predicts the market for gallium nitride power devices, for example, will enjoy a compound annual growth rate of 79% over the next five years, reaching $460 million by the end of 2022. Power GaN technology is well suited to high-performance and high-frequency uses, the market research firm says. “LiDAR, wireless power, and envelope tracking are high-end low/medium voltage applications, and GaN is the only existing technology able to meet their requirements,” notes Yole’s Ana Villamor.

So while leading LiDAR vendors are pursuing their vision of autonomous vehicles, they are shipping products today for drones, industrial systems, and mapping, in addition to developing advanced driver-assistance systems.

Autonomous and assisted driving
Still, the biggest opportunity for LiDAR is automotive, and work is underway to reduce the number of moving parts and the cost of these systems with solid-state designs.

“To me, LiDAR is a really interesting system because there are so many interesting components in there that all have relevance to semiconductors,” says Jeff Miller, product strategist for Mentor, a Siemens business. “There’s some very interesting work being done in the silicon photonics area to try to make solid-state LiDARs. And there’s some very interesting work in the more traditional LiDAR space in terms of how do I get power to the laser, how do I control my laser pulses, and very interesting power transistor designs. And then there’s the data processing angle on this. What do I do with this enormous volume of data I’m producing in these 3D point clouds that come off the LiDAR sensor? How do I make any kind of sense of that? Ultimately, the value of this is providing information about what my car can drive through and what my car can’t drive through. That’s ultimately what we’re after with these sensors, because the primary end-vision product for these things is the driverless car.”

Early automotive LiDAR systems were spinning domes on the roofs of prototype cars. That won’t fly in vehicles for the consumer market, of course. “Commercialization is already starting to happen,” Miller observes. “It needs to look like a regular car.”

And that’s where things really start to get interesting for LiDAR.
“The race is on for a low-cost, small-form-factor, solid-state LiDAR,” says Ian Dennison, senior group director of research and development for the Custom and IC Packaging Group at Cadence, who notes that the price tag needs to drop to $200 or less. “There is a bit of a gold rush going on. There are plenty of startups, plenty of investment. Part of the excitement is that there’s quite a variety of silicon design fabrics in the mix. You’ve got CMOS, for sure, but you’ve also got silicon photonics, you’ve got silicon MEMS. Different players have different perspectives on what the technology mix should be.”

Dennison adds that LiDAR offers excellent depth resolution, “a great benefit for the characterization of an autonomous vehicle. It’s the real killer feature for LiDAR.”

What’s different
The growth in LiDAR is also boosting demand for high-current, high-frequency, low-resistance, low-capacitance power transistors, made with GaN and other exotic semiconductor materials.

“We are seeing a trend toward more integration to bring the higher performance attained in III/V materials into traditionally silicon systems,” says Chris Cone, a product marketing manager at Mentor. “LiDAR design is highly sensitive to geometry and requires many of the same capabilities developed for silicon photonic IC design, including the ability to generate design components based on curvilinear objects, the ability to assemble design components with multiple waveguide types along with the required tapers and transitions, and multi-domain circuit simulation with proper modeling of optical and electrical behavior. Very importantly, physical verification needs to be photonic-structure-aware, as standard verification is prone to produce large numbers of false errors. Using standard DRC checking techniques will generate large numbers of false positives that can’t be adequately reviewed. Designers then have no choice but to waive large sets of checks, which allows real errors to make it to mask.”

As a result, tools vendors are looking at different methodologies. “LiDAR requires new design considerations, which will require design tools to be flexible enough to account for new methodologies,” Cone says. “But you can apply the same well-known development milestones to LiDAR design – design completion, design analysis through simulation, and layout verification, including DRC, LVS, and post-layout extraction.”

Some experts don’t characterize MEMS devices as solid-state components because they have moving parts, and some LiDAR vendors have turned to phased arrays instead of MEMS.

“Different people take different approaches,” says Cadence’s Dennison. “Silicon photonics is an excellent fit for a light-based application like LiDAR. If you can use the photonics to manipulate the light, you’re already getting onto a solution that isn’t trying to keep up with the frequencies of light. It’s using light natively.”

Mixing MEMS with photonics and CMOS chips presents a co-design problem, he notes. “It’s blended technologies.”

No shortage of investors
As prospects for LiDAR increase, so does funding for the technology. Three of the leading vendors of LiDAR systems are all based in Silicon Valley – Velodyne LiDAR, Quanergy Systems, and Cepton Technologies. All have been expanding their operations in the past year.

While these companies are maturing and readying LiDAR systems for the big automotive manufacturers and their Tier 1 suppliers, they are being pursued by well-funded startups.
Aurora Innovation just picked up $90 million in Series A funding, co-led by Greylock Partners and Index Ventures. May Mobility of Ann Arbor, Mich., received $11.5 million in seed funding from BMW i Ventures and Toyota AI Ventures. Seattle-based SEEVA Technologies got $2 million from Revolution’s Rise of the Rest Seed Fund and other investors. Nuro, which is developing a self-driving delivery vehicle, recently disclosed receiving $92 million in a Series A round from Banyan Capital and Greylock Partners. And LeddarTech of Quebec City, Quebec, Canada, last year raised $101 million in Series C funding led by Osram, joined by Delphi Automotive, Magneti Marelli, and Integrated Device Technology.

“We see the market as asking for two things,” says Anand Gopalan, CTO of San Jose, Calif.-based Velodyne: high-performance sensing for robotaxis and a smaller-form-factor sensor for ADAS applications. But he also sees industrial applications as a fast-growing market for LiDAR systems, along with drones and mapping.

“The mapping market continues to evolve and grow,” he says. “Our focus is shifting from simply providing high-definition maps to providing high-definition maps with a view of enabling autonomous driving. We continue to see that in the mapping market, having a 360° field of view combined with a really accurate LiDAR is pretty important, obviously, to take a good high-definition map. We continue to see a lot of growth in that space. People are finding a lot of interesting uses for LiDAR across the board, like area mapping. It’s exciting for LiDAR as a domain, in general, to see all the new applications. Definitely, in the ADAS space, LiDAR has become accepted as a piece of modality for most ADAS systems. Electric vehicle and ADAS programs around the world all are using LiDAR.”

LiDAR is particularly useful when there is snow on the roadway covering up the lane markings, making it difficult for computer vision to work.

One of the big challenges in developing automotive-grade LiDAR is the environmental requirements, such as operating in temperatures of 105°C, and up to 150°C, along with conditions of -40°C, according to the Velodyne CTO. That will be critical for automotive applications, which in turn will drive other applications and capabilities.

“As you start seeing this first wave of LiDAR being deployed, you will also start seeing more and more smarts or intelligence being baked into these sensors – providing high levels of functionality, like localization or object tracking. The amount of compute in the LiDAR will also constantly increase. That’s the other domain where you will see more work, more interesting products come out in the next couple of years, in the domain of embedding intelligence within the sensor,” Gopalan says.

Quanergy CEO Louay Eldada also is seeing a bump up in the LiDAR market. “Things are becoming real. It’s becoming easier to cut through the noise,” he says. “We are getting some large contracts in automotive security, industrial automation, as well as mapping, terrestrial and aerial mapping from drones.”

Quanergy builds solid-state LiDAR products employing optical phased arrays. The goal is to get the price of a LiDAR system-on-a-chip device down to $100, according to Eldada. “LiDAR is the primary sensor you must have for Level 4. We want to support the automotive industry.”

There are an estimated 50 companies in the LiDAR business, and that field will be culled in the years to come. But in the short term, new companies are still cropping up.
Cepton Technologies was established in 2016. Mark McCord, the startup’s vice president of engineering, says his team is working on longer-range sensors for automotive applications, and the company expects to turn out automotive-grade products in the second half of this year.

Wei Wei, Cepton’s director of business development, says that of the 50 companies in the LiDAR market, fewer than five can deliver LiDAR systems with a range of 200 or more meters. He sees more carmakers coming out of the prototype stage with their most advanced vehicles by the end of this year.

Conclusion
The LiDAR market is still in a nascent stage, as far as automotive electronics are concerned. But the market for this technology is growing well beyond a single vertical. Mapping has returned as an important application for LiDAR technology. The sensor’s use in drones and robots could be lucrative for some companies as well.

Considering this technology was relatively obscure several years ago, that’s a big change. And as the price continues to drop, it will show up in even more applications. For LiDAR, the real growth phase is still to come.
Release date: 2018-03-16
Prophesee Foresees Event-Driven CIS, Lidar
In the fast-growing markets for factory automation, IoT, and autonomous vehicles, CMOS image sensors appear destined for a role capturing data not for human consumption but for machines that need to make sense of the world.

CMOS image sensors “are becoming more about sensing rather than imaging,” said Pierre Cambou, activity leader, MEMS & Imaging at Yole Développement. The Lyon, France-based market research and technology analysis company boldly predicts that by 2030, 50% of CMOS image sensors will serve “sensing” devices.

Paris-based Prophesee SA (formerly known as Chronocam) styles itself as a frontrunner in that revolution. A designer of advanced neuromorphic vision systems, it advocates an event-based approach to sensing and processing. Prophesee’s bio-inspired vision technology has been deemed too radically different from conventional machine vision, and perilously “ahead of its time.” But Luca Verre, co-founder and CEO of Prophesee, told us that this is no longer the case.

In a one-on-one interview here, Verre said that his company has secured its Series B-plus funding (the startup has raised $40 million in the last three years). It now has a partnership deal with a large, unnamed consumer electronics company. Most importantly, Prophesee is now advancing its neuromorphic vision system from the usual technology concept pitch to promoting a reference system for tinkering by developers.

Prophesee’s first reference design, available in VGA resolution, consists of Prophesee’s Asynchronous Time-Based Image Sensor (ATIS) chip and software algorithms. The ASIC will be manufactured by a foundry partner in Israel, said Verre, most likely Tower Jazz.

The company declined to detail the ASIC and the specification of the reference design, and said that it is planning a formal product announcement in several weeks.

Nonetheless, the startup reached a milestone when the reference design proved able to offer system designers the opportunity to see and experience just what an ATIS can accomplish in data sensing. The ATIS is characterized by high temporal resolution, a low data rate, high dynamic range, and low power consumption, said Prophesee.

Cameras are bottlenecks
Makers of machine-vision cameras, whether for smart factories, IoT, or autonomous vehicles, have begun to heed the event-based approach promoted by Prophesee co-founders Ryad Benosman and Christoph Posch.

With all of the detailed visual information that traditional cameras can capture, “the camera has become a technology bottleneck,” said Verre. Unquestionably, cameras are the most powerful sensing devices. Yet in automation systems, surveillance cameras, or highly automated vehicles, streams of visual data can slow down the processing.

Consider self-driving cars, said Verre. The central processing system inside the vehicle is bombarded with data from cameras, lidars, radars, and other sources. The key to managing this overload is figuring out how best to “reduce the amount of raw data” streamed from sensors. The sensors should only capture data that matters to “a region of interest,” said Verre.

As Prophesee explained in past interviews with EE Times, the company’s event-driven vision sensors are inspired by biology. This perception derives from the co-founders’ research on how the human eye and brain work.
Ryad Benosman, Prophesee’s co-founder, told us that human eyes and brains “do not record the visual information based on a series of frames.” Biology is much more sophisticated. “Humans capture the stuff of interest — spatial and temporal changes — and send that information to the brain very efficiently,” he said. That’s principally what Prophesee’s ATIS does.

Noting that the ATIS is not bound by frames, Verre explained, “Our technology will not have to miss important events that might have happened between frames.”

In short, what Prophesee’s ATIS offers is everything that frame-based image sensors are not. In the view of another co-founder, Christoph Posch, “frame-based methodology results in redundancy in the recorded data, which triggers higher power consumption.” He said, “Results include inefficient data rates and inflated storage volume. Frame-based video, at 30 or 60 frames per second, or even a much higher rate, causes a catastrophe in image capturing.”

Event-driven approach for lidars
Verre last week disclosed to us that Prophesee is exploring the possibility that its event-driven approach can apply to other sensors such as lidars and radars. Verre asked: “What if we can steer lidars to capture data focused on only what’s relevant and just the region of interest?” If it can be done, it will not only speed up data acquisition but also reduce the data volume that needs processing.

Prophesee is currently “evaluating” the idea, said Verre, cautioning that it will take “some months” before the company can reach a conclusion. But he added, “We’re quite confident that we can pull it off.”

Asked about Prophesee’s new idea of extending the event-driven approach to other sensors, Yole Développement’s Cambou told us, “Merging the advantages of an event-based camera with a lidar (which offers the ‘Z’ information) is extremely interesting.”

Noting that the problems with traditional lidars are tied to limited resolution — “relatively less than typical high-end industrial cameras” — and the speed of analysis, Cambou said that the event-driven approach can help improve lidars, “especially for fast and close-by events, such as a pedestrian appearing in front of an autonomous car.”

The downside is that lidar hardware would have to be changed, he added. More importantly, Prophesee needs strong buy-in from lidar companies for this event-driven approach.

Cambou said, “Sure, this is always the problem for a technology startup.” He pointed out that Mobileye needed lead customers such as Volvo and Tesla before its technology went mainstream and became broadly accepted, and Movidius, now an Intel company, needed DJI to become successful. “Prophesee will need a strong partner in order to have its solution largely adopted,” said Cambou. “Given the market drivers in the realm of robotic vehicles (safety first, technology-driven, not so cost-conscious), this should be possible.”

Although Cambou expressed concern about a large player such as Google depending on a small startup for its technology, he brushed it off by noting that the small volumes involved make this less of an issue.

ISSCC demo
While Prophesee is not forthcoming with details of its reference design, it is not vaporware. At the International Solid-State Circuits Conference (ISSCC) last month, Prophesee was among the companies invited to present their technologies at an Industry Showcase session.
The startup attached a high-speed VGA camera unit (based on its reference design) to a guitar, demonstrating its Asynchronous Time-Based Image Sensor’s ability to measure and visualize invisible vibrations of guitar strings in real time at frequencies up to 1.5 kHz.

Prophesee’s market opportunities
Prophesee expects the company’s reference design to find its first home in machine-vision applications on the smart factory floor. Verre is counting on event-driven vision systems to become Prophesee’s first revenue-generating commercial product.

But why smart factories? Robotic system builders using machine vision worry about three things, said Verre. “First, they want to decrease machines’ downtime via predictive analysis. Second, they must ensure safety for workers as co-bots increase. Third, they are eager to increase the speed of production by accelerating the cadence of what machines can detect.”

He added, “Our technology can offer high temporal resolution at great precision,” making it ideal for “predictive maintenance — detecting abnormal behavior, area monitoring, and making systems run fast.”

No frames, no clock
Prophesee’s ambition extends to IoT. “Our event-driven image sensors are always on, featuring no clock, and running on ultra-low power,” said Verre.

Last summer, a study by LDV Capital, a VC firm that invests in visual technologies such as computer vision, predicted that 45 billion cameras will be watching the world by 2022. Many will be ultra-low-power streaming cameras doing surveillance in retail spaces and smart buildings, noted Verre. With that much more data to process, Prophesee’s event-driven sensors can offer increased value, he added.

According to Prophesee, the company’s ATIS offers not only extremely fast vision processing but also a high dynamic range of more than 120 dB. It is very power-efficient, the company added, operating at less than 10 mW.

ATIS for redundancy?
The startup today sees ADAS and highly automated vehicles as the default opportunities for its sensors. Until last year, said Verre, many autonomous vehicle technology suppliers, including Prophesee, were concerned that the overhyped robotic vehicle market was a bubble destined to pop. He said, “But obviously that did not happen. Instead, we are seeing the acceleration of autonomous vehicle development and more investment pouring into the field.”

As Euro NCAP updates its reward system for 2021 and 2022 vehicles, prompting OEMs to include features like autonomous emergency braking and vision enhancement systems, many traditional tier ones and OEMs are “making decisions this year” about what to incorporate into their ADAS/AV cars, explained Verre.

Of course, Prophesee’s Asynchronous Time-Based Image Sensors will require changes in much of the fundamental data that an image sensor must capture. Furthermore, to exploit event-driven sensors, users will need a new math model — new algorithms — for machine vision. In this circumstance, Verre believes that OEMs are likely to use Prophesee’s ATIS as a redundancy.
“We provide temporal resolution that conventional image sensors can’t provide.”

Alternatively, “our system can be used as a replacement for some lidars in ADAS,” added Verre, “as we can capture relevant information in a more efficient manner.”

More pre-processing on the edge
Speaking of recent imaging developments, Verre said, “The industry’s trend is to do a lot more pre-processing on the edge.” A good example is Sony’s three-layer stacked imaging sensor, he noted.

Sony’s CIS chip consists of three stacked dies, with a 40-nm logic substrate at the bottom. On top of the logic die is DRAM, placed flip-chip-style, facing down toward the logic. At the very top are 90-nm backside-illuminated pixels.

In contrast, conventional non-stacked CIS chips are designed to collect signal data from the pixels and send it through the logic circuit and out through the interface serially. This inherently restricts CIS chip speed to the output speed of the interface, which in turn holds pixel reading speed to the same level. The new three-layer stacked CIS is designed to speed up such pre-processing operations.

Does this mean that if Prophesee wants to go mainstream, it must be able to offer something similar to Sony’s pixel/DRAM/logic three-layer stacked CIS?

Yole’s Cambou said, “Definitely, in the future Prophesee will require 3D semiconductor technology to shrink its pixel.” However, he added, “Nevertheless, I think they are able to do a 15-μm pixel without a 3D stack, which should be okay to start doing business.”

He added, “As a fabless company, it will depend on the availability of open foundries for future products.” As time goes by, Cambou suspects, access to 3D technology will get easier. He said, “TSMC is probably able to offer a solution today. Tower Jazz has announced it would go in this direction. SMIC and Dongbu are other potential foundry partners.”
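The event-driven principle described throughout this article, reporting only per-pixel changes rather than full frames, can be condensed into a few lines. The following is a minimal simulation in the spirit of generic DVS/ATIS-style designs; the field names and threshold are illustrative and do not represent Prophesee's actual interface.

```python
# Minimal sketch of event-based (change-driven) sensing as described above:
# instead of emitting full frames, emit an event only when a pixel's
# brightness changes beyond a threshold. Generic DVS/ATIS-style model;
# field names and the threshold are illustrative, not Prophesee's API.
from typing import Iterator, NamedTuple
import numpy as np

class Event(NamedTuple):
    t: float       # timestamp, seconds
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 brighter, -1 darker

def events_from_frames(frames, timestamps, threshold=0.15) -> Iterator[Event]:
    """Emit events where log-intensity change exceeds the threshold.
    (A real event sensor does this asynchronously per pixel, with no frames
    at all; frames are used here only to simulate the idea.)"""
    ref = np.log1p(frames[0].astype(float))
    for frame, t in zip(frames[1:], timestamps[1:]):
        cur = np.log1p(frame.astype(float))
        diff = cur - ref
        for y, x in zip(*np.nonzero(np.abs(diff) > threshold)):
            yield Event(t, int(x), int(y), 1 if diff[y, x] > 0 else -1)
            ref[y, x] = cur[y, x]  # each event resets that pixel's reference

# Static pixels generate no events, which is the source of the low data rate
# and low power consumption the article attributes to the ATIS.
```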
Release date: 2018-03-14
Velodyne Pads Lead in Lidar Derby
Velodyne LiDAR is still lapping the field in an embryonic but increasingly competitive and complex lidar market, an advantage that grew this week when it unveiled a new 128-channel lidar sensor. The VLS-128 boasts range and resolution that other lidars currently on sale have not been able to offer.

The VLS-128 is ideal for “high-speed highway driving, as it senses objects at farther distances,” said Anand Gopalan, CTO of Velodyne LiDAR. Moreover, it can capture “a rich set of data, high-resolution enough for object classification without using a camera,” he added.

To pass as an effective sensor technology for highly automated vehicles traveling at 70 miles per hour on a highway, a lidar needs to be able to see at least 200 to 250 meters ahead, recognize that there’s an object out there, and determine what it is.

Compared with Velodyne's previous model (HDL-64), Gopalan said, the VLS-128 can see objects three times farther (300 meters) and at three times the resolution (0.1 degree).

Armed with these impressive specs, Velodyne hopes to set itself apart from traditional lidar sensors that are too low-quality or used primarily on development platforms. Until now, however, Velodyne’s own lidars have been used primarily to create highly accurate 3D maps of their surroundings by bouncing laser beams off nearby objects.

Velodyne’s VLS-128 is coming to market as the competition in lidar heats up, with the development community still wrestling with a host of newly emerging technologies.

Big car OEMs are snatching up lidar technology companies. Ford bought Princeton Lightwave just last month, General Motors acquired lidar company Strobe Inc. the same month, and Continental got its lidar business from Advanced Scientific Concepts (ASC) last year, explained Akhilesh Kona, senior analyst, Automotive Electronics & Semiconductors at IHS Markit.

On one hand, the industry sees a new laser emitter technology on the horizon, operating above 1,400-nm wavelength. Lasers in the new wavelength range promise to bring lidars higher resolution and longer range, said Kona. Princeton Lightwave, Continental (through its acquisition of ASC), and Luminar Technologies are all working on the new laser emitter technology, he added.

On the other hand, technology suppliers continue to improve the durability, size, and cost of their lidars by developing a variety of beam-steering technologies, said Kona, ranging from mechanical to MEMS and solid-state.

Lidar vs. other sensors
It’s important to note that the consensus among automakers is unequivocally the need for multi-modal sensors in autonomous vehicles. Velodyne isn’t claiming that lidars will replace other sensors already in automated vehicles. Rather, the company is pitching its improved lidars “for the safety and redundancy of the autonomous vehicles’ compute function.”

In assessing the performance improvements in lidar technologies, however, it’s helpful to understand how a lidar stacks up against other sensors such as vision and radar.

Phil Magney, founder and principal advisor of Vision Systems Intelligence (VSI Labs), noted, “Lidar’s advantage over other sensors is that for every point you have a precise distance measurement. However, the problem with lidar is its relatively low resolution and its inability to distinguish colors.”

Velodyne’s VLS-128 compensates for this traditional weakness of lidars by increasing the resolution, so the new unit can classify objects, as the company claims, explained Magney.
“Cameras, on the other hand, have high resolution, so their ability to classify an object is much better than lidar’s. But cameras don’t have the precision on distances,” noted Magney. “To deal with this, you usually fuse radar measurements with the objects from the camera, and you end up with acceptable precision for some applications.”

What about radar? “Radar offers precise distance measurements but almost no resolution,” he said. “The radar will know there is an object there and its exact movement and velocity with respect to the vehicle. Newer radars (millimeter-wave radar) do have resolution and can pick up on multiple points of an object and even classify it.”

Evolutionary path
There is no question that Velodyne has made significant improvements to its own mechanical lidars.

As Magney sees it, “Velodyne is state-of-the-art when it comes to lidar, but their market is still the development side, where unit cost is not a big issue.”

Magney observed, “While 360-degree laser scanning is desirable for development and for making high-definition maps, a reduced field of view (FOV) is viable for production vehicles, as is the case with the new Audi A8, which uses the Valeo Scala unit. This is a forward-facing device and is also an intelligent lidar device that produces object data.”

While Velodyne is at the high end, Magney believes that a number of low-cost lidar devices will likely make their way into production vehicles. “The Valeo unit is one example, but others from Quanergy and Pioneer purport similar capabilities,” he noted. “These are devices that cost in the range of a few hundred dollars. There are lower-cost flash devices emerging that cost well under $100, but their functionality is limited by low resolution.”

In predicting new technologies on the horizon for lidar, IHS Markit’s Kona laid out three evolutionary phases: mechanical scanning (available now), solid-state scanning (systems in production around 2020), and pure solid-state (after 2020).

Pure solid-state lidars come in different types, ranging from basic flash to high-resolution flash, optical phased array, and frequency-modulated continuous wave, Kona noted. While basic flash is already in production — largely used for ADAS systems — other types of pure solid-state lidar are still in development.

No lidar company is married to a specific beam-steering technology. Companies like Velodyne and Valeo, for example, are working on both types of lidar: mechanical and pure solid-state.

Kona also pointed out that different use cases demand different lidars. For highway driving, very long-range lidars with a narrow field of view are necessary. For city driving, lidars with a broader field of view are critical to see corners at an intersection or detect pedestrians, he noted.

Lidar vendors such as Continental and Valeo already offer solid-state flash lidars. But those lidars, which cost in the $100 range, have limited resolution and range, explained Kona.

In contrast, companies such as Velodyne, Ibeo, and Valeo provide high-resolution mechanical lidars with 360-degree coverage. Price, however, tends to be high (Velodyne’s puck is $8,000) and the size too big, said Kona. Another downside of such high-resolution mechanical lidars is an abundance of moving parts, making them susceptible to vehicle vibration.

What OEMs want
Velodyne’s claim of object-classification capabilities for the VLS-128 has triggered arguments about the architecture of autonomous vehicles.
What OEMs want

Velodyne’s claim of object-classification capabilities for the VLS-128 has triggered arguments over the architecture of autonomous vehicles. At issue is whether AVs will eventually opt for a central-fusion architecture or a distributed sensor-processing model.

Velodyne’s CTO told us that many OEMs and Tier Ones are looking for intelligent lidars capable of pre-processing data to enable object classification. Until now, OEMs had no choice but to fuse raw lidar data with camera data, because no intelligent lidars were available, said Gopalan. The goal of intelligent lidars is to provide a high level of information, including object lists, localization, and segmentation, he added.

VSI’s Magney noted, “Velodyne now claims that the higher resolution can classify objects well, and so you would not need any supplemental sensors.” He explained, “When configured with the proper classification algorithms, you would have everything you need to get a good-enough environmental model to have a safe deployment. Under this use case, you would not need to fuse the lidar’s object data with anything else.”

Magney, however, disagreed with Velodyne’s claim that its lidar is the only one that classifies objects. “There are other lidars out there that also couple classification algorithms within the sensor package and are intelligent sensors because of this. Valeo’s Scala unit is an example of that kind of device, even though it is a forward-facing device, not 360-degree like the new Velodyne unit.”

Asked whether the automotive industry is indeed going the distributed sensor-processing route, Magney made it clear: “Not everybody wants processed data.” He said, “Advocates of AI desire raw data rather than processed data. There are hybrids of this, too, where you have partially processed data that is something short of object data.”

IHS Markit’s Kona agreed: “There is no one right answer to this.” Noting that some OEMs are already working on their own software algorithms for autonomous driving, he said, “Some prefer raw data from lidars (and other sensors), not the processed data.” Others, without their own algorithm development, are likely to demand lidars (or other sensors) capable of object classification.
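The raw-versus-processed divide that Gopalan, Magney, and Kona describe comes down to what crosses the sensor’s output interface. Below is a minimal sketch of the two output shapes; the type and field names are hypothetical illustrations, not any vendor’s actual API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PointCloudFrame:
    """Raw output: one return per laser pulse; the consumer does all interpretation."""
    timestamp_us: int
    points: List[Tuple[float, float, float, float]]  # (x_m, y_m, z_m, intensity)

@dataclass
class TrackedObject:
    """One entry in a processed object list from an 'intelligent' sensor."""
    object_id: int
    category: str                            # e.g. "vehicle", "pedestrian"
    position_m: Tuple[float, float, float]   # sensor-frame coordinates
    velocity_ms: Tuple[float, float, float]
    confidence: float                        # classifier confidence, 0..1

@dataclass
class ObjectListFrame:
    """Processed output: classification and tracking already done inside the sensor."""
    timestamp_us: int
    objects: List[TrackedObject] = field(default_factory=list)
```

An OEM running its own perception stack would consume something like PointCloudFrame; one buying an intelligent sensor receives something like ObjectListFrame, with the classification work already done inside the unit.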
Solution for corner cases?

In its press release, Velodyne boasted:

With its long range and high-resolution data, the VLS-128 allows autonomous vehicles to function just as well in highway scenarios as low-speed urban environments. It is designed to solve for all corner cases needed for full autonomy in highway scenarios, allowing for expanded functionality and increased testing in new environments.

But can lidar solve “all corner cases”? Can an AV equipped with the VLS-128 always recognize that a plastic bag blowing across the road is indeed a plastic bag, not a hard object?

Magney noted, “While the VLS-128 will have a very precise 360-degree view of any scene, it is never enough data to solve all corner cases, in my opinion.” He added, “Some corner cases will involve occluded objects that the sensors simply cannot see. On the other hand, the precision data that the VLS-128 acquires will be very accurate and be able to pick up on just about everything going on in the scene (short of occluded objects). So from a perception standpoint, the data you get from the new Velodyne unit will be as good as lidar gets.”

However, Magney stressed, “You have to keep in mind that coping with corner cases is largely determined by predictive software and not the lidar data itself. However, couple the accuracy of precision lidar with artificial intelligence and you might have your best defense at handling a diverse range of corner cases.”

IHS Markit’s Kona agreed. While acknowledging that Velodyne’s claim of “solving all corner cases” might be marketing spin, he explained that, fed into deep-learning algorithms, the point-cloud information collected by lidar could make it easier to connect the dots.

But what about cost?

Velodyne declined to disclose how much the new VLS-128 will cost. Velodyne’s lowest-cost unit, the VLP-16 Puck, costs $8,000.

Magney said, “I would say cost is the biggest holdback on lidar right now.” While Velodyne did not say how much the new unit costs, its previous 64-channel model cost around $75,000, he noted. “So it is probably likely that the new one is going to be too expensive for production cars.”

Magney sees the VLS-128 as “best applied to development.” He explained, “Most of the time, 360-degree lidars like the Velodyne 64 and 128 are used for making detailed maps, so their use case is limited to development.”

However, Magney would not rule out Velodyne as a key supplier to commercial vehicle fleets. Robotic taxis and autonomous trucks will presumably have a different economic model. “They can support a more expensive sensor package, and they will. As far as the VLS-128 being deployed for these, this may depend on its price,” said Magney.

Manufacturability

In announcing the VLS-128, Velodyne discussed its investment in automated assembly at its San Jose factory. Annual production capacity there will increase to a million lidar units in 2018, according to Velodyne. While the company’s mass-production plan is reassuring, it also raises the question of how complex lidar production really is.

Kona observed, “In general, manufacturing mechanical scanning lidars is quite challenging. Integrating mechanical mirrors, motors, and other semiconductor components inside a module that should sustain harsh conditions (vibrations, extreme temperatures, etc.) is a formidable task.” But he added that Velodyne has been doing this for over a decade, as well as working with many OEMs.

Kona suspected, “Leveraging this expertise, Velodyne seems to have a patent in manufacturing lidars (laser alignment and system manufacturing). The in-house expertise at various levels of lidar development and production would give them an edge over other suppliers.”

Velodyne’s Gopalan told EE Times that the company has designed its own ASICs to reduce the number of components inside the unit. What used to be a few thousand components (for a 32-channel lidar) is down to “a couple of hundred” for the latest 128-channel unit, he said. These range from analog and mixed-signal chips to A/D converters and digital signal processors. “It’s across the board,” he said.

Kona believes that semiconductor technology will ease manufacturing. “Velodyne’s new approach of integrating functions on fewer semiconductor components and using mass-produced semiconductor technologies will accelerate its production volumes,” he noted.