Visual Processor IP Runs Deep Convolutional Nets in Real Time

Release time: 2018-01-23
Author: Ameya360
Source: Nitin Dahad

  German intellectual property supplier Videantis has launched its sixth-generation processor IP architecture, adding deep learning capability to a solution that combines computer vision, image processing, and video coding on a single unified SoC platform.

  The initial target is the automotive industry, which is moving toward more sophisticated advanced driver assistance systems (ADAS) and ultimately fully autonomous vehicles, both of which depend on multiple cameras.

  The new v-MP6000UDX visual processing architecture scales from a single media processor core up to 256 cores and is based on the company's own programmable DSP architecture. Each core is a dual-issue VLIW design that provides eight times more multiply-accumulates than the previous generation, which the company says results in a roughly 1000x performance improvement in deep learning applications while maintaining software compatibility with its previous v-MP4000HDX architecture.

  The v-MP6000UDX processor architecture includes an extended instruction set optimized for running convolutional neural networks (CNNs), increases multiply-accumulate throughput eightfold to 64 MACs per core, and extends the maximum number of cores from the typical eight up to 256.
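
  Taken together, those figures imply the following back-of-the-envelope peak throughput. This is a sketch using only the numbers quoted in the article; clock frequency and real-world utilization are not disclosed, so the comparison is per-cycle only:

```python
# Peak MAC throughput implied by the figures in the article.
# 64 MACs/core (eightfold increase, so the prior generation had 8) and
# core counts growing from a typical 8 up to 256.

PREV_MACS_PER_CORE = 8     # v-MP4000HDX generation (64 / 8)
NEW_MACS_PER_CORE = 64     # v-MP6000UDX: "64 MACs per core"
PREV_TYPICAL_CORES = 8     # "from typically 8..."
NEW_MAX_CORES = 256        # "...to up to 256"

peak_macs_per_cycle = NEW_MACS_PER_CORE * NEW_MAX_CORES
raw_ratio = peak_macs_per_cycle / (PREV_MACS_PER_CORE * PREV_TYPICAL_CORES)

print(peak_macs_per_cycle)  # 16384 MACs per cycle in the largest configuration
print(raw_ratio)            # 256.0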

  The heterogeneous multicore architecture includes multiple high-throughput VLIW/SIMD media processors with a number of stream processors that accelerate bitstream packing and unpacking in video codecs. Each processor includes its own multi-channel DMA engine for efficient data movement to local, on-chip, and off-chip memories.
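
  The article does not describe Videantis's programming model, but the usual point of a per-core DMA engine is to overlap data transfer with compute via double buffering: while the core works on one local tile, the DMA engine fills the other. A minimal Python simulation of that control flow (illustrative only, not the actual API):

```python
# Double-buffering sketch: while the core "computes" on one tile, the
# next tile is "transferred" into the other buffer. Python runs this
# sequentially; on real hardware the two steps proceed in parallel.

def process_tiles(tiles, compute):
    """Process tiles with two ping-pong local buffers."""
    buffers = [None, None]
    results = []
    if not tiles:
        return results
    buffers[0] = list(tiles[0])     # DMA prefetch of the first tile
    for i in range(len(tiles)):
        cur = i % 2
        nxt = (i + 1) % 2
        if i + 1 < len(tiles):
            buffers[nxt] = list(tiles[i + 1])  # transfer overlaps compute
        results.append(compute(buffers[cur]))  # compute on current buffer
    return results

# Example: sum each tile
print(process_tiles([[1, 2], [3, 4], [5, 6]], sum))  # [3, 7, 11]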

  Alongside the new architecture, Videantis also announced v-CNNDesigner, a new tool that enables easy porting of neural networks that have been designed and trained using frameworks such as TensorFlow or Caffe. The tool analyzes, optimizes and parallelizes trained neural networks for efficient processing on the v-MP6000UDX architecture. Using this tool, the task of implementing a neural network is fully automated and the company says it takes minutes to get CNNs running on the low power Videantis processing architecture.
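
  The article does not detail which optimizations v-CNNDesigner performs. As context, one standard transformation that deployment tools of this kind commonly apply is folding a batch-normalization layer into the preceding convolution's weights, removing a whole layer from the inference graph. A NumPy sketch of that folding (a generic technique, not Videantis's actual implementation):

```python
import numpy as np

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold per-channel batch-norm parameters into conv weights.

    W: (out_ch, in_ch, kh, kw) conv weights; b: (out_ch,) conv bias.
    gamma, beta, mean, var: (out_ch,) batch-norm parameters.
    """
    scale = gamma / np.sqrt(var + eps)          # per-output-channel scale
    W_folded = W * scale[:, None, None, None]   # scale each output channel
    b_folded = (b - mean) * scale + beta
    return W_folded, b_folded

# Check on a 1x1 convolution (reduces to a matrix-vector product):
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3, 1, 1)); b = rng.normal(size=4)
gamma, beta = rng.normal(size=4), rng.normal(size=4)
mean, var = rng.normal(size=4), rng.uniform(0.5, 2.0, size=4)
x = rng.normal(size=3)

conv = W[:, :, 0, 0] @ x + b
bn_out = gamma * (conv - mean) / np.sqrt(var + 1e-5) + beta
Wf, bf = fold_batchnorm(W, b, gamma, beta, mean, var)
assert np.allclose(bn_out, Wf[:, :, 0, 0] @ x + bf)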

  “We’ve quietly been working on our deep learning solution together with a few select customers for quite some time and are now ready to announce this exciting new technology to the broader market,” said Hans-Joachim Stolberg, CEO at Videantis. “To efficiently run deep convolutional nets in real-time requires new performance levels and careful optimization, which we’ve addressed with both a new processor architecture and a new optimization tool. Compared to other solutions on the market, we took great care to create an architecture that truly processes all layers of CNNs on a single architecture rather than adding standalone accelerators where the performance breaks on the data transfers in between.”

  Stolberg said the v-MP6000UDX architecture increases throughput on key neural network implementations by roughly three orders of magnitude, while remaining extremely low power and compatible with the company's v-MP4000HDX architecture.

  Videantis spun out of Leibniz Universität Hannover in 2004, having developed its own multi-core SoC media processor. It claims its processors have already shipped in millions of cars, and says customers have committed to over 40 million automotive cameras, which will enter production next year at the earliest. While the company won't identify any of its customers, Robert Bosch was announced as a licensee for its automotive IP back in 2012.

  In the automotive sector, it has seen a lot of growth due to the industry’s rapid adoption of advanced driver assistance systems. Its technology has been adopted by several leading semiconductor companies and OEMs.

  In the automotive industry, processing sensor data will be crucial to supporting and enabling autonomous driving and ADAS systems.

  “The industry is marching towards 10+ cameras and 20+ sensors per car — we’ve even seen cars with as many as 20 cameras on board,” said Marco Jacobs, vice president of marketing at Videantis. “For example, there might be two rear-view cameras per car: one for self-driving mode and one for parking assist.”

  Jacobs said Videantis can enable deep learning on all sensors, whether in-camera, in the ECU, or in central processing solutions, and in combination with other sensor inputs such as radar, lidar, ultrasound, and night vision. Within the same platform, it can also carry out the codec functions for automotive Ethernet.

  Another area of growth is in virtual and augmented reality, where new headsets use smart cameras for a wide variety of tasks, including localization, depth extraction, and eye tracking. Jacobs said the company has also licensed its IP for applications in gaming, AR/VR, and wearables, though he could not name customers yet.

  Videantis sees strong demand for smart sensing systems that combine deep learning with other computer vision and video processing techniques such as SLAM or structure from motion, wide-angle lens correction, and video compression. Videantis says it is the only company that can run all these tasks on a single unified processing architecture, which simplifies SoC design and integration, eases software development, and reduces unused dark silicon thanks to the programmable nature of its DSP-based platform.

  "Embedded vision is enabling a wide range of new applications such as automotive ADAS, autonomous drones, new AR/VR experiences, and self-driving cars," said Mike Demler, senior analyst at the Linley Group. "Videantis is providing an architecture that can run all the visual computing tasks that a typical embedded vision system needs, while meeting stringent power, performance, and cost requirements.”

  Jeff Bier, founder of the Embedded Vision Alliance, described Videantis as a pioneer in enabling the proliferation of computer vision into mass-market applications. “By enabling the deployment of deep learning as well as conventional computer vision algorithms, processors like the v-MP6000UDX are making the promise of more intelligent devices a reality,” Bier said.
