<span style='color:red'>In-Memory</span> Processing Making AI-Fueled Comeback
In-memory computing could propel new AI accelerators to be 10,000 times faster than today’s GPUs.

Startups, corporate giants, and academics are taking a fresh look at a decades-old processor architecture that may be ideal for machine learning. They believe that in-memory computing could power a new class of AI accelerators 10,000 times faster than today’s GPUs.

The processors promise to extend chip performance at a time when CMOS scaling has slowed and deep-learning algorithms demanding dense multiply-accumulate arrays are gaining traction. The chips, still more than a year from commercial use, could also be vehicles for an emerging class of non-volatile memories.

Startup Mythic (Austin, Texas) aims to compute neural-network jobs inside a flash memory array, working in the analog domain to slash power consumption. It aims to have production silicon in late 2019, making it potentially one of the first to market in the new class of chips.

“Most of us in the academic community believe that emerging memories will become an enabling technology for processor-in-memory,” said Suman Datta, who chairs the department of electrical engineering at Notre Dame. “Adoption of the new non-volatile memories will mean creating new usage models, and in-memory processing is a key one.”

Datta notes that several academics attempted to build such processors in the 1990s. Designs such as EXECUBE, IRAM, and FlexRAM “fizzled away, but now, with the emergence of novel devices such as phase-change memories, resistive RAM, and STT-MRAM and strong interest in hardware accelerators for machine learning, there is a revitalization of the field … but most of the demonstrations are at a device or device-array level, not a complete accelerator, to the best of my knowledge.”

One of the contenders is IBM’s so-called Resistive Processing Unit, first disclosed in 2016.
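The idea common to Mythic’s flash array and crossbar designs such as the Resistive Processing Unit is that the memory array itself performs a matrix-vector multiply in one step: weights are stored as cell conductances, inputs are applied as word-line voltages, and each bit line sums its products as current. A minimal numerical sketch of that principle (the array size and units here are invented for illustration):

```python
import numpy as np

# Toy model of an analog in-memory multiply-accumulate: conductances G
# hold the weights, word-line voltages V carry the inputs, and by Ohm's
# and Kirchhoff's laws each bit line collects the current I = G @ V.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 8))   # cell conductances (weights)
V = rng.uniform(0.0, 0.5, size=8)        # word-line voltages (inputs)

I = G @ V                                # bit-line currents: the MAC result

# A plain digital loop computes the same dot products for comparison.
reference = np.array([sum(G[r, c] * V[c] for c in range(8)) for r in range(4)])
assert np.allclose(I, reference)
```

In a physical array all of these multiplies happen at once in the device physics, and the energy savings come from never moving the weights out of the memory.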
The RPU is a 4,096 x 4,096 crossbar of analog elements.

“The challenge is to figure out what the right analog memory elements are — we are evaluating phase-change, resistive RAM, and ferroelectrics,” said Vijay Narayanan, a materials scientist recently named an IBM Research fellow, largely for his work in high-k metal gates.

Stanford announced its own effort in this field in 2015, and academics in China and Korea are also pursuing the concept.

To succeed, researchers need to find materials for the memory elements that are compatible with CMOS fabs. In addition, “the real challenge” is that the elements need to show a symmetrical change in conductance or resistance when voltage is applied, said Narayanan.

Vijay Narayanan, a materials scientist at IBM Research, said that most in-memory processors for AI are still in a research phase and perhaps three to five years from market. (Image: IBM)

Thoughts on the future of transistors

So far, IBM has made some discrete devices and arrays, but not a whole test chip with a full 4K x 4K array using what are currently seen as the ideal materials. IBM’s Geoff Burr demonstrated DNN training using phase-change materials in a 500 x 661 array that showed “reasonable accuracies and speedups,” said Narayanan. “We are progressing steadily, we know what we need to improve in existing materials, and we are evaluating new materials.”

IBM wants to use analog elements so that it can define multiple conductance states, opening a door to lower-power operation than a digital device allows. It sees a large array as an opportunity to run many AI operations in parallel.

Narayanan is optimistic that IBM can leverage its years of experience with high-k metal gates to find materials to modulate resistance in the AI accelerator.
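The symmetry requirement Narayanan describes is easy to see in a toy model (the pulse step sizes below are invented, not measured device values): if potentiation and depression pulses change a cell’s conductance by unequal amounts, a balanced sequence of up and down updates, which should leave a trained weight where it started, instead drifts it.

```python
def apply_pulses(g, n_up, n_down, step_up, step_down):
    """Return conductance after n_up potentiation and n_down depression pulses."""
    return g + n_up * step_up - n_down * step_down

g0 = 0.5  # hypothetical starting conductance (arbitrary units)

# Symmetric device: 1,000 up and 1,000 down pulses cancel exactly.
g_sym = apply_pulses(g0, 1000, 1000, step_up=1e-3, step_down=1e-3)

# Asymmetric device: depression is 20% weaker than potentiation, so the
# same balanced pulse train leaves a residual error in the stored weight.
g_asym = apply_pulses(g0, 1000, 1000, step_up=1e-3, step_down=0.8e-3)

drift = g_asym - g0   # residual error, roughly 0.2 in these units
```

During training, each cell absorbs millions of such nearly balanced updates, so even a small asymmetry compounds into a large weight error — which is why a symmetric material response is “the real challenge.”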
Narayanan spent a dozen years bringing IBM’s expertise in the area from research to commercial products, working with partners such as GlobalFoundries and Samsung.

Looking forward, IBM is working on gate-all-around transistors that it calls nanosheets for use beyond the 7-nm node. Narayanan sees no fundamental hurdles with the designs, just implementation issues.

Beyond nanosheets, researchers are exploring negative-capacitance FETs that deliver a large change in current for a small change in voltage. The idea has gotten increased attention in the last five years, since researchers found that doped hafnium oxide is ferroelectric and could be a CMOS-compatible vehicle for the technique.

“There’s still a lot of naysayers and people on both sides,” said Narayanan.

“Research in my group shows negative capacitance is a transient effect,” said Datta of Notre Dame. “So you get a temporal boost in channel charge when the polarization switches but don’t get anything once the transients settle.”

That said, Berkeley researchers “believe that this is a ‘new state’ of matter. So the story continues. It is fair to say that most companies are evaluating this internally.”
Release time: 2018-05-02
IBM Demos <span style='color:red'>In-Memory</span> Massively Parallel Computing
Today’s experimental non-von Neumann computing architectures mostly make use of memristive devices modeled on the human brain; they do not separate data memory from computing hardware and thus avoid the inefficiency of von Neumann computers’ repeated load/store operations. Now IBM Research (Zurich) has demonstrated a way to mass-produce 3-D stacks of phase-change memory (PCM) that perform memristive calculations 200 times faster than von Neumann computers. The in-memory coprocessor uses algorithms that exploit the dynamic physics of phase-change memories simultaneously across myriad cells, similar to the way millions of neurons and trillions of synapses in the brain operate in parallel.

The development, which IBM will demonstrate in December at the International Electron Devices Meeting (IEDM), could return the company to the brink of hardware dominance.

“We have demonstrated that computational primitives using non-von Neumann processors can be used to do machine learning tasks,” IBM Fellow Evangelos Eleftheriou told EE Times. “So far, we predict a speedup of 200 times for our non-von Neumann correlation detection algorithm compared to using state-of-the-art computing systems, but we have many other computational primitives on the way that we will demonstrate later this year.”

The new paradigm combines PCM crystallization dynamics with an acceleration methodology called in-memory computing, which loads all data into RAM instead of swapping data sets into and out of mass memory (hard drives or flash). IBM’s approach does not force the in-memory values through the von Neumann bottleneck of a central processing unit; rather, it leaves the initial-state memory values in each PCM cell and uses a specialized memory controller to perform simultaneous, parallel operations on the cells. Calculations are performed in place by harnessing the physical properties of the phase-change RAMs.
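As a caricature of the difference (a pure software analogy — a real PCM controller broadcasts commands that every cell executes in its own physics, rather than looping), one can count the memory round trips that a von Neumann loop pays and that an in-place update avoids:

```python
def von_neumann_scale(memory, factor):
    """Scale every element via explicit CPU load/compute/store round trips."""
    traffic = 0
    for i in range(len(memory)):
        value = memory[i]      # load into the CPU
        value *= factor        # compute centrally
        memory[i] = value      # store back
        traffic += 2           # one load + one store per element
    return traffic

def in_memory_scale(cells, factor):
    """Analogy for in-place operation: each 'cell' transforms its own value."""
    for i in range(len(cells)):
        cells[i] *= factor     # computed where the data lives
    return 0                   # no CPU round trips in this analogy

data, copy = [1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]
assert von_neumann_scale(data, 2.0) == 8   # 2 round trips x 4 elements
assert in_memory_scale(copy, 2.0) == 0
assert data == copy == [2.0, 4.0, 6.0, 8.0]
```

The results are identical; what the in-memory architecture eliminates is the per-element traffic through the central processor.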
Building on crystallization-dynamics discoveries

Memristive non-von Neumann architectures work like the brain by strengthening (lowering the resistance between) memory synapses each time they are used and, conversely, increasing the resistance over time if they are not frequently used. Pattern-recognition and other algorithms get increasingly accurate as they gain experience, “memorizing” similar data sets and “forgetting” irrelevant ones whose patterns are seldom repeated. IBM uses this technique in its all-digital neurocomputer e-brains, which are already shepherding the U.S. nuclear arsenal and piloting U.S. fighter jets.

IBM Zurich’s latest effort does not emulate brainlike algorithms such as the digital spiking used in those neurocomputers. Rather, it builds on IBM’s discoveries about the crystallization dynamics of phase-change memories.

“What we are trying to do is make more energy-efficient processors by avoiding all the load/store operations” of a von Neumann computer, IBM research staff member Abu Sebastian said in an interview. “Today we’re showing how to use crystallization dynamics to perform unsupervised deep learning, but eventually we plan to build a coprocessor that will allow a von Neumann computer to offload all sorts of tasks it is ill-suited to perform well.”

The prototype houses 1 million in-memory cells, each performing the same deep-learning computational tasks on the unique data set loaded into it. The memristor-like use of PCM crystallization dynamics both accelerates time-to-results and eliminates power-wasting load/stores. IBM says the technology should be easily scalable, both horizontally and vertically, to realize 3-D non-von Neumann coprocessors that can solve tasks of almost any size.

In more detail, the PCM device uses a germanium-antimony-telluride alloy sandwiched between two electrodes.
When pulsed, the phase-change material shifts from an amorphous to a crystalline phase in easily controllable resistance steps that vary from extremely low (for 0) to extremely high (for 1) or anywhere in between (analog operation).

Model of the phase-change material.

Sebastian was the lead author on a paper describing the development in Nature Communications. He also leads a European Research Council project on the same topic.
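The correlation-detection primitive can be loosely caricatured in software (the threshold and pulse size below are invented, and a real cell accumulates crystallization through device physics rather than a counter): assign one PCM cell per binary data stream, and partially crystallize a cell whenever its stream fires in coincidence with many others. Cells watching mutually correlated streams crystallize faster and end up with markedly higher conductance.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 2000, 10                           # time steps, data streams/cells

common = rng.random(T) < 0.3              # shared event driving correlation
streams = np.empty((n, T), dtype=bool)
streams[:5] = common                      # first 5 streams are correlated
streams[5:] = rng.random((5, T)) < 0.3    # last 5 are independent noise

conductance = np.zeros(n)                 # crystallization level per cell
for t in range(T):
    active = streams[:, t]
    # Pulse the active cells only when many streams fire together,
    # mimicking crystallization growth under coincident activity.
    if active.sum() >= n // 2:
        conductance[active] += 1.0        # partial crystallization step

correlated_mean = conductance[:5].mean()
uncorrelated_mean = conductance[5:].mean()
```

Reading out the conductances at the end identifies the correlated group without any value ever leaving the array — the separation between the two groups is what the 1-million-cell prototype detects in hardware.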
Release time: 2017-10-26
