Beating IoT Big Data With Brain Emulation
To beat Big Data, according to German electronics company Robert Bosch, we need to tier the solution by making every level smart, from edge sensors to concentration hubs to analytics in the cloud. Luckily, we have the smart sensors of the brain (eyes, ears, nose, taste buds and touch) as the smartest model in the known universe after which to fashion our electronic Big Data solutions for the Internet of Things (IoT), said Marcellino Gemelli, head of business development at Bosch Sensortec.

"We need to feed our Big Data problems into a model generator based on the human brain, then use this model to generate a prediction of what the optimal solution will look like," Gemelli told attendees at the recent SEMI MEMS & Sensor Executive Congress (MSEC). "These machine-learning solutions will work on multiple levels, because of the versatility of the neuron."

Neurons are the microprocessors of the brain: each accepts thousands of Big Data inputs from its dendrites, mediated by memory synapses, but outputs only a single voltage spike down its axon when it receives the right pattern of input. In this way the receptors of the eyes, ears, nose, taste buds and touch sensors (mainly for presence, pressure and temperature) pre-process the deluge of incoming raw Big Data before sending summaries, encoded as voltage spikes, up the spinal cord to the hub called the "old brain" (the brain stem and the automatic behavior centers that handle breathing, heartbeat and reflexes). Finally, the pre-processed data makes its way through a vast interconnect array called the white matter to its destination in the conscious parts of the brain (the gray matter of the cerebral cortex). Each part of the cerebral cortex is dedicated to a function such as vision, speech, smell, taste or touch, or to the cognitive functions of attention, reasoning, evaluation, judgment and consequential planning.
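The neuron-as-microprocessor idea above can be sketched in a few lines of Python. This is a hypothetical toy model (not anything from Bosch): many synapse-weighted dendrite inputs are summed, and a single spike is emitted only when the combined drive crosses a firing threshold.

```python
def neuron_spike(inputs, weights, threshold=1.0):
    """Toy neuron: sum the synapse-weighted dendrite inputs and
    emit one spike (1) if the total drive crosses the firing
    threshold, otherwise stay silent (0)."""
    drive = sum(x * w for x, w in zip(inputs, weights))
    return 1 if drive >= threshold else 0

# A strong, well-matched input pattern fires the neuron...
print(neuron_spike([1.0, 1.0, 1.0], [0.5, 0.4, 0.3]))  # -> 1
# ...while a weak one does not.
print(neuron_spike([0.1, 0.1, 0.1], [0.5, 0.4, 0.3]))  # -> 0
```

In a real nervous system the inputs number in the thousands and the weights are adjusted by learning; the thresholding step is what lets each neuron compress many raw inputs into a single summary spike.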
"The mathematical equivalent of the brain's neural network is the perceptron, which can learn with its variable-conductance synapses while Big Data is streaming through it," said Gemelli. "And we can add multiple levels of perceptrons to learn everything a human can learn, such as all the different ways that people walk."

Moore's Law also helps out with multi-layered perceptrons, called deep learning, because it offers a universal way to do smart processing at the edge sensor, in the hub and during analytics in the cloud.

"First of all, volume helps: the more Big Data, the better," said Gemelli. "Second, variety helps: learning all the different aspects of something, such as the different gaits people use when walking. And third, the velocity at which a perceptron needs to respond must be quantified. Once you have these three parameters defined, you can optimize your neural network for any particular application."

For example, Gemelli said, a smartwatch/smartphone/smart-cloud combination can divide and conquer Big Data. The smartwatch evaluates the real-time continuous data coming in from an individual user, then sends the most important data, in summaries, to the smartphone every few minutes. Then, just a few times a day, the smartphone sends trending summaries to the smart cloud. There the most important data points can be analyzed in detail and fed back to the user wearing the smartwatch, as well as to other smartwatch wearers, as suggestions for how anonymous others have met the same goals they have set.

Bosch itself is emulating this three-tiered, brain-like model by putting processors on its edge sensors so they can identify and concentrate Big Data trends before transmitting to smart hubs.

"Smart cities, in particular, need to make use of smart sensors with built-in processors to perform the real-time edge-sensor trending," said Gemelli.
"Then they send those trends to hubs, which analyze them and send the most important ones to the cloud, where they are turned into actionable information for city managers. That is Bosch's vision of the smart cities of the future."
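The watch-to-phone-to-cloud division of labor described above amounts to successive data reduction at each tier. The sketch below is purely illustrative (the function names and summary fields are assumptions, not a Bosch API):

```python
from statistics import mean

def watch_summary(raw_samples):
    """Edge tier: compress minutes of continuous sensor readings
    into a compact summary before transmitting."""
    return {"mean": mean(raw_samples), "peak": max(raw_samples)}

def phone_trend(summaries):
    """Hub tier: aggregate several watch summaries into a trend
    worth forwarding a few times a day."""
    return {"avg": mean(s["mean"] for s in summaries),
            "peak": max(s["peak"] for s in summaries)}

def cloud_baseline(trends):
    """Cloud tier: compare one user's trend with the population."""
    return mean(t["avg"] for t in trends)

# Data shrinks at every tier: raw samples -> summaries -> trends.
summaries = [watch_summary([60, 62, 61]), watch_summary([80, 85, 90])]
print(phone_trend(summaries)["peak"])  # -> 90
```

The point of the design is bandwidth and latency: only a tiny fraction of the raw stream ever leaves the edge device, which is exactly the role the sensory periphery plays in the brain analogy.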
Release time: 2017-11-20
Brain Research Reimagines AI
Researchers face fundamental mysteries in understanding how the brain works, but their work promises breakthroughs in computing as well as health care. That’s the view of Phillip Alvelda, a serial entrepreneur whose latest work is at the intersection of neuroscience and electronics.

Alvelda helped organize a research program to create an implantable neural-electrical interface. More recently, he launched a startup with the ambitious goal of creating the digital equivalent of a hippocampus and cerebellum.

Researchers are now able to track signals in a million and a half neurons, the entire cortex of a mouse, he reported. “We can put an image in front of a mouse and read out how it’s processed … to start to tease out the actual neural code,” he said in a keynote at last week’s Hot Chips event.

“How information is coded in the brain is not known; maybe it’s not a code of signals and switches [like those used in today’s computers] but something based on the relative time of arrival of multiple signals in a shared channel,” he said, pointing to work on neural information theory that began around 2009.

Today’s deep-learning systems, such as Amazon’s Alexa, IBM’s Watson and Facebook’s convolutional neural networks, are relatively siloed, “not able to generalize out of their domains.” “Our learning systems need a common code that’s relevant to sensory and memory integration,” he added.

Others agree that today’s neural networks are relatively crude compared with the enormous computation the brain manages at an estimated 30 W.

“The brain is built on a different computation model we only partially understand, and all this deep-learning stuff is heading in a different direction,” said Doug Burger, a veteran computer architect who helped design Microsoft’s recently announced Brainwave system for machine learning.
“We need breakthroughs to snap back to the biological model of computing, or an investment in some other new model, to find a new Moore’s Law,” said Burger in an interview at Hot Chips. “The advantage of the biological model is that we know one exists, and we don’t know how much digital headroom there is ahead in deep neural networks.”

For his part, Alvelda believes the hippocampus serves as the brain’s integrator, “assembling sub-AIs into integrated meta-AIs.” He wants to build such a system at his startup, Cortical.ai. The company, which is so new that it has not even established a headquarters, also aims to build a predictive system that mimics the brain’s cerebellum.

“In just the last couple of years, we have learned that the cerebellum has more neurons than the whole rest of the brain, so it’s not just used for motor-control refinement,” he said. “The cerebellum is connected to the entire brain, and it is now believed to help project future states of cognitive processes,” such as knowing how to catch a ball.

Cortical’s work depends heavily on Neural Engineering System Design (NESD), a program that Alvelda helped start at the Defense Advanced Research Projects Agency. NESD aims to develop, within three years, an implantable interface connecting neurons and electronics. Part of President Obama’s brain initiative, the program aims to restore sight and hearing by delivering visual and audio data to the brain at a higher quality than is possible with current methods.

“All the technologies to make such an interface possible are available, but they are stove-piped and held by different companies and universities,” said Alvelda, who visited 80 labs and hosted multiple workshops to launch NESD. The effort will draw on work in a wide range of technologies, including thinned CMOS electrical probe arrays, photonics, biocompatible packaging and basic neuroscience.

“We’re seeding a neuro-engineering industry that’s moving incredibly fast.
We have a few hundred participants across a couple of hundred institutions; we have been successful in catalyzing a new industry,” he said, noting separate $700 million investments from Elon Musk and a group including Google.

Building on that work, startup Cortical’s goal is “to free the mind even from the limits of healthy bodies,” said Alvelda, imagining a kind of virtual reality in which you can “write directly to the senses.”

Such capabilities have broad implications. “What do telecom and media look like when we can reach your thoughts and emotions and senses?” asked Alvelda, who also founded MobiTV, a broadcast service for smartphones.

The Cortical vision is expansive. It sees the potential to tie AI systems together in ways that deliver the digital equivalent of empathy, a foundation for ethics and trust. “That’s when AI becomes really powerful,” he said.
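Alvelda’s suggestion that neural information may be carried by the relative arrival times of spikes in a shared channel, rather than by signals and switches, can be illustrated with a toy latency code. This is purely an illustration of the idea, not anything from NESD or Cortical:

```python
def latency_encode(intensities, window=10.0):
    """Toy latency code: each stimulus intensity (0..1) becomes a
    spike arrival time within a shared window; stronger inputs
    spike earlier, so timing order carries the information."""
    return [round(window * (1.0 - i), 6) for i in intensities]

def latency_decode(spike_times, window=10.0):
    """Recover the intensities from the arrival times."""
    return [round(1.0 - t / window, 6) for t in spike_times]

times = latency_encode([0.9, 0.2, 0.5])
print(times)                  # -> [1.0, 8.0, 5.0]
print(latency_decode(times))  # -> [0.9, 0.2, 0.5]
```

Because the code lives in when spikes arrive, many inputs can share one channel, which is one reason timing-based schemes are attractive models for neural communication.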
Release time: 2017-08-29
Google Aims to Beat the Brain
Google’s artificial-intelligence guru, Demis Hassabis, has unveiled the company’s grand plan to solve intelligence by unraveling the algorithms, architectures, functions and representations used in the human brain. But is that all there is to it?

No one disputes the basics of artificially intelligent neural networks, namely that brain neurons are connected by synapses whose “weights” grow stronger (learn) the more they are used and atrophy (forget) when seldom used. Europe’s Blue Brain project, for instance, hopes to simulate on a supercomputer even the smallest details of how the brain works, so as to unravel the mechanisms behind maladies such as Parkinson’s and Alzheimer’s, as well as to build AI systems.

If all you want is AI cast in silicon (in silico, as opposed to in vivo, in a living organism), however, engineers can get by with a mere understanding of the algorithms, architectures, functions and representations used in brains, according to Hassabis, CEO of DeepMind Technologies, which Google acquired in 2014.

“From an engineering perspective, what works is ultimately all that matters. For our purposes, then, biological plausibility is a guide, not a strict requirement,” Hassabis and his co-authors argue in “Neuroscience-Inspired Artificial Intelligence,” published in the peer-reviewed journal Neuron. “What we are interested in is a systems neuroscience-level understanding of the brain, namely the algorithms, architectures, functions, and representations it utilizes.
“By focusing on the computational and algorithmic levels, we gain transferable insights into general mechanisms of brain function, while leaving room to accommodate the distinctive opportunities and challenges that arise when building intelligent machines in silico.”

For example, during sleep the hippocampus replays and re-associates the particularly successful learning experiences of each day, enabling long-term memory to secure lessons learned, even from a single instance. Simple machine-learning algorithms, by contrast, can wash out single learning instances with a clutter of insignificant details. But Google claims machine-learning algorithms can be constructed to mimic these functions of real brains.

“Experiences stored in a memory buffer can not only be used to gradually adjust the parameters of a deep network toward an optimal policy, but can also support rapid behavioral change based on an individual experience,” Hassabis et al. state.

Because learning algorithms tend to overwrite existing knowledge with new knowledge, getting neurocomputers to learn multistep tasks has been a tough nut for engineers to crack. The authors note that recent research has exploited synergies between neuroscience and engineering to address that conundrum. Neuroscientists’ discovery of synaptic lability (variable rates of change) in real brain synapses gave AI engineers a new tool for multistep learning: they crafted learning algorithms that set the lability of weights important to earlier tasks at levels that prevent later tasks from overwriting them.

“Findings from neuroscience have inspired the development of AI algorithms that address the challenge of continual learning in deep neural networks by implementing a form of ‘elastic’ weight consolidation, which acts by slowing down learning in a subset of network weights identified as important to previous tasks, thereby anchoring these parameters to previously found solutions,” the authors state.
“This allows multiple tasks to be learned without an increase in network capacity, with weights shared efficiently between tasks with related structure.”

Hassabis et al. note that “much work is still needed to bridge the gap between machine- and human-level intelligence. In working toward closing this gap, we believe ideas from neuroscience will become increasingly indispensable.” Citing engineers’ success in enabling AI multistep learning by reproducing the biological mechanism, the authors call for neuroscientists and AI engineers to join ranks to solve “what is perhaps the hardest challenge for AI research: to build an agent that can plan hierarchically, is truly creative, and can generate solutions to challenges that currently elude even the human mind.”

Not everybody agrees, however, that to crack the code of human intelligence we need merely understand the brain’s algorithms, architectures, functions and representations. A counterargument holds that the brain’s “code” is the same for all living intelligence in the universe, just as chemistry is the universal code for which the alchemists searched, and that cracking it will yield a body of intertwined universal principles similar to those of chemistry and physics.

“We need to crack the brain’s code for a genuine understanding of intelligence, and computer software alone is not sufficient. The brain is likened to a computer only because of the age in which we live, just as it was likened to a steam engine in prior eras,” said neuroscientist and tech entrepreneur Pascal Kaufmann, founder of AI software company Starmind International AG (Küsnacht, Switzerland). “Just as physics is the code of all the physical phenomena in the universe, the brain’s code will similarly be based on principles that are universal in nature.

“The same principles are applied again and again in nature: the way a tree’s branchings are very similar to the veins and arteries in the body,” Kaufmann said.
“You just have to ask the right questions.”
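The “elastic” weight consolidation the authors describe can be sketched as a quadratic penalty: weights estimated as important to a previous task (high Fisher information) are anchored to their old values, so learning on a new task slows down for exactly those weights. This is a minimal illustration of the published idea, not DeepMind’s code:

```python
def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Quadratic anchor: the cost of moving each weight away from
    its previous-task value, scaled by its estimated importance
    (Fisher information) and the consolidation strength lam."""
    return 0.5 * lam * sum(f * (t - t0) ** 2
                           for t, t0, f in zip(theta, theta_old, fisher))

def ewc_loss(new_task_loss, theta, theta_old, fisher, lam=1.0):
    """Total objective on the new task: its own loss plus the
    penalty that protects weights important to the old task."""
    return new_task_loss + ewc_penalty(theta, theta_old, fisher, lam)

# Moving an important weight (fisher=10) by 1.0 costs 100x more
# than moving an unimportant one (fisher=0.1) the same distance.
print(ewc_penalty([1.0, 1.0], [0.0, 0.0], [10.0, 0.1]))  # -> 5.05
```

Minimizing `ewc_loss` by gradient descent reproduces the behavior quoted above: learning is slowed in the subset of weights identified as important, so new tasks are learned without overwriting old ones or growing the network.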
Release time: 2017-07-26
Brain Analysis System Approved by the FDA
