Intel/Saffron AI Plan Sidesteps Deep Learning

Release time: 2017-10-24
Author: Ameya360
Source: Junko Yoshida

  Intel’s $1 billion investment in the AI ecosystem is one of the well-publicized talking points at the processor company. The Intel empire boasts a breadth of AI technologies it has amassed by acquisition and Intel Capital investments in AI startups.

  The acquired companies seemingly useful to Intel’s AI ambitions thus far include Altera (2015), Saffron (2015), Nervana (2016), Movidius (2016) and Mobileye (2017). Intel Capital has also fattened its AI portfolio with startups Mighty AI, Data Robot, Lumiata, CognitiveScale, Aeye Inc., Element AI and others.

  Unclear is how Intel is going to stitch all this together.

  With AI innovation still in its early days, Intel’s apparent scattershot approach to AI strategy might be justified. We might still have to wait a while for a more coherent narrative to emerge.

  Intel has talked up its AI hardware portfolio more often than its overall AI strategy. Case in point: Intel announced this week that it will ship the Nervana Neural Network Processor (NNP), formerly known as “Lake Crest,” before the end of the year. Naveen Rao, formerly CEO and cofounder of Nervana and now Intel’s vice president and general manager of AI products, describes the NNP as featuring “a purpose built architecture for deep learning.”

  Intel has other ammunition when it comes to AI chips, including the Xeon family, FPGAs (from Altera), Mobileye (for automotive) and Movidius (for machine learning at the edge).

  However, Intel has been reticent about AI applications, or exactly which fields of AI it will focus on. AI is a realm both broad and deep. Among Intel’s sprawl of acquisitions, the biggest mystery might be Saffron.

  It was hard not to notice earlier this month when Intel announced a new product called the Intel Saffron Anti-Money Laundering (AML) Advisor. The product isn’t hardware (although the AML Advisor apparently runs on Xeon processors) but a software tool that helps investigators and analysts ferret out financial crimes.

  Earlier this week, EE Times had a chat with Elizabeth Shriver-Procell, director, financial industry solutions at Saffron Technology, to learn about the AI technologies behind Saffron’s product, and what she sees as Saffron’s gain in becoming an Intel company.

  Mainly, though, we wanted to know what a long-time financial crime-fighter like Shriver-Procell is doing inside the world’s largest CPU company.

  EE Times: Tell us a little bit about yourself. I hear you are an expert on financial analytics, working at various companies and agencies including the Treasury Department.

  Shriver-Procell: I am a lawyer, with the focus of my work on fighting financial crimes. I’ve worked at international consultancies and various financial institutions. Most recently, I was at Bank of America. I joined Saffron earlier this year. Yes, I also worked at the U.S. Treasury as a program manager for analytics development.

  EE Times: So before coming to Saffron, did you use Saffron’s products?

  Shriver-Procell: Some organizations I was associated with — including some clients at consulting companies — have used Saffron. I’ve been intrigued by the platform, so when this opportunity came up, I took it.

  EE Times: So, what exactly does Saffron offer?

  Shriver-Procell: Saffron was always sold and marketed as an ‘analytic platform’ customizable for broader applications. Users include supply-chain operations, banks and insurance companies.

  EE Times: With the launch of Intel Saffron Anti-Money Laundering Advisor, has anything changed in Saffron’s platform approach?

  Shriver-Procell: We’re now rolling out specific products for specific applications.

  Different branch of AI

  EE Times: I suspect the primary reason for Intel to acquire Saffron had more to do with getting its hands on Saffron’s AI technologies than with solving financial crimes (although that is a worthy cause). Tell us a little bit about what kind of AI technology Saffron has designed and how you use it. And how is that different from other AI technologies?

  Shriver-Procell: At Saffron, the AI technology we use is called Associative Memory AI, which is a different branch of artificial intelligence from Deep Learning. Associative memory AI is very good at looking at a large volume of data – and a high variety of data – and discerning signatures or patterns across databases that are far apart. It unifies structured and unstructured data from enterprise systems, email, web and other data sources.

  EE Times: Give us examples.

  Shriver-Procell: Take the example of a banking customer named Mary. Mary goes to London every other week and shops at the Liberty store. John, who lives in a different country, goes to London at about the same time Mary is there and does something entirely different. Is there any relationship between the two? What are the commonalities between the two? Can we take a look at their IP addresses? Do we find any similarities in their log-in patterns? Is there anything that shows whether any nefarious activity is going on there?

  EE Times: So, the point is that Associative Memory AI can look at many seemingly unrelated databases at the same time.

  Shriver-Procell: Not only that, it gets the job — otherwise very time-consuming — done very quickly. While it takes a lot of training for Deep Learning to work, Associative Memory AI does not need to be trained. This AI does rapid, one-shot learning. It’s a model-free AI.
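  (To make the concept concrete, here is a minimal, purely illustrative sketch of how an associative, co-occurrence-style memory can link entities one observation at a time, with no separate training step. It is not Saffron’s implementation; the class, the method names and the Mary/John records below are invented for the example.)

    from collections import defaultdict

    class AssociativeMemory:
        """Toy associative memory built on attribute co-occurrence.

        A single observation is enough to create an association (one-shot),
        and there is no model to train or retrain (model-free).
        """

        def __init__(self):
            self.index = defaultdict(set)     # (attribute, value) -> entities observed with it
            self.entities = defaultdict(set)  # entity -> (attribute, value) pairs observed so far

        def observe(self, entity, **attributes):
            """Ingest one record from any source; no training pass is needed."""
            for attr, value in attributes.items():
                pair = (attr, value)
                self.index[pair].add(entity)
                self.entities[entity].add(pair)

        def commonalities(self, a, b):
            """Attribute/value pairs two entities share."""
            return self.entities[a] & self.entities[b]

        def related(self, entity):
            """Other entities sharing at least one attribute, ranked by overlap."""
            scores = defaultdict(int)
            for pair in self.entities[entity]:
                for other in self.index[pair]:
                    if other != entity:
                        scores[other] += 1
            return sorted(scores.items(), key=lambda kv: -kv[1])

    # Records pulled from different sources (card transactions, web logins).
    memory = AssociativeMemory()
    memory.observe("Mary", city="London", store="Liberty", login_ip="203.0.113.7")
    memory.observe("John", city="London", login_ip="203.0.113.7")

    print(memory.commonalities("Mary", "John"))  # shared city and login IP
    print(memory.related("Mary"))                # [('John', 2)]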

  EE Times: In the press release, you talk about Saffron’s “white box AI.” Please explain.

  Shriver-Procell: By “white box AI,” we’re talking about transparency. We can explain how we arrived at a certain conclusion. In the past, financial institutions acquired a model-based, vendor-supplied solution for fraud detection. We call it a “black box” because users have no idea how the software works inside. When regulators ask financial institutions how they came to a conclusion, they can’t really explain it. They can’t see what’s inside the black box, and they can’t tell if it was working properly.

  In highly regulated industries, it’s critical for financial institutions to be able to provide transparency in their data.
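  (Again purely as an illustration of the “white box” idea, not Saffron’s product logic: the sketch below reports a conclusion together with the evidence that produced it, so an analyst can show exactly why a pair of customers was flagged. The threshold and the wording of the flag are invented for the example.)

    def explain_match(records_a, records_b, name_a, name_b, threshold=2):
        """Report which attribute/value pairs two customers share and whether
        the overlap crosses an arbitrary, illustrative review threshold.

        The "explanation" is simply the shared evidence itself: every
        conclusion traces back to concrete records a regulator can inspect.
        """
        shared = set(records_a.items()) & set(records_b.items())
        flagged = len(shared) >= threshold
        lines = [
            f"pair: {name_a} / {name_b}",
            f"flag for review: {flagged} "
            f"(shared attributes: {len(shared)}, threshold: {threshold})",
        ]
        lines += [f"  shared {attr} = {value}" for attr, value in sorted(shared)]
        return "\n".join(lines)

    mary = {"city": "London", "store": "Liberty", "login_ip": "203.0.113.7"}
    john = {"city": "London", "login_ip": "203.0.113.7"}
    print(explain_match(mary, john, "Mary", "John"))
    # pair: Mary / John
    # flag for review: True (shared attributes: 2, threshold: 2)
    #   shared city = London
    #   shared login_ip = 203.0.113.7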

  EE Times: Interesting. That sounds like almost the opposite of Deep Learning AI. Some safety experts worry that when a deep learning AI deployed in an autonomous vehicle makes a certain decision (turning a corner, for example), carmakers can’t explain why the AI made that decision. The lack of transparency in the learning process makes it tough for carmakers to validate the safety of autonomous cars.

  Shriver-Procell: I think it’s important to recognize that there are different approaches to AI. When Intel’s CEO talks about unlocking the promise of AI, he says that we may try new things. We need to explore new learning paradigms.

  EE Times: Do you see any of those different branches of AI converging at some point?

  Shriver-Procell: I think they are complementary. As we see a growing trend for blending of applications, I think multiple types of AI will be able to address the needs presented by a spectrum of applications.

  EE Times: Tell us more about your new products.

  Shriver-Procell: As I said before, Saffron always sold its product as a platform. Now, as we are finding specific needs in specific market segments, we’ve decided to roll out a specific solution as a product that can meet the market’s challenges.

  Saffron has always held a very strong position in the financial market, backed by its experience in detecting financial crimes. By unifying structured and unstructured data into a 360-degree view, we can make sense of the patterns found across boundaries, wherever the data is stored.

  We also announced that the Bank of New Zealand has just joined the Intel Saffron Early Adopter Program. The program is designed for institutions interested in innovating in financial services by taking advantage of the latest advancements in associative memory artificial intelligence.

  EE Times: What do you think Saffron has gained by becoming an Intel company?

  Shriver-Procell: The benefits of joining Intel are great. We’re talking about serious problems that large financial institutions are fighting. In order to be able to support them, you need all the power and support that a large corporation like Intel brings to bear. We also need the full support of Intel as a technology partner as we create new capabilities and applications on the Saffron platform, and make them scalable and extensible. As AI rapidly advances, you can’t overlook the significance of exploring new things, new ways to do AI.

  AI in its infancy

  After the interview with Saffron, EE Times got in touch with a few analysts to see how they view the state of AI technology development.

  Jim McGregor, founder and principal analyst at Tirias Research, observed, “There are many different types of learning (supervised, unsupervised), different types of digital neural networks (deep learning; holographic associated memory — also referred to as just associative memory; inference models), different hardware solutions for AI (CPUs, GPUs, DSPs, FPGAs, TPUs, quantum processors), and a plethora of different software frameworks. So, mapping out all the AI solutions is like mapping out a tree that has new branches springing out every day.”

  Paul Teich, principal analyst at Tirias Research, concurred. “New classes of learning and AI algorithms are still emerging at a frightening rate.” He added, “That means we are still fairly far away from locking in efficient full-custom silicon. General purpose silicon rules during times of radical change. That is why GPUs, FPGAs, and coprocessor style matrix math accelerators (NVIDIA's Tensor Core and Google's TPU2 are in this bucket) will dominate until we get farther down the road in selecting best in class algorithms and best practices for model development and deployment.”

  Do we see some of those different branches of AI in the future working together?

  McGregor said, “This is a great question.” As he sees it now, “Most of the effort is being put on Centralized Intelligence and Hybrid Intelligence, where everything is done in the cloud, or split between the cloud for learning and the edge devices for inference. A few companies like Microsoft are working on distributed intelligence, where the intelligence can be spread amongst multiple resources, such as data centers for deep learning.”

  In his opinion, “The future will require Collective Intelligence where all these intelligent solutions work together. We do see the future as being one of collective intelligence.” But he noted, “When and how we get there has yet to be determined. Which solution has priority? What do you do when these solutions do not agree? How do you collectively share information between drastically different frameworks and neural networks (even creating two neural networks that look the same using the same data is next to impossible)? These are all issues that will have to be worked out.”

  McGregor added, “I'm not surprised to see Intel starting with [Saffron], because the financial industry will be one of the industries that drive us toward collective intelligence because of its importance to the global economy.”

  Now that Intel is offering Saffron’s Anti-Money Laundering Advisor as “a product” in the financial market, does this mean that Intel is taking a step — somewhat akin to IBM — toward a “service business model” rather than just sticking to the chip business?

  McGregor believes it does. “Intel has done this before and tends to swing back and forth between being a solutions vendor and a technology vendor, but in the case of AI, you almost have to be a solutions vendor because of the need for both hardware and software, and Intel has invested in both.”

("Note: The information presented in this article is gathered from the internet and is provided as a reference for educational purposes. It does not signify the endorsement or standpoint of our website. If you find any content that violates copyright or intellectual property rights, please inform us for prompt removal.")

Online messageinquiry

reading
Intel’s Next Gen CPU to Produce at TSMC with 3nm in First Half of Next Year
  Intel’s upcoming Lunar Lake platform has entrusted TSMC with the 3nm process of its CPU. This marks TSMC’s debut as the exclusive producer for Intel’s mainstream laptop CPU, including the previously negotiated Lunar Lake GPU and high-speed I/O (PCH) chip collaborations. This move positions TSMC to handle all major chip orders for Intel’s crucial platform next year, reported by UDN News.  Regarding this news, TSMC refrained from commenting on single customer business or market speculations on November 21st. Intel has not issued any statements either.  Recent leaks of Lunar Lake platform internal design details from Intel have generated discussions on various foreign tech websites and among tech experts on X (formerly known as Twitter). According to the leaked information, TSMC will be responsible for producing three key chips for Intel’s Lunar Lake—CPU, GPU, and NPU—all manufactured using the 3nm process. Orders for high-speed I/O chips are expected to leverage TSMC’s 5nm production, with mass production set to kick off in the first half of next year, aligning with the anticipated resurgence of the PC market in the latter half of the year.  While TSMC previously manufactured CPUs for Intel’s Atom platform over a decade ago, it’s crucial to note that the Atom platform was categorized as a series of ultra-low-voltage processors, not Intel’s mainstream laptop platform. In recent years, Intel has gradually outsourced internal chips, beyond CPUs, for mainstream platforms to TSMC, including the GPU and high-speed I/O chips in the earlier Meteor Lake platform—all manufactured using TSMC’s 5nm node.  Breaking from its tradition of in-house production of mainstream platform CPUs, Intel’s decision to outsource to TSMC hints at potential future collaborations. This move opens doors to new opportunities for TSMC to handle the production of Intel’s mainstream laptop platforms.  It’s worth noting that the Intel Lunar Lake platform is scheduled for mass production at TSMC in the first half of next year, with a launch planned for the latter half of the year, targeting mainstream laptop platforms. Unlike the previous two generations of Intel laptop platforms, Lunar Lake integrates CPU, GPU, and NPU into a system-on-chip (SoC). This SoC is then combined with a high-speed I/O chip, utilizing Intel’s Foveros advanced packaging. Finally, the DRAM LPDDR5x is integrated with the two advanced packaged chips on the same IC substrate.
2023-11-22 11:18 reading:1932
Intel’s CEO Envisions Over One Hundred Million AI PC Shipments in Two Years
  On November 7th, Intel held its “Intel Innovation Taipei 2023 Technology Forum”, with CEO Pat Gelsinger highlighting the healthy state of PC inventory. He also expressed optimism about the injection of several more years of innovative applications and evolution in PCs through AI.  Intel Aims to Ship over One Hundred Million AI PC within the Next Two Years  Gelsinger expressed that the PC inventory has reached a healthy level, and he is optimistic about the future growth of AI PCs, which are equipped with AI processors or possess AI computing capabilities. He anticipates that AI will be a crucial turning point for the PC industry.  Additionally, Gelsinger stated that the server industry may have seemed uneventful in recent years, but with the accelerated development of AI, it has become more exciting. AI is becoming ubiquitous, transitioning from the training phase to the deployment phase, and various platforms will revolve around AI.  Gelsinger expressed his strong confidence in Intel’s position in the AI PC market, expecting to ship over one hundred million units within two years.  Intel’s Ambitious Expansion in Semiconductor Foundry Landscape  Intel is actively promoting its IDM 2.0 strategy, with expectations from the industry that the company, beyond its brand business, has advanced packaging capabilities to support semiconductor foundry operations. In the future, Intel is poised to compete with rivals such as TSMC and Samsung.  Gelsinger noted that some have viewed Intel’s plan of achieving five technical nodes in four years as “an ambitious endeavor.” However, he emphasized that Intel remains committed to its original goal of advancing five process nodes within four years.  The company’s foundry business has received positive responses from numerous potential customers, and while it may take three to four years for significant expansion, the advanced packaging aspect may only require two to three quarters to get on track.  This transformation marks a significant shift for the company, setting new standards in the industry. Intel is making steady progress in its four-year plan to advance five nodes, and Moore’s Law will continue to extend. The construction of Intel’s new factories is also ongoing.  According to Intel’s roadmap, Intel 7 and Intel 4 are already completed, Intel 3 is set for mass production in the latter half of this year, and Intel 20A and 18A are expected to enter mass production in the first and second halves of next year, respectively.
2023-11-08 16:10 reading:1526
Intel, Facebook working on cheaper AI chip
 Intel and Facebookare working together on a new cheaper Artificial Intelligence (AI) chip that will help companies with high workload demands.At the CES 2019 here on Monday, Intel announced "Nervana Neural Network Processor for Inference" (NNP-I)."This new class of chip is dedicated to accelerating inference for companies with high workload demands and is expected to go into production this year," Intel said in a statement.Facebook is also one of Intel's development partners on the NNP-I.Navin Shenoy, Intel Executive Vice President in the Data Centre Group, announced that the NNP-I will go into production this year.The new "inference" AI chip would help Facebook and others deploy machine learning more efficiently and cheaply.Intel began its AI chip development after acquiring Nervana Systems in 2016.Intel also announced that with Alibaba, it is developing athlete tracking technology powered by AI that is aimed to be deployed at the Olympic Games 2020 and beyond.The technology uses existing and upcoming Intel hardware and Alibaba cloud computing technology to power a cutting-edge deep learning application that extracts 3D forms of athletes in training or competition."This technology has incredible potential as an athlete training tool and is expected to be a game-changer for the way fans experience the Games, creating an entirely new way for broadcasters to analyse, dissect and re-examine highlights during instant replays," explained Shenoy.Intel and Alibaba, together with partners, aim to deliver the first AI-powered 3D athlete tracking during the Olympic Games Tokyo 2020."We are proud to partner with Intel on the first-ever AI-powered 3D athlete tracking technology where Alibaba contributes its best-in-class cloud computing capability and algorithmic design," said Chris Tung, CMO, Alibaba Group. 
2019-01-09 00:00 reading:2736
Israel Approves $185 Million Grant for Intel Fab
The Israeli parliamentary finance committee approved a $185 million grant to Intel in return for meeting job creation targets and local contract guarantees.Last May, Intel announced it would spend $5 billion over two years to upgrade its Fab 28 in Kiryat Gat, Israel, from 22nm to 10nm production technology.Israel's grant is conditional on Intel meeting its already announced commitment to hire 250 new staff at the fab, and on awarding contracts worth around $560 million to local suppliers.Earlier this month, Ann Kelleher, Intel’s senior vice president and general manager of manufacturing and operations, said the company was planning for manufacturing site expansions in Oregon, Ireland and Israel, with multi-year construction activities expected to start in 2019.In a blog post, Kelleher said, “Having additional fab space at-the-ready will help us respond more quickly to upticks in the market and enables us to reduce our time to increased supply by up to roughly 60%. In the weeks and months ahead, we will be working through discussions and permitting with local governments and communities.”Kelleher said it was part of the company’s strategy to prepare the company’s global manufacturing network for flexibility and responsiveness to demand. As part of this, the company is spending to expand its 14nm manufacturing capacity, made progress on the previously announced schedule for the Fab 42 fit-out in Arizona, and located development of a new generation of storage and memory technology at its manufacturing plant in New Mexico.Kelleher also said that Intel would also supplement its own manufacturing capability with selective use of foundries for certain technologies "where it makes sense for the business." The company had already been doing this but will do so more as it aims to address a broader set of customers beyond the PC and into a $300 billion market for silicon in cars, phones, and artificial intelligence (AI) based products.
2019-01-04 00:00 reading:1384
  • Week of hot material
  • Material in short supply seckilling
model brand Quote
BD71847AMWV-E2 ROHM Semiconductor
TL431ACLPR Texas Instruments
CDZVT2R20B ROHM Semiconductor
MC33074DR2G onsemi
RB751G-40T2R ROHM Semiconductor
model brand To snap up
TPS63050YFFR Texas Instruments
STM32F429IGT6 STMicroelectronics
BU33JA2MNVX-CTL ROHM Semiconductor
IPZ40N04S5L4R8ATMA1 Infineon Technologies
ESR03EZPJ151 ROHM Semiconductor
BP3621 ROHM Semiconductor
Hot labels
ROHM
IC
Averlogic
Intel
Samsung
IoT
AI
Sensor
Chip
About us

Qr code of ameya360 official account

Identify TWO-DIMENSIONAL code, you can pay attention to

AMEYA360 weixin Service Account AMEYA360 weixin Service Account
AMEYA360 mall (www.ameya360.com) was launched in 2011. Now there are more than 3,500 high-quality suppliers, including 6 million product model data, and more than 1 million component stocks for purchase. Products cover MCU+ memory + power chip +IGBT+MOS tube + op amp + RF Bluetooth + sensor + resistor capacitance inductor + connector and other fields. main business of platform covers spot sales of electronic components, BOM distribution and product supporting materials, providing one-stop purchasing and sales services for our customers.

Please enter the verification code in the image below:

verification code