Intel FPGA Unit to Buy eASIC
Intel Corp. has bid to buy eASIC for an undisclosed sum, aiming to fold the pioneer of a low-cost alternative to ASICs into its FPGA group. The x86 giant aims to accelerate eASIC’s road map and hire all of its 120 employees, including eASIC CEO Ronnie Vasishta, when the deal closes, probably in the third quarter.

Back in 2015, when it decided to scuttle plans for an IPO, eASIC reported revenues of $67.4 million and a $1.1 million loss. Since that time, it has rolled out products in a 28nm process and talked about plans to support Arm cores. Investors in the 19-year-old company are said to have balked at supporting its plans for future nodes and Arm cores.

“It’s a small company today, but we believe we can scale it and make it a real differentiator over Xilinx,” said Dan McNamara, general manager of Intel’s Programmable Solutions Group, formerly Altera, referring to Intel’s larger rival in FPGAs.

eASIC defined a proprietary approach that takes a wafer with predefined logic and memory and customizes it with interconnects in just one or two mask layers. The resulting structured ASIC has a fraction of the up-front cost of a full ASIC, although, unlike an FPGA, it is not reprogrammable. At one time, Intel and LSI Logic were among rivals offering roughly similar approaches.

“It’s a great technology … we had been looking at strategic partnerships with them when we decided to just buy the company,” said McNamara. “They have a good patent portfolio and a solid team. Our job is to scale this business quickly and get them the investment they have needed,” he added.

If the deal goes ahead as planned, eASIC could be offering Arm cores early next year. Longer term, its next-generation products could offer Intel’s Embedded Multi-Die Interconnect Bridge (EMIB), a proprietary die-to-die package that is a low-cost rival to 2.5D chip stacks.

So far, Intel’s only public products using EMIB have been its FPGAs, which employ it as a bridge to serdes, HBM memory stacks and Xeon processors.
“We have a bunch of sampling products today in different verticals and a lot of products coming in the next couple of years from Intel leveraging EMIB,” he said.

EMIB has not seen use in third-party products, although it was introduced as a differentiator for Intel’s still-small foundry business. Long term, Intel also will consider making eASIC chips. TSMC and GlobalFoundries make eASIC’s latest products.

Dan McNamara (left) with eASIC CEO Ronnie Vasishta outside Intel’s headquarters. (Image: Intel)

The deal is not expected to move the needle significantly in Intel’s competition with Xilinx in FPGAs. Both companies have been reporting solid growth, with annual revenues north of $2 billion for their products, which are used in a wide variety of markets including machine learning.

In its first quarter, Intel reported that FPGA sales were up 17% year over year, and a whopping 150% in the data center, where the chips are used as accelerators in Microsoft’s new servers. In March, Xilinx, which commands 49% of the FPGA market according to Semico Research, announced Everest, a 7nm accelerator for big data and AI that it aims to ship next year.
Release time: 2018-07-13
Adding NoCs To FPGA SoCs
FPGA SoCs straddle the line between flexibility and performance by combining elements of both FPGAs and ASICs. But as they find a home in more safety- and mission-critical markets, they also are facing some of the same issues as standard SoCs, including the ability to move larger and larger amounts of data quickly throughout an increasingly complex device, and the difficulty of verifying and debugging any problems that might show up along the way.

FPGA SoCs are hybrid devices, and they are gaining traction as chipmakers and systems companies are tasked with completing more designs per year, often in markets where protocols and algorithms are still in flux, such as automotive, medical devices and security. Using a standard FPGA can provide the necessary flexibility, but only an ASIC can meet the higher performance requirements, both for new markets and for existing ones such as aerospace. FPGA SoCs offer a compromise that basically splits the difference, providing some of the performance and low-power benefits of an ASIC along with the flexibility to avoid early obsolescence.

But this level of complexity also adds issues that are very familiar to SoC design teams.

“The complexity and capabilities of FPGAs have grown so much that you can build big systems with multiple interfaces and protocols in a single FPGA, and such designs require a fabric to integrate different IP and hardware modules working at various clock domains and data protocols,” said Zibi Zalewski, general manager of Aldec’s Hardware Division.

Modern FPGAs, especially those with hard embedded processors and controllers, fit somewhere between traditional logic FPGAs and ASICs, with a nod in the ASIC direction. “A NoC is definitely needed, because having a NoC simplifies the interfacing from the verification point of view,” Zalewski said.
“A NoC in the design allows the engineering team to manage the top-level interfacing, which can be further used to create a main prototyping channel to the host computer or a transactor for emulation, instead of multiple interfaces that increase the complexity, time and cost of the verification process.”

This has some interesting implications for FPGA SoC tooling. FPGA vendors generally sell their own tools with their hardware, which has made it difficult for EDA vendors to make a significant dent in that market. But as these two worlds begin to merge, there are questions about whether the kind of complex tooling and IP that makes a finFET possible, for example, also may be required in an FPGA SoC, particularly in safety-critical applications where traceability is required.

“When using high-capacity FPGAs for design verification and prototyping purposes, one of the key requests is for appropriate debug capabilities,” said Juergen Jaeger, product management director at Cadence. “However, the architecture of today’s no-NoC FPGAs makes it challenging to provide such debug features, mostly due to the finite (limited) connectivity resources in the FPGA, especially as all the FPGA-internal routing resources are needed to implement the design itself and run it at sufficient performance. Also, debug requires being able to access as many internal design nodes as possible, ideally all of them, and route those probe points to the outside. This is almost impossible, and results in many challenges and debug shortcomings. This is where an FPGA-internal NoC could help, as it would provide the ability to probe many nodes locally, route the data through the NoC to an aggregator without wasting precious FPGA routing resources, and then export the debug data through some standard interface, such as gigabit Ethernet, to the outside world.”

Not all FPGAs will need NoCs, however. “It might help if the design is a data-path-heavy design, moving a lot of data around,” Jaeger said.
“However, if the design is more control-centric, and/or requires the highest possible performance, the inherent latency and non-deterministic nature of a NoC might be counterproductive. It will also require new FPGA design tools that can take advantage of a NoC component inside an FPGA.”

Lower power

ASICs are inherently more power-efficient than FPGAs. Now the question is how much power overhead can be shaved off by combining these devices and utilizing some of the low-power techniques that have been developed for SoCs, such as more efficient signal routing through a NoC.

“The NoC enables FPGA resources to be shared by IP cores and external interfaces, and it facilitates power management techniques,” said Aldec’s Zalewski. “With a NoC, the FPGA logic can be divided into regions, each of which can be handled by an individual NoC node, called a router, and selectively turned off into sleep mode if not used.”

This notion of flexibility is what drove the formation of the CCIX Consortium, which was founded to enable a new class of interconnect focused on emerging acceleration applications such as machine learning, network processing, storage offload, in-memory database and 4G/5G wireless technology.

The standard is meant to allow processors based on different instruction-set architectures to extend the benefits of cache-coherent, peer processing to a number of acceleration devices, including FPGAs, GPUs, network/storage adapters, intelligent networks and custom ASICs.

This is especially key when using an FPGA to accelerate a workload. Anush Mohandass, vice president of marketing at NetSpeed Systems, noted that during the Hot Chips conference a few years ago, Microsoft said it wanted to accelerate image search in Bing using FPGAs rather than running it on a regular server. “They found higher efficiency and lower latency using FPGA acceleration for images, so that’s a place where FPGAs can come into the forefront.
Instead of using it as general-purpose compute, you use it for acceleration.”

In fact, Mohandass suggests this is the genesis of the CCIX movement. “Even when Microsoft did it, they said, ‘We have the Xeon processor, that’s the main CPU, that’s the main engine; when it detects something that the FPGA can do, it offloads it to the FPGA.’ If that is the case, why should you treat the accelerator as a second-class citizen? In CCIX, acceleration literally has the same privileges as your core compute cluster.”

There are other technical issues with today’s advanced FPGAs that may benefit from the structure of a NoC, as well.

“Each FPGA fabric can look like an SoC just in terms of sheer gate count and complexity,” said Piyush Sancheti, senior director of marketing at Synopsys. “But now that you have all this real estate available, you’re obviously jamming more function into a single device, and that’s creating multifunctional complexity as well as things like clocking. We see that clocking structures in FPGAs are becoming much more complex, which creates a whole bunch of new issues.”

IP reuse

A NoC simplifies design reuse, as well. “Typically, if the design is in any kind of an SoC environment, whether that’s implemented on an ASIC or an FPGA, the more IPs that are integrated, the more asynchronous clocks there are in the design,” Sancheti said. “There may be a PCIe running at 66 MHz, and there may be other aspects of the design that are running at a much higher frequency, and these by design are not synchronous with each other. What that means, essentially, is that there is logic operating at different frequencies, but this logic is communicating with each other. This causes clock-domain-crossing issues.
How do you make sure that when a signal goes from a fast clock domain to a slow one, and vice versa, the signal is reliable and you don’t have metastable signals, where essentially the timing of those signals is not completely synchronized?”

Just as in an SoC design, a very complex synchronization scheme is needed, along with the tools and methodologies to ensure the proper synchronization is in place. “Everybody who’s doing anything more than jelly-bean FPGAs has a complete methodology around clock-domain-crossing verification, which is actually somewhat new to the FPGA design community,” he said. “If you map all of these challenges to design flows and methodologies, there are new things being added to their flows that historically they didn’t need to worry about, purely because they didn’t have that many IPs and they didn’t have that many clock domains to deal with. It goes back to the simplicity of the design and the end application. As FPGAs become more SoC-like, unfortunately they have to deal with all the challenges of doing SoC design.”

Bridging the gap

So are today’s FPGA SoCs enough like traditional digital SoCs that all the same rules apply for a network-on-chip? The answer appears to be somewhat, but not completely.

“Both of the main FPGA vendors have proprietary network-on-chip tools, and if a user chooses to use one of those, they can hook up their functions using a form of network-on-chip,” said Ty Garibay, CTO of ArterisIP. “It is more of a conceptual approach to the system. Does it look enough like a standard SoC that it makes more sense to think of it as having a NoC as the connectivity backbone? Many FPGA applications do not. They look a lot more like networking chips or backbone chips that are fundamentally data flow. Data comes in on the left, you have a whole bunch of munging units, and data goes out on the right. That is not a traditional SoC.
That’s a normal network processor or baseband modem or something like that, where it’s a data-flow chip. So in those types of FPGA soft designs, there’s no need for a network-on-chip.”

But if it conceptually looks like a bunch of independent functional units that communicate with each other and are controlled generally by a central point, then it does make sense to have those connected with a soft network-on-chip, he said. “The next generation of high-performance FPGAs is expected to contain hard NoCs built into the chip, because they are getting to the point where the data flow is at such a high rate, especially when you have 100-gigabit SerDes and HBM2, that trying to pipe a terabit or two per channel through soft logic essentially uses all the soft logic, and you’ve got nothing left to be processing with.”

As a result, that bandwidth is going to require a hardening of the data movement, enforced in much the same way that processing is enforced in hard DSPs or hard memory controllers. Successive generations of FPGAs may be expected to look like a checkerboard of streets, where the streets are hard 128-, 256-, or 512-bit buses that go from end to end in one or two cycles and don’t use up any soft logic to do it.

“Along with this would be the synthesis function that allocates on-ramps and off-ramps to those channels as part of hardening the function onto the FPGA, because we’re moving so much data around that I just don’t see how they can continue to do that in soft logic,” Garibay said. “That will be the coming of real NoCs onto FPGAs, because NoCs are always a good idea.”
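The clock-domain-crossing hazard Sancheti describes is classically handled with a two-flop synchronizer: the first flop may sample at the wrong moment and go metastable, and the second gives it a full cycle to settle before downstream logic looks at it. The sketch below models that behavior in Python; the class name and the coin-flip model of metastable resolution are illustrative assumptions, not any vendor's CDC library cell.

```python
import random

class TwoFlopSynchronizer:
    """Models a two-stage flip-flop synchronizer moving a single-bit
    signal into a new clock domain (an illustrative model, not a
    vendor library cell)."""

    def __init__(self):
        self.stage1 = 0  # first flop: may go metastable
        self.stage2 = 0  # second flop: what downstream logic sees

    def tick(self, async_in, setup_violated=False):
        """Advance one destination-domain clock edge."""
        # The second flop samples the first, giving a metastable value
        # a full clock period to resolve before anyone observes it.
        self.stage2 = self.stage1
        # On a setup/hold violation the first flop can resolve to
        # either value; model that uncertainty as a coin flip.
        self.stage1 = random.choice([0, 1]) if setup_violated else async_in
        return self.stage2


sync = TwoFlopSynchronizer()
outputs = [sync.tick(1) for _ in range(3)]  # hold the input high
print(outputs)  # → [0, 1, 1]: two cycles of synchronization latency
```

The cost of the structure is exactly what the model shows: the crossing signal arrives two destination-domain cycles late, which is why NoCs and CDC-aware tools, rather than ad hoc synchronizers, become attractive as the number of clock domains grows.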
Release time: 2018-07-03
Embedded FPGA Supplier Joins TSMC IP Alliance
TI Claims to Obsolete FPGAs for Embedded Apps
Texas Instruments claims to have made the field-programmable gate array obsolete for embedded industrial applications such as servo-motor control. As the world’s largest industrial-semiconductor manufacturer, TI may be uniquely positioned to make that judgment, but that doesn’t necessarily mean FPGAs are going away.

The short of it is that using TI’s C2000 industrial microcontroller with its DesignDRIVE fast-current-loop software provides a 460-nanosecond current loop that can eliminate the need for FPGAs in many embedded industrial control applications. The long of it is that FPGAs still have life in other applications, and TI’s approach is not the only one that offers efficiencies over the combined use of a microcontroller and an FPGA, according to Tom Hackenberg, principal analyst for embedded processors at IHS Markit.

“Energy efficiency in motors is a driver of innovation, and current-loop performance, or torque response, is the fundamental [determinant] of motor-drive performance,” Hackenberg said in an exclusive interview. (TI makes the case in a paper: “At the heart of the matter of improving servo drive performance is the system’s current-loop or torque response performance.”) “The need to enhance this responsiveness with an FPGA would likely complicate the architecture, as the pin count will increase along with the cost and space requirements,” Hackenberg added. “The C2000 microcontroller, which is highly integrated with several on-chip features, would be a favorable solution for motor-drive manufacturers.”

Hackenberg noted that servo drives are key to a multitude of industrial applications. “In robotics and machine tools, for instance, IHS Markit predicts substantial shipment growth of GMC [generic motion control] servo drives into these applications, with average year-to-year growth of 17.1 percent and 7.1 percent, respectively, through 2021,” he said.
“Given all considerations, I would estimate that the C2000 is a very competitive solution that is likely to eliminate the need for additional logic in some servo applications, saving designers significant cost and effort. But it is not likely to eliminate the market for FPGAs or other configurable-logic solutions, due to the wide variety of engineering skill sets, supplier relationships, valuable existing IP, and overall device-design considerations that enable a continued market for alternative competitive solutions.”

TI notes that the C2000 with DesignDRIVE delivers system-on-chip functionality, greatly simplifying and lowering the cost of drive-control system development. Using the C2000 with the submicrosecond-current-loop software enables higher control performance and board-space savings while simplifying thermal-budget design, according to TI. The approach offers subcycle pulse-width modulation (PWM) and improved control-loop bandwidths (potentially tripling a given motor’s torque response, according to TI), and it needs just 460 ns for field-oriented control processing, replacing traditional proportional-integral control for greater stability at higher speeds. The free fast-current-loop software will persuade most motor-control designers to nix costly FPGAs, the company argues.

“The use of the C2000 with DesignDRIVE software does seem to me to be an elegant solution for servo-motor control, and I have no reason to dispute any of its performance advantages,” Hackenberg said, adding that “a hands-on demonstration” of DesignDRIVE’s use for optimizing control of a servo motor had revealed it to be “an extremely quick and painless process.” Nonetheless, there are other considerations that might lead designers to use FPGAs, he said.

“First, many servo-control applications can be handled by just a microcontroller or, for significant performance demands, an embedded microprocessor.
The use of an FPGA is often only a solution to enable a single processor to control multiple servos, such as [for] multiple axes of control. There may still be designs where a single microcontroller, even one as efficient as the C2000, is insufficient for a number of servos and/or additional motor controls or additional applications. An ASIC-, ASSP-, or FPGA-based solution may be required for this level of complexity,” Hackenberg said.

“Second, many suppliers of configurable ICs such as FPGAs now offer their own configurable SoCs, which include the controller on the configurable chip, thus eliminating many of the design disadvantages highlighted by TI,” he said. “These solutions can even include multiple asymmetric processor cores to enable a real-time processor for the servo control and an applications processor for the application software, and possibly even an additional microcontroller core for sensor fusion or other application demands.

“These are often costlier solutions, so the C2000 may or may not be competitive with them, depending on the advantages of the additional integration” in a given application, Hackenberg said. “While the C2000 may still win in overall efficiency, even compared to servo-application-optimized solutions, there may be cases where it is not the most optimized solution.” For instance, “there are a number of semiconductor suppliers making ASICs and ASSPs, specifically targeting servo controls, that also offer efficiencies over the combined use of a microcontroller and an FPGA.”

Existing intellectual-property investments are another consideration. “If a designer has significant investment in optimized logic IP, often based in an FPGA or ASIC, the cost efficiencies [of TI’s C2000-based solution] may be offset by the loss of existing IP based on hardware,” Hackenberg said.
Further, “the application may cost-effectively utilize the [existing] servo controller solution for other device application features, for instance an array of non-servo-related sensors such as pressure or proximity sensors for safety, that can have direct feedback to the servo controller.”  So while TI’s C2000 industrial MCU with DesignDRIVE software and fast current loop can potentially obsolete FPGAs in many or even most motor control applications, it likely cannot optimally serve all.
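The current loop at the center of this debate is, in essence, a proportional-integral regulator executed fast enough (460 ns, in TI's case) to shape a motor's torque response. The sketch below models one such regulator driving a simple resistive-inductive phase in discrete time; the class name, gains, sample period, and back-calculation anti-windup scheme are all illustrative assumptions, not TI's DesignDRIVE implementation.

```python
class PICurrentController:
    """Discrete PI regulator for one axis of a motor current loop.
    Gains, limits, and the anti-windup scheme are illustrative,
    not DesignDRIVE values."""

    def __init__(self, kp, ki, dt, v_limit):
        self.kp = kp            # proportional gain
        self.ki = ki            # integral gain
        self.dt = dt            # control period, seconds
        self.v_limit = v_limit  # voltage command clamp
        self.integral = 0.0

    def update(self, i_ref, i_meas):
        """One control step: current error in, voltage command out."""
        error = i_ref - i_meas
        self.integral += error * self.dt
        v = self.kp * error + self.ki * self.integral
        if abs(v) > self.v_limit:
            # Clamp the command and back-calculate the integrator so it
            # does not keep accumulating while the output is saturated.
            v_clamped = max(-self.v_limit, min(self.v_limit, v))
            self.integral -= (v - v_clamped) / self.ki
            v = v_clamped
        return v


# Drive a simple R-L phase model toward a 5 A current reference.
ctrl = PICurrentController(kp=2.0, ki=500.0, dt=1e-4, v_limit=24.0)
R, L, i = 1.0, 1e-3, 0.0
for _ in range(500):
    v = ctrl.update(i_ref=5.0, i_meas=i)
    i += (v - R * i) / L * ctrl.dt  # forward-Euler step of di/dt = (v - R*i)/L
print(round(i, 2))  # → 5.0
```

Shortening `dt` raises the usable loop bandwidth, which is the essence of TI's argument: if the microcontroller alone can close this loop in well under a microsecond, the FPGA that used to run it becomes unnecessary in many drives.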
Release time: 2017-07-07
