Startup Beats Samsung in Foldables
Samsung grabbed world headlines on Wednesday when it announced plans to ship a foldable smartphone sometime next year. But a little-known rival was actually first to announce such a product a few weeks ago and aims to beat the South Korean giant again by shipping it in December.

Startup Royole started ramping a Gen6 fab for flexible displays at its Shenzhen headquarters in June. Its whopping $1.7 billion in private financing and staff of 2,000+ are matched by its outsized ambitions to become a conglomerate like Samsung, selling both leading-edge displays and the consumer devices that use them.

Royole's FlexPai tablet is based on its 7.8-inch AMOLED display that folds into the size of a smartphone. The flexible display is in pilot production in a 1.1 million-square-foot fab that will ultimately be able to produce a million 8-inch panels a month.

An artist's rendering of the startup's FlexPai handset. (Images: Royole)

While China is home to other display makers, Royole is believed to be the farthest along in having commercially available flexible displays. If successful, Royole could become a natural supplier for China's top-tier handset makers such as Huawei, Oppo, or Xiaomi if they want a foldable to match Samsung's.

To kickstart its display business, the startup has already started shipping tablets, headsets, and even hats and T-shirts using its displays.

"We won't do anything that's traditional," said Ze Yuan of Royole with a laugh. "We are targeting niche markets with high-margin products and form factors that will never be anything the market has seen before," added the R&D manager, who earned his Ph.D. in electrical engineering from Stanford.

Royole's foldable display is roughly equivalent to Samsung's, said Ze Yuan, who attended the Samsung event where it was first discussed.

FlexPai sports a 7.8-inch, 1,920 x 1,440-pixel screen with 308 pixels/inch, compared to Samsung's 7.3-inch, 1,536 x 2,152 display with 420 dots/inch. Royole tested its display for bending more than 200,000 times. Samsung said that it has tested bending its display "hundreds of thousands of times."

Neither company would comment on the thickness or power consumption of their displays or on details of any novel processes or materials used to make them.

Royole uses display driver ICs from two unnamed sources that enable building larger screens of different sizes. MagnaChip, a display driver IC company in South Korea, expressed optimism about foldable handsets earlier this year.

Although foldable, the FlexPai display has "a solid feeling" when users interact with the proprietary touch sensors that the company has been developing since 2014, said Ze Yuan.

Just for fun, Royole rolled out a shirt-and-hat combo using its flexible displays that sells for $1,400.

The FlexPai handset includes 20- and 16-Mpixel telephoto and wide-angle cameras "that can be bent to capture objects at unique angles." Samsung did not comment on cameras on its device.

FlexPai uses Qualcomm's Snapdragon 8-series SoC; supports microSD, fingerprint ID, USB-C charging, and stereo speakers; and sells for $1,469 with 256 GB of memory. Samsung did not detail its handset's specs or price, but one analyst estimated that it will cost $1,500.

One big difference between the two handsets is that Samsung uses two displays — a narrow 4.58-incher on the outside for use while folded and the larger screen inside.
FlexPai wraps its screen around the outside of the device so that, when closed, the main screen becomes three screens — primary (16:9, 810 x 1,440), secondary (18:9, 720 x 1,440), and edge (21:6, 390 x 1,440) displays. (A quick arithmetic check of these figures appears at the end of this article.) Both companies say that they will support modes that let content flow across multiple screens or let them act independently.

Samsung is working with Google to enable the next version of Android to handle some of the display modes natively. Royole created its own Android variant, called Water OS, to run Android apps in ways that make sense for its device.

"I'm very happy that someone from Google was at the Samsung event because deep support for foldables will be needed from Android," said Ze Yuan.

Samsung and Google are working on an emulator so that developers can start writing apps for the systems before they are launched. One app developer is already working with the duo on an app to be available at launch.

FlexPai is aimed, in part, at use by app developers. "Emulators are nice, but any software developer wants actual hardware to see how it works," said Ze Yuan.

Both Samsung and Royole face the pioneer's risk that foldable handsets may be the next big thing in mobile — or a narrow, short-lived niche.

The audience of developers and media at the Samsung event was clearly enthusiastic about its foldable. However, that doesn't mean that mainstream consumers will find the products compelling. Historically, hybrid products have suffered from tradeoffs. For example, users could be put off by the relatively high cost and power consumption of foldables.

Last year, China's ZTE rolled out a foldable handset based on two displays on a hinge. The device initially received mixed reviews, suggesting that foldables could remain a novelty with little market traction.

Even if they get some initial traction, it could take years before most mainstream apps are tailored for their screen sizes and capabilities. Indeed, users are likely to find that many favorite apps run in awkward or unexpected ways, at least in the beginning.

Not surprisingly, Ze Yuan is bullish. "It will take time, and it could be a painful process, but I think it will go mainstream because, with 5G and AI, people will want to access more and more information," he said.

Ironically, the R&D manager may not be an ideal customer himself, in part because he regularly travels between Royole's offices in Shenzhen and Silicon Valley. "I have three cellphones with separate work, personal, and China numbers," he said.

Royole has attracted talent from around the globe, said Ze Yuan, who manages the R&D team focused on core display and sensor technologies. The company is really just getting started, he said, noting that it has other systems in the works using its flexible displays and that it has other display and sensing technologies still in the lab.
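As a quick sanity check on the published numbers: the 308-ppi figure follows directly from the 7.8-inch diagonal and the 1,920 x 1,440 resolution, and the three folded screen widths tile the full panel exactly. A minimal sketch in Python, using only the figures reported in this article:

```python
import math

# FlexPai panel specs as reported in the article
diag_in = 7.8
w_px, h_px = 1920, 1440

# Pixel density: diagonal pixel count divided by diagonal inches
ppi = math.hypot(w_px, h_px) / diag_in
print(f"{ppi:.0f} ppi")  # -> 308, matching Royole's published spec

# When folded, the 1,920-pixel width splits into three screens that
# share the 1,440-pixel height
primary, secondary, edge = 810, 720, 390
assert primary + secondary + edge == w_px  # the three widths tile the panel
```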
'No one can take us down,' China's state-run media claims as trade war heats up
U.S. President Donald Trump, left, and Xi Jinping, China's president, shake hands during a news conference at the Great Hall of the People in Beijing, China, on Thursday, Nov. 9, 2017.

China's state-run media outlets sounded a confident tone in Monday editorials and warned that U.S. trade pressure against the country would not only strain bilateral relations between the two nations but could negatively affect the American economy and wider global markets.

Those reactions followed the latest round of mutual tariffs targeting U.S. and Chinese goods taking effect Monday, escalating trade tensions between the world's two largest economies.

The U.S. levied tariffs of 10 percent on $200 billion of Chinese products, with the rate set to increase to 25 percent by the end of the year. The Chinese government retaliated with taxes on 5,207 U.S. imports worth about $60 billion. Before Monday's penalties took effect, the U.S. and China had already applied tariffs to $50 billion of each other's goods.

In Monday editorials, Chinese state-controlled newspapers such as the Global Times and China Daily were quick to claim that Beijing had stayed calm and fair in the face of Washington's trade pressure.

If the U.S. is using trade pressure as a negotiating tactic against Beijing, it would jeopardize both countries' economies, and the effects would likely spill over to the larger global economy, an editorial published by China Daily on Monday said.

"The US' unilateral trade moves have not only damaged normal China-US trade activities, but also could stunt world economic growth," the state-controlled newspaper reported. "So if the Trump administration continues to stick to its unilateral and protectionist stance, and refuses to respect the fundamental norms of mutual respect and consultation, it would be difficult for the two sides to make substantial progress in any future trade talks."

The editorial added that China has remained sincere in its efforts for a fair trading agreement, in contrast to U.S. "trade bullying."

Other media editorials highlighted the broader erosion of U.S.-China relations as a result of Washington's accusations about unfair business and regulatory practices benefiting Beijing.

For one, China Daily claimed in another editorial that America's accusations of China engaging in "theft" and forced transfer of intellectual property and other "unfair" trade practices may strain the longstanding consensus between the two countries.

As such, if U.S. trade measures against China were meant as a bargaining chip in negotiations, "they may actually end up rendering further engagement even more difficult, if not completely impossible," the editorial said.

For its part, the Global Times, China's hyper-nationalistic Communist Party-run tabloid, called in a Monday editorial for Americans to recognize China's strength in the trade disagreements.

"China is doing what it should. China is honest and principled and a major trade power with intensive strengths. No one can take us down," it said.
U.S.-China trade war heats up with semiconductor industry caught in the middle
U.S. Government Imposes Tariffs on $200 Billion of Goods and China Retaliates on $60 Billion of Goods

Earlier this week, the U.S. Trade Representative (USTR) announced a 10 percent tariff on $200 billion in imports from China, including more than 90 tariff lines central to the semiconductor industry.

The 10 percent tariff will take effect on September 24, 2018, and rise to 25 percent on January 1. These tariff lines will cost SEMI's 400 U.S. members tens of millions of dollars annually in additional duties; counting the products included in the previous rounds of tariffs, the total estimated impact exceeds $700 million annually. China has already announced that it will respond with tariffs on $60 billion worth of U.S. goods. In his notice, President Trump said that the U.S. will impose tariffs on another $267 billion worth of goods if China retaliates.

The U.S. government removed 279 tariff lines in total, including three lines that impact our industry: silicon carbide, tungsten, and network hubs used in the manufacturing process.

As we've noted, intellectual property is critical to the semiconductor industry, and SEMI strongly supports efforts to better protect valuable IP. However, we believe that these tariffs will ultimately do nothing to address the concerns with China's trade practices. This sledgehammer approach will introduce significant uncertainty, impose greater costs, and potentially lead to a trade war. This undue harm will ultimately undercut our companies' ability to sell overseas, which will only stifle innovation and curb U.S. technological leadership.

Product Exclusion Process – List 2

USTR formally published the details of the product exclusion process for products subject to the List 2 China 301 tariffs (the $16 billion tariff list). If your company's products are subject to tariffs, you can request an exclusion.

In evaluating product exclusion requests, the USTR will consider whether a product is available from a source outside of China, whether the additional duties would cause severe economic harm to the requestor or other U.S. interests, and whether the product is strategically important or related to Chinese industrial programs (such as "Made in China 2025").

The request period ends on December 18, 2018, and approved exclusions will be effective for one year, applying retroactively to August 23, 2018. Because exclusions will be made on a product basis, a particular exclusion will apply to all imports of the product, regardless of whether the importer filed a request.
AI Formats May Ease Neural Jitters
A group of mainly chip vendors released a draft standard that aims to act as an interface between software frameworks for creating neural network models and the hardware accelerators that run them. It shares goals with a separate effort started as an open-source project earlier this year by Facebook and Microsoft.

The Khronos Group is seeking industry feedback on a preliminary version of its Neural Network Exchange Format (NNEF). NNEF initially aims to be a single file format that can describe any trained neural network model to any chip performing inference tasks with it.

"We have dozens of training frameworks and potentially hundreds of inference engines on the way," said Neil Trevett, president of Khronos. "That's a horrible fragmentation."

The working group that created the draft consists of more than 30 mainly semiconductor companies, including AMD, ARM, Intel, Imagination, Qualcomm, and Samsung. The chip vendors see NNEF as a way to share the effort of creating a single software target for their chips, something many are already doing internally.

Web giants such as Amazon, Google, and others each develop their own software frameworks for creating neural-net models. They see them as strategic tools to gain an edge in efficiency and attract developers.

To jumpstart support, Khronos created open-source versions of programs that can export NNEF files from Caffe and Google's TensorFlow, two popular frameworks. "We will need a bunch more exporters, but we have those two available now … we will do some paid RFQ-based projects with partners to develop more exporters," said Trevett.

So far, the web giants seem to be coalescing around their own effort, called the Open Neural Network Exchange format (ONNX). The open-source project had a version 1.0 release earlier this month and now has support from Amazon as well as a handful of hardware companies such as AMD, ARM, Huawei, IBM, Intel, and Qualcomm.

ONNX aims to translate models created with any of a dozen competing software frameworks into a graph representation. Trevett said that Khronos is open to collaborating with the effort but pointed out that NNEF differs in two key ways that are important to chip vendors.

Technically, ONNX is a flat representation of operations as a graph. NNEF can do that, too, but the Khronos approach also supports compound operations that fuse nodes in a graph. Packing and unpacking operations in this way is one approach that chip vendors plan to use to execute operations efficiently, he said.

Perhaps more importantly, Khronos has had bad experiences with open-source projects that can change rapidly, sometimes breaking hardware compatibility. For example, changes in the open-source LLVM intermediate representation for processor compilers twice broke compatibility with the group's OpenCL spec. "People had invested in hardware, and road maps were being broken — this was extremely painful" for chip vendors, said Trevett.

Khronos created its own spec, SPIR-V, in place of LLVM, updating it periodically to keep pace with the open-source software. "Our experience with LLVM is that we need a stable spec with multi-company governance," said Trevett, noting that the group "is up for" the work of keeping that spec current with software shifts.

NNEF requires an interface on the hardware side as well. The initial prototype uses Khronos' OpenVX interface for computer vision and its recently released neural network extensions.
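For a concrete sense of what the exchange format looks like, here is a minimal sketch of an NNEF-style textual graph description for a single convolution layer, emitted from Python. It is modeled on examples published with the Khronos provisional spec; the exact operation names and syntax are assumptions pending the final standard.

```python
# A sketch of NNEF's textual graph description, written from Python for
# illustration. The snippet follows examples in the Khronos provisional spec;
# operation names and syntax are provisional and may change.
NNEF_GRAPH = """\
version 1.0;

graph main( input ) -> ( output )
{
    input = external(shape = [1, 3, 224, 224]);
    filter = variable(shape = [32, 3, 5, 5], label = 'conv1/filter');
    conv1 = conv(input, filter);
    output = relu(conv1);
}
"""

# Writing the description to disk yields a file that an NNEF-aware inference
# engine (or the open-source syntax parser/validator mentioned below) could check.
with open("model.nnef", "w") as f:
    f.write(NNEF_GRAPH)
```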
To use NNEF on smartphones, engineers will have to develop interfaces to Apple's CoreML or Google's Android Neural Networks API. However, both are openly published formats, so anyone can do that work, said Trevett. "Once there's a bunch of trained nets in NNEF, anyone benefits from importing them to their hardware," he said.

Interestingly, Nvidia is one of the few chip vendors not participating in the NNEF work so far. "That might change; we haven't made decisions yet," said Trevett, who is also vice president of developer ecosystems at Nvidia. The company "has a lot of internal projects that do this sort of thing and is heads-down in solutions for customers."

To ease the path for chip makers, Khronos also launched an open-source NNEF syntax parser and validator. It lets chip vendors make sure that NNEF files are properly formed and converted for their hardware.

Khronos hopes to stay neutral in any AI battles, giving vendors room to pick the frameworks, APIs, and file formats they prefer, including ONNX. "NNEF support doesn't negate support for other things," he said.

Ultimately, the group hopes that NNEF can also be used as a file format for hardware accelerators used for training and as a way to exchange models among software frameworks. "We see a need for the authoring interchange, but we haven't prototyped it, so we will probably have to add stuff … fundamentally, there's a lot of research in AI, and new neural-net topologies will appear, so NNEF will have to evolve," said Trevett.
2017 Watson AI XPrize Top 10 Revealed at NIPS
The $5 million IBM Watson AI XPrize is revealing the top 10 finalists for the 2017 round and awarding a total of $15,000 in prize money to the top two finishers today (Dec. 8) at the Neural Information Processing Systems conference (NIPS 2017; Long Beach, Calif.).

Amiko AI (Milan), which is upgrading respiratory care with sensor technologies and digital health tools, has come in first and is being awarded the $10,000 top prize for this year. The $5,000 second-place prize goes to aifred Health (Montreal), which is using deep-learning algorithms to personalize treatments for depression. The two top finishers also were scheduled to present detailed descriptions of their projects on stage at the event.

"The 32 judges — who are all independent of IBM, as am I — narrowed the 147 first-year contestants down to 59 second-year contestants [based on] who has made the most progress on the most helpful-to-world-society projects," Amir Banifatemi, prize lead for the IBM Watson AI XPrize and managing partner of K5 Ventures, told EE Times in advance of the announcement at NIPS.

Here are the other top 10 finishers, in alphabetical order:

- BehAIvior (Pittsburgh) is combining data from wearables and smartphones to create an early-warning system that predicts addiction relapses — especially overdoses — with the intent of preventing them.
- Brown University's Human Centered Robotics Initiative (HCRI; Providence, R.I.) is creating a three-phase program to identify the social and moral norms that robots should be designed to internalize.
- DataKind (New York) is developing artificial-intelligence models using high-resolution satellite imagery that can help alleviate poverty in underdeveloped regions by monitoring crops for disease while they can still be saved.
- Deep Drug (Baton Rouge, La.) is working on AI drug-design software that learns from both the successes and the failures of previous clinical trials to shorten the development time for new, more-targeted drugs.
- The EmPrize team at the Georgia Institute of Technology's Design & Intelligence Lab (Atlanta) is aiming for smart virtual tutors that will answer questions, provide feedback, and perform other functions for online education.
- EruditeAI (Montreal) is creating a free peer-to-peer math-tutoring platform to match students who are struggling to understand a given mathematical concept with students who have demonstrated proficiency in that concept.
- Iris.ai (Oslo, Norway) is automating a systematic mapping solution for scientific papers that will help AI researchers with the literature-discovery phase of their projects.
- WikiNet (Quebec City) is working on a solution that learns from past environmental-cleanup efforts to provide expert recommendations for cleaning up other contaminated sites.

The 59 teams selected to move forward from 2017 to 2018 come from Australia, Barbados, Canada, China, France, Germany, India, Israel, Italy, Norway, Poland, the United Kingdom, the United States, and Vietnam. The categories their projects cover include Health & Wellness, Learning & Human Potential, Civil Society, Space & New Frontiers, Shelter & Infrastructure, and Energy & Resources. The criteria used to assess the projects include the standards they intend to set; the performance and scalability of their application; and, most important, their potential to achieve an exponential societal improvement. "The judges this year recognized those teams that have emphasized man-machine collaboration and are furthest along in their projects," said Banifatemi.
In a nod to the rapid pace of breakthroughs in AI, the AI XPrize added a wild-card round this past fall to accommodate teams working on concepts that were not foreseen in the competition's first year. The top 10 finishers among the wild-card teams that applied for inclusion this year will inflate the total field of contenders to 69 in 2018. A second wild-card round will be held next year. In an interview with EE Times when the first wild-card round opened, Banifatemi said that, based on how many wild cards are approved to compete this year and next, "we expect to have half of the total teams in competition by September 2018 moving into 2019." No wild cards will be added in 2019, and at the end of that year, the field will be halved again.

Further prize money will be awarded to the top 10 finishers in 2018 and 2019, with the three top-10 rounds (2017-2019) collectively accounting for $500,000 of the $5 million total allotted for the XPrize. In 2020, at the Grand Prize event on the TED2020 stage, the remaining $4.5 million will be awarded to the top three finalists: $3 million to the first-place finisher, $1 million for second place, and $500,000 for third place. The third-prize winner will be selected with the help of voters at TED2020.
Research institute to tackle hardware security, cyber threats
The Centre for Secure Information Technologies (CSIT) at Queen's University Belfast has launched a new research institute whose goal is to improve hardware security and reduce vulnerability to cyber threats.

According to CSIT, the Research Institute in Secure Hardware and Embedded Systems (RISE), one of four cyber security institutes in the UK, will be a global hub for research and innovation in hardware security over the next five years.

Professor Maire O'Neill, a cryptography expert at Queen's University, has been named director of RISE. She said: "RISE is in an excellent position to become the 'go-to' place for high-quality hardware security research. A key aim is to bring together the hardware security community in the UK and build a strong network of national and international research partnerships.

"We will also work closely with leading UK-based industry partners and stakeholders, transforming research findings into products, services and business opportunities, which will benefit the UK economy."

Funded by EPSRC and the National Cyber Security Centre (NCSC), RISE represents a £5 million investment. It will address cyber threats through four initial component projects, involving Queen's University, the University of Cambridge, the University of Bristol and the University of Birmingham.

Dr Ian Levy, NCSC technical director, said: "I think that the inclusion of hardware-based security capabilities in commodity devices could be a game changer in our fight to reduce the harm of cyberattacks, and so I'm really pleased to see a strong set of initial research projects."

RISE will initially host four research projects, with involvement from academics at the universities of Bristol, Birmingham and Cambridge, as well as work by Prof O'Neill.

Birmingham's project will be led by Professor Mark Ryan, who will be exploring user-controlled hardware security. According to Prof O'Neill, this project will look at roots of trust. "Often, these are proprietary and closed," she explained, "but some attacks have already found weaknesses. Prof Ryan's work will look to make roots of trust more user friendly and to build demonstrators of how they can be used in a range of applications."

Cambridge will be working on IOSec protection and memory safety. Professor Simon Moore, along with Drs Rob Watson and Theo Markettos, will be exploring interfaces such as USB-C. "These interfaces have proved vulnerable," Prof O'Neill said, "so the team wants to go back to scratch to design interfaces which have security built in from the start."

Dr Dan Page from Bristol is running the SCARV project – a side-channel-hardened RISC-V platform. "This will look to create a more open, well-evaluated platform with resistance to attack and which is more flexible to use," Prof O'Neill continued.

Finally, Prof O'Neill will be working on side-channel analysis and trojan detection, with a focus on EDA tools. "I'm looking to build a verification process that allows those not expert in security to find ways to improve their designs.

"We have seen that neural networks can be used to get past countermeasures and access private keys. I'm going to look at ways to prevent this from happening in the future."
Keynoter: Noise Analysis Beats Google Now
Researchers are mining a largely untapped data source — the signals and noise generated by smart-device sensors — to enable technologies that solve the world's hardest human-machine interface problems, Intel Fellow Lama Nachman, director of the Anticipatory Computing Lab, told a keynote audience at SEMI's MEMS & Sensors Executive Congress 2017 (San Jose, Calif.). The resultant applications will accurately detect emotions and anticipate needs, without requiring a Google-like dossier of user habits, she predicted.

"Technology needs to be more active at understanding the needs of the user," Nachman said. "To do that, our job at the Anticipatory Computing Lab is to really understand what type of help you need in any situation."

Reviewing earlier stabs at productivity-enhancing personalized assistants, Nachman praised Apple's Siri and Amazon's Alexa because they kept it simple, only offering a helping hand in response to specific user requests, whereas Microsoft's initial efforts to make ad hoc suggestions to users wound up irritating them at best and breaking their train of thought at worst. She praised Google Now's ability to make ad hoc suggestions to its users that are actually useful (for the most part). The downside to Google Now is the deep knowledge it needs to mine from users' habits with respect to browsing, location, email, purchasing, and other behaviors — a collection of data amounting to a dossier on each user.

Instead, Intel's Anticipatory Computing Lab aims to repurpose the signals and noise produced by the legions of sensors already deployed in smartphones, smart watches and wearables, smart automobiles, and the Internet of Things (IoT) to make ad hoc suggestions that entertain, increase productivity, and even save people's lives — whether or not they are Intel users — all in real time and without a Google-like secret data bank on user habits.

"Intel is taking all the sensor feeds available now and reinventing the way they can help people with volunteered information that is always relevant to the person, what you are doing, and what goals you are trying to achieve," Nachman said. "But to do so, there is a very large set of capabilities that we need to understand, such as emotions, facial expressions, nonverbal body language, personal health issues, and much, much more."

Many of these personal parameters can be gleaned from the normal usage of the sensors built into our smartphones, wearables, and IoT devices — for instance, facial expressions from a smartphone's user-facing camera or the volume of a user's voice. The sensor data can be fused with smart-watch data on pulse rate, activities, location, and more to anticipate a user's actions and needs with unprecedented accuracy, according to Nachman.

"To understand emotions 'in the wild,' so to speak, it is essential to understand, for instance, when you are angry. Even if you are not cursing or yelling, your computer should understand when you are pissed off," said Nachman. "Physical factors like breathing fast can be seen by a user-facing smartphone camera, fast heart rate can be measured by your smart watch, but we need to fuse that with facial expressions and a deeper understanding of how individuals behave."

Besides the aforementioned applications, Intel is pursuing such technologies as caring for elderly people or those with disabilities in real time. Indeed, just about everyone can benefit in some fashion from the "guardian angel" model.
For instance, Nachman admitted to being a serial food burner, especially when she prepares large spreads for parties. "I need my computer to help me stop burning things," she said. "Mechanics, repairmen, and even surgeons need their computers to tell them when they have left a tool inside the location they are repairing before they close it up."

Another essential, according to Nachman, is the perfection of adaptive, personalized learning that engages each user, especially children, in the optimal way for them. Likewise, she claimed that autonomous vehicles need to keep track of what is happening to the people inside the car as well as the environment around the car. "You especially need to understand how comfortable the driver of the car is when he releases control to the autopilot, [gauging] the anxiety level. You also need to keep track of the activities in which the people in the car are engaging, at least insofar as [the activity] affects the occupants' safety."

Noise is the signal

Nachman said OEMs and microelectromechanical-systems (MEMS) sensor makers are ignoring the noise produced by sensors and actuators, and as a result, they are automating functions, such as smartphone camera settings, over which users might want more control. "There is a lot of noise in the environment, but sometimes that is the signal you want to identify," she said.

Smartphone cameras automatically adjust exposure and focus, for instance, assuming that users are only interested in the foreground. But what if the photographer wants to focus on the criminal lurking in the background? Current smartphones let you touch the part of the scene you want exposed properly, but they invariably switch back to the foreground without continuous taps on the background. MEMS sensor makers should, by default, allow users to disable all automatic functions, according to Nachman.

Nachman cited another product category in which the default settings should not be automatic: RFID tags, which are both sensors and actuators. Ordinarily, RFID readers ping the RFID tag with an RF signal, which is harvested and used to actuate a return signal that identifies the product on which the tag is mounted. In her lab, Nachman demonstrated that by analyzing the noise that results when a person stands between the reader and the tag, it is possible to infer customer engagement. "We found that the noise of the human interrupting the RFID ping could be used to find out which item on a shelf is being touched, which one the buyer is interested in the most, and other useful facts for retailers," she said.

Other examples given by Nachman include extracting a person's respiration and heart rates from the noise in the reflection of the wireless signals already saturating the environment from everybody else's smartphones. The lab has also experimented with putting smart nose pads on a person's glasses that could render a noisy nasal version of the user's voice. Using signal processing to remove the nasal noise yields clear voice signals without the use of a microphone.

The lab found that the noise from the ubiquitous magnetometers in smartphones, wearables, and IoT devices could be mined for a variety of contextual data — for instance, whether a person is sitting; standing; walking; exercising; biking; or riding in a car, bus, or airplane. The noise from a smart watch can reveal whether the wearer is talking on the phone, moving a mouse, pushing a button, or stapling.
Gyroscope noise can be used to tell, from just a single finger touch, whether a person is intending to point or zoom, thus obsoleting pinch-to-zoom and allowing simultaneous zooming and clicking with one hand.

Blood pressure measurements — a capability Apple promised in its initial buildup for watches but failed to deliver — can be taken from the noise generated between a smartphone's two cameras as the phone is pressed against the user's skin. The phone can also use the noise in a user-facing camera to measure pupil dilation and thereby infer whether the person is drowsy, anxious, or something in between.

The Anticipatory Computing Lab even developed a way to allow Stephen Hawking to control everything he does with the movement of a single cheek muscle. That movement contains a significant noise element, namely how much control Hawking has over that muscle from day to day (which varies wildly).

There is also a power-savings component to the research. For example, "people have been thinking up all sorts of GPS applications since it became ubiquitous," Nachman noted, since GPS "burns up a lot of battery power that the sensor makers never anticipated. Even worse is how to keep the power consumption down for always-on sensors, which must be able to sense the intended signals plus the noise in between them, and decide when to turn on the application processor, all while keeping power consumption low."

The answers, according to Nachman, are to accelerate the pace of innovation without increasing power consumption by virtue of more configurable smart sensors that know when to turn on the application processor, as well as when to sense noise they were not originally envisioned to sense.
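The common pattern in these examples is to treat a sensor's noise statistics as the feature rather than as something to filter out. Below is a hypothetical sketch of that idea in Python; the synthetic jitter levels and the 0.2 threshold are illustrative assumptions, not figures from Nachman's lab.

```python
import numpy as np

def noise_level(samples, window=50):
    """Per-window standard deviation: a crude 'how noisy is this sensor' feature."""
    usable = samples[: len(samples) // window * window]
    return usable.reshape(-1, window).std(axis=1)

rng = np.random.default_rng(seed=0)
still = rng.normal(0.0, 0.05, 500)    # low magnetometer jitter, e.g. sitting
moving = rng.normal(0.0, 0.40, 500)   # high jitter, e.g. walking

for label, sig in [("still", still), ("moving", moving)]:
    level = noise_level(sig).mean()
    verdict = "moving" if level > 0.2 else "still"  # illustrative threshold
    print(f"{label}: mean window noise = {level:.2f} -> classified {verdict}")
```

A real system would replace the threshold with a trained classifier over many such noise features, but the principle is the same: the variance the signal chain would normally discard carries the context.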
IBM Watson AI XPrize Adds Wild-Card Round
The $5 million IBM Watson AI XPrize competition, which kicked off last year and will end in 2020, was the first of the XPrize contests (14 since 1995) to have a contestant-defined "open" goal rather than a predetermined objective. Now it is also the first XPrize to add a wild card, giving new contestants until Dec. 1 to join the 147 teams that made the first-year cut.

"The total number of teams officially registered stands at 147, out of the total of 870 team submissions that were recorded from more than 9,000 initial interested requests," Amir Banifatemi, prize lead for the IBM Watson AI XPrize, told EE Times. "Given the rapid pace of artificial-intelligence breakthroughs and the possibilities that AI opens to solve grand challenges, we wanted to ensure that teams with new ideas still had the opportunity to participate."

The original teams still in the competition hail from 22 countries in total: Australia, Barbados, Canada, China, the Czech Republic, Ecuador, France, Germany, Hungary, India, Israel, Italy, Japan, the Netherlands, Norway, Poland, Romania, Spain, Switzerland, the United Kingdom, the United States, and Vietnam. Their projects are being evaluated not only for their efficacy in addressing AI challenges but also for their potential social, ethical, and technological impact. Imbuing AI with the ability to understand human emotional cues, for example, could have implications beyond the AI's cognitive-computing capabilities.

Because the goal is left to the contestants' discretion, the teams have proposed ideas for solving problems across a wide swath of disciplines. "Energy Efficiency" projects would reduce greenhouse gases and make landfills smart at separating recyclables. "Health and Wellness" investigations look to head off mental-health problems, diagnose an infant's crying, and improve sleep. "Learning and Human Potential" projects aim to reinvent computer coding, personalized learning, peer-to-peer tutoring, and scalable learning to achieve universal worldwide literacy. Proposals for "Improving Society" would automatically flag "fake news" in social media and get legal information to victims at little cost. "Shelter and Infrastructure" projects aim to meld social development with satellite imagery, predict disasters, manage traffic flows in cities, and assess the structural health of buildings. "Space and New Frontiers" explorations seek to develop neurologically inspired models and automatically propose hypotheses.

"We have been impressed with the level of variety and domain focus so far. Teams are very diverse, [hailing from] startups, academia, large corporations, nonprofits, and more," Banifatemi said. Among the "impressive" entries, he said, are AI applications to "detect crop disease in Ethiopia, detect illegal mining in Congo, model malaria-prone regions in India, predict psychiatric medicine effectiveness, automate project management at scale, [advance] triage emergency medicine, and monitor the structural health of buildings."

The addition of the wild-card teams aims to widen the application domains even further, but the expanded pool will still be subject to the same annual culling process. "Each year, up to 50% of teams will move to the next round provided they reach their milestones and are selected by judges to move forward," Banifatemi said.
"Based on how many wild cards are approved to compete, we expect to have half of the total teams in competition by September 2018 moving into 2019."

The judges also will be picking out favorites over the course of the multiyear competition, distributing $500,000 in total to teams for meeting their stated milestones with outstanding performance. The milestone awards will be made at the judges' discretion rather than follow a strict policy. The finalists in 2020 will receive $3 million for first place, $1 million for second place, and $500,000 for third place at the Grand Prize event on the TED2020 stage. The conference attendees, including the online audience, will have a say in determining the final placement of the three winners.

Banifatemi described the awards system in detail: "We will have 10 teams eligible for the milestone prizes each year, and the top three will receive cash prizes based on the judges' assessment. The milestone prizes in 2017, 2018, and 2019 are part of the prize purse that IBM has committed to. By the final round, in 2020, three teams will have been selected from the 10 milestone-prize candidates in 2019. The judges will have already approved the top three, and the public will weigh in on the final scoring. The judges' score and the public score will be taken into account for selecting the first-, second-, and third-place winners."

The next major event, in December, will be the announcement of which of the currently registered teams (roughly half of the current total) will move on to the next phase. In January, the judges will announce which of the wild-card teams will be added to the roster. In January 2019, the field will be halved again, with survival dependent on the standards proposed and on the AI's performance, scalability, and — most of all — likely worldwide impact.
Implantable Fiber Diagnoses, Treats, Biodegrades in Place
Light is used in medical applications to image, diagnose, and even treat maladies. But externally applied lasers can penetrate no more than centimeters, and sometimes just microns, into the body, depending on the wavelength of the light and the turbidity (opacity) of the targeted tissue. Implanting traditional optical fibers allows therapeutic light to reach the targeted tissue, but the fibers must be surgically removed after use, damaging the surrounding tissue and posing a danger if they break during removal.

Now an electrical engineer and a biomaterials engineer at The Pennsylvania State University have found a way to shine any wavelength of light at any depth into the body with what they say is the first citrate-based flexible biodegradable polymeric step-index optical fiber.

Deciphering all those adjectives reveals that the optical fiber is made from a citric-acid-based organic polymer that is biodegradable (meaning the fiber can safely be left in place after a procedure) and can be fabricated with a conductive core and opaque cladding to deliver any wavelength of light, anywhere in the body, with pinpoint accuracy.

"Light is an enabling tool for many medical applications. For example, our citrate-based biodegradable fibers can potentially be used to deliver laser light into the body to remove tumors. The fiber can enable deep-tissue imaging for disease diagnosis and for monitoring clinical treatment outcomes. Another example is that light can be delivered into the body to activate drugs for cancer treatment — photodynamic [light-activated] therapy," Jian Yang, a professor of biomedical engineering at Penn State, told EE Times.

The citrate fiber can be used to perform the required therapeutic function repeatedly and then left in place to biodegrade safely at the end of its service lifetime. It can be engineered to be low-loss and flexible, with a variable index of refraction, to confine the light so that it shines only on a specific area for destroying tumors or stimulating neurons. It can be co-engineered with multiple cores for sensing fluorescence, imaging tissues, delivering liquid drugs, or precisely placing nanoparticles containing time-released medications.

"Our fibers are made of citrate-based polymers, of which there are many functional groups that can be used for drug conjugation [bonding a molecule to a toxin to render it harmless]. Drugs can be encapsulated in the cladding layer of the step-index fibers. We can also fabricate hollow channels that are juxtaposed with the solid fiber [so that] drug solutions can be delivered through the channels from outside the body to the implantation sites," Yang said.

It's still early days for the technology. Having proved the fibers in the lab, the researchers will work on optimizing the materials and improving the fabrication procedure to yield longer and lower-loss fibers as well as fibers with special functionalities. "We will also start to investigate biological and biomedical applications," said Zhiwen Liu, a professor of electrical engineering at Penn State.

Yang and Liu co-authored a paper on the work along with former graduate student Surge Kalaba and current doctoral candidate Gloria Kim, both from Yang's group, and postdoctoral researcher Nikhil Mehta from Liu's group. The National Institutes of Health funded the project.
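As background on the light-confinement claim: for any step-index fiber, confinement is summarized by the numerical aperture, which depends only on the core and cladding refractive indices. The sketch below uses generic placeholder indices, not measured values for the Penn State citrate fibers.

```python
import math

# Placeholder refractive indices for a generic step-index fiber; the actual
# core/cladding values of the citrate fibers are not given in this article.
n_core, n_clad = 1.51, 1.49

# Numerical aperture: NA = sqrt(n_core^2 - n_clad^2)
na = math.sqrt(n_core**2 - n_clad**2)

# Maximum launch half-angle (in air) for light the core will guide
theta_max = math.degrees(math.asin(na))

print(f"NA = {na:.3f}, acceptance half-angle = {theta_max:.1f} degrees")
# A higher core/cladding index contrast widens the cone of guided light;
# tuning that contrast is one way to confine light to a specific target area.
```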
