GRENOBLE — In the world of AI research, Europe has drawn a line in the sand, declaring that R&D must focus squarely on “Edge AI.”
This proclamation draws a stark contrast to “Cloud-based AI,” the model aggressively pursued by China and the United States. During “Innovation Days” hosted here by French research institute CEA-Leti this past week, Emmanuel Sabonnadiere, CEA-Leti’s CEO, discussed the “two schools of AI research” that have split the world in two.
Both the U.S. and China have been collecting massive amounts of data, which they use to train AIs and which underpins their claims to lead the world AI race. Strict data privacy regulations in Europe might be seen as impeding European companies’ progress in AI, but that’s not necessarily the case. Conforming to those regulations is instead “shaping Europe to manage its AI research strategy very differently,” said Sabonnadiere. Once AI is trained in the cloud, Europe sees its role as applying further learning and personalization at the edge, in “Edge AI.”
Is Europe behind in AI?
This prompts the question of whether Europe is really lagging in the race for AI. The answer: not necessarily. To begin with, from a worldwide perspective, AI research is barely out of the starting gate.
It’s easy to dismiss the notion of “edge AI,” given that it’s something plenty of companies say they’re already doing. That’s true to the extent that everybody seems to be slapping AI accelerators into smartphones and calling it “AI at the edge.” To be clear, when CEA-Leti talks about Edge AI, its researchers have in mind inference technology that goes well beyond current edge AI practices.
Sabonnadiere explains that Edge AI is “a huge challenge,” one that Europe “absolutely must solve,” given that data privacy rules aren’t going away.
By definition, solving big challenges requires innovation. CEA-Leti and its research partners have laid out a 10-year roadmap for Edge AI. The technologies range from 3D stacking to in-memory computing and on-die integration of resistive non-volatile memories. Advancements in such areas will help reduce energy per operation.
The key to frugal power consumption at the edge, said Leti’s Francois Perruchot, is simple: do not move data from an external memory block to an AI processor. “Every time data moves, AI power consumption at the edge spikes by 100 to 1,000 times,” he noted. Perruchot handles strategic marketing for the organization’s sensor operations.
In parallel, reducing the number of AI operations at the edge is crucial. CEA-Leti is exploring a neuromorphic approach to data processing and spike-coding for deep neural network processing of sensor inputs.
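CEA-Leti has not published the details of its spike-coding work, but a minimal rate-coding sketch illustrates the general idea behind reducing operations at the edge: sensor readings are converted into sparse spike trains, so downstream circuitry does work only on the time steps where a spike occurs. The function names and parameters below are illustrative assumptions, not CEA-Leti’s design.

```python
# Illustrative sketch only: simple rate coding of a sensor value into a
# sparse spike train (NOT CEA-Leti's actual spike-coding scheme).
import numpy as np

def rate_code(sensor_value, v_min, v_max, n_steps=100, rng=None):
    """Map a sensor reading to a Bernoulli spike train.

    Higher readings produce more spikes; downstream neuromorphic logic
    only performs work on time steps where a spike is present.
    """
    rng = rng or np.random.default_rng(0)
    p = np.clip((sensor_value - v_min) / (v_max - v_min), 0.0, 1.0)
    return rng.random(n_steps) < p  # boolean spike train

spikes = rate_code(sensor_value=0.7, v_min=0.0, v_max=1.0)
print(f"{spikes.sum()} spikes in {spikes.size} steps "
      f"-> compute on only ~{spikes.mean():.0%} of steps")
```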
10 TOPS per watt
The goal of CEA-Leti’s Edge AI research is to develop — over five years — “an Edge AI processor running at 10 tera-operations per second (TOPS) per watt,” said Sabonnadiere. This requires a combination of new memory architectures, spiking algorithms and sensor arrays. Once achieved, he said, “this will be a game changer.” It would be a sharp contrast to a typical current GPU running at 1 TOPS per 200 watts, according to CEA-Leti.
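Taking those figures at face value, a quick back-of-the-envelope comparison shows the size of the gap the roadmap targets. The sketch below simply restates the quoted numbers as energy per operation; it is not a benchmark.

```python
# Back-of-the-envelope comparison using only the figures quoted above.
TERA = 1e12

edge_goal_tops_per_watt = 10           # CEA-Leti's five-year target
gpu_tops, gpu_watts = 1, 200           # "1 TOPS per 200 watts" reference
gpu_tops_per_watt = gpu_tops / gpu_watts

# Energy per operation (joules) is the inverse of operations per joule.
edge_joules_per_op = 1 / (edge_goal_tops_per_watt * TERA)
gpu_joules_per_op = 1 / (gpu_tops_per_watt * TERA)

print(f"Edge target : {edge_joules_per_op:.1e} J/op")               # 1.0e-13 J/op
print(f"GPU baseline: {gpu_joules_per_op:.1e} J/op")                # 2.0e-10 J/op
print(f"Ratio       : {gpu_joules_per_op / edge_joules_per_op:.0f}x")  # 2000x
```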
In Sabonnadiere’s view, data privacy has created an R&D opportunity unique to Europe. This poses a constraint for European researchers, but it also forces them to tackle the issue head on, an arduous process the rest of the world has barely pondered. It remains to be seen if edge AI will indeed enable Europe to win the global AI race, but at least it is a goal Europe can use to differentiate its AI research from the rest of the world.
AI without hardware?
CEA-Leti’s CEO regards the institute’s 50-year history of R&D, deeply involved in the testing and manufacturing of microelectronics, as a huge asset.
AI research in Silicon Valley has profited hugely from algorithms advanced by technology platform companies such as Facebook, Google, Amazon and Microsoft. Further, Sabonnadiere acknowledged that AI research success by those Internet giants — “without spending huge capital expenditures” — has spun the narrative that AI is ruled by software algorithms. However, he stressed that “AI without hardware” will eventually hobble AI’s potential.
AI researchers in Europe are also cognizant that the overwhelming hype around AI among investors, media and the public could kill AI prematurely. Noting that the long history of AI has gone through cycles of “AI winter,” Sabonnadiere said, “We could face yet another ‘deep freeze.’”
AI, he said, must stand on two pillars of discipline: Edge AI and Trusted AI.
AI that tells us ‘I don’t know’
By trusted AI, he means AI that respects privacy, can explain itself, and is responsible and reliable.
Patrick Gros, CEO of Inria Grenoble, the French Institute for Research in Computer Science and Automation, put AI bluntly: “AI isn’t intelligent. If there is any intelligence, it is in AI developers.” AI, in the simplest terms, can be explained as “brute force applied to data,” he noted. The probabilistic nature of AI can also make it problematic: AI does not make the right decision every time. When the output of an AI decision misses predictions by a mile, “We need AI to tell us, ‘I don’t know,’” said Gros.
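Gros does not describe a specific mechanism, but one simple way for an AI to say “I don’t know” is to abstain whenever its confidence falls below a threshold. The sketch below assumes a softmax classifier and an arbitrary threshold purely for illustration; it is not Inria’s method.

```python
# Illustrative sketch (assumed, not Inria's method): a classifier that
# answers "I don't know" when its softmax confidence is too low.
import numpy as np

def predict_or_abstain(logits, labels, min_confidence=0.9):
    """Return a label only when the softmax confidence is high enough."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    best = int(np.argmax(probs))
    if probs[best] < min_confidence:
        return "I don't know"
    return labels[best]

labels = ["pedestrian", "cyclist", "vehicle"]
print(predict_or_abstain(np.array([4.0, 0.5, 0.2]), labels))  # confident -> "pedestrian"
print(predict_or_abstain(np.array([1.1, 1.0, 0.9]), labels))  # ambiguous -> "I don't know"
```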
AI’s confidence in its own decision-making greatly matters when AI is used in life-critical systems, be it an autonomous car, an airplane or a medical device.
One good example is Diabeloop, a startup working in partnership with CEA-Leti. The company has developed a type-1 diabetes management system, already approved by regulators in Germany and France. It monitors a patient’s blood sugar and reproduces the pancreatic function. The startup’s CEO, Erik Huneker, told us that because diabetics have “immensely varied” lifestyles and insulin injection needs, it’s important that the AI system “locally learn” the patient’s needs and “personalize” the system.
Describing the system as “the first autonomous medical device that makes a decision,” Huneker said, “When AI’s decision deviates from predictions by 40 percent, the system shuts itself down. It would not send insulin automatically.” In other words, the device runs two systems: an AI-driven autonomous system that executes local learning, and a deterministic system that prevents it from injecting the wrong insulin dose.
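Diabeloop has not published the controller’s internals, but the division of labor Huneker describes can be sketched as a deterministic guard wrapped around an AI suggestion. The 40-percent figure comes from his quote; every name, unit and function below is an assumption for illustration only.

```python
# Illustrative sketch of the two-layer design Huneker describes:
# an AI-driven suggestion checked by a deterministic safety guard.
# Function and variable names are assumptions, not Diabeloop's API.

MAX_DEVIATION = 0.40  # "deviates from predictions by 40 percent" -> shut down

def guarded_dose(ai_suggested_dose, predicted_dose):
    """Return a dose only if the AI stays within the allowed deviation.

    Otherwise the autonomous loop stops and hands control back to the
    patient or clinician instead of injecting automatically.
    """
    if predicted_dose <= 0:
        return None  # no trusted prediction -> do not dose automatically
    deviation = abs(ai_suggested_dose - predicted_dose) / predicted_dose
    if deviation > MAX_DEVIATION:
        return None  # deterministic layer vetoes the AI decision
    return ai_suggested_dose

print(guarded_dose(ai_suggested_dose=1.2, predicted_dose=1.0))  # 1.2 (within 40%)
print(guarded_dose(ai_suggested_dose=1.6, predicted_dose=1.0))  # None (shut down)
```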
AI needs formal certification
The factors that make AI systems “less robust” and “unfair” are the incomplete data sets often used to train AI, Inria’s Gros added.
It’s not enough for companies to say, “We did our best” to make AI reliable, stressed Gros. “We need to formally certify that [AI-driven] systems are fair.” Unlike smartphones, which could miss an “event” or two in receiving sensory data, life-critical AI systems must be designed based on “ethical and legal AI frameworks,” said Gros, “and they must be formally certified.”