********** WORLD AFFAIRS **********
Xi’s cocoon in Moscow: A Chinese-owned hotel over Red Square opulence
Tue, 21 Mar 2023 14:15:52 EDT
Sino-Russian relations have grown substantially closer since Russia’s invasion of Ukraine and the West’s imposition of punishing sanctions.
Russia-Ukraine war at a glance: what we know on day 391 of the invasion
Tue, 21 Mar 2023 15:00:04 GMT
Russian cruise missiles destroyed in Crimea blast; Xi Jinping meets Vladimir Putin in the Kremlin on day two of state visit
Russian cruise missiles intended for use by Moscow’s Black Sea fleet have been destroyed in an explosion in the city of Dzhankoi in the north of the Crimean peninsula, Ukraine’s defence ministry has said. Ihor Ivin, the Russian-installed head of the Dzhankoi administration, was quoted as saying the city had come under attack from drones, and that a 33-year-old man had suffered a shrapnel injury from a downed drone.
Vladimir Putin and Xi Jinping are holding a second set of talks on Tuesday. The Chinese president’s trip to Moscow has been viewed as a major boost for his strategic partner Putin. The Chinese leader is expected to continue to position himself as a potential peacemaker in the Ukraine war during his two-day visit to Russia – his first state visit since Putin’s invasion.
Xi met Russian prime minister Mikhail Mishustin on Tuesday morning, and invited both Mishustin and Putin to visit China later this year.
China’s state media has covered the visit extensively and positively. Most coverage has focused on Xi’s comments and on the strength of the relationship. None of the pieces mentioned the recent ICC arrest warrant for the Russian president over alleged war crimes committed in Ukraine.
US secretary of state Antony Blinken criticised Xi on Monday for providing “diplomatic cover” for Russia to continue committing war crimes. Blinken said the visit “suggests that China feels no responsibility to hold the Kremlin accountable for the atrocities committed in Ukraine.” Blinken voiced scepticism over China’s “peace” proposals to end the Ukraine conflict, warning they could be a “stalling tactic” to help Russian troops on the ground in Ukraine.
Japan’s prime minister Fumio Kishida has arrived for talks with Zelenskiy. Kishida and Zelenskiy are expected to discuss Japanese support for the reconstruction of Ukraine. Kishida, who will chair the Group of Seven summit in May, was the only G7 leader who had not visited Ukraine and had faced pressure at home to do so.
Japan has continued to import Russian timber worth US$414m since the start of the war in Ukraine, according to a London-based NGO, which accused Tokyo of “helping to bankroll Putin’s bloody war” despite joining other countries in imposing sanctions on the Kremlin.
Iryna Vereshchuk, who is one of Ukraine’s deputy prime ministers and is the minister of reintegration of temporarily occupied territories, has said there is no confirmed time agreed for a mooted call between Ukraine’s President Volodymyr Zelenskiy and Xi.
Ukraine is holding its defence of the besieged eastern city of Bakhmut as Russian forces attempt to advance to the city centre, a Ukrainian general has said. There was intense fighting along the eastern frontline, he said on Tuesday.
The Russian and Chinese leaders have met more than 40 times in recent years, fortifying personal and formal relations
A visit by China’s president, Xi Jinping, to Moscow this week has shone a light on his strengthening relationship with his Russian counterpart, Vladimir Putin.
The two leaders have met more than 40 times in recent years, growing increasingly familiar, and now refer to each other as “dear friends”. Xi has been described as a great friend by other world leaders before, including Robert Mugabe and Donald Trump, but he has not always reciprocated the compliment as he has with Putin.
Damon Silvers, deputy chair of the Congressional Oversight Panel for the 2008 bank bailout, explains how deregulation paved the way for SVB’s collapse.
Understanding the Silicon Valley Bank Run (The Intercept)
“Miraculously, they still believe in the U.S. justice system and still want to tell their story to a U.S. jury.”
Iraqis Tortured by the U.S. in Abu Ghraib Never Got Justice (The Intercept)
Kyiv claims cruise missiles hit in city of Dzhankoi, in attack Russian-installed official blames on drone
Russian cruise missiles of the type used by Russia’s Black Sea fleet to target Ukraine have been destroyed in an explosion in the Crimean logistics hub of Dzhankoi, according to Ukrainian military intelligence.
The strike appeared to have come from a drone, with a video of the explosion showing it was immediately preceded by a loudly buzzing engine, similar in sound to Iranian-made kamikaze drones used previously to attack Ukraine.
The officially sanctioned conspiracy theory that Saddam Hussein was behind 9/11 set a dangerous precedent.
Bush’s Iraq War Lies Created a Blueprint for Donald Trump (The Intercept)
The shadow of U.S. war crimes in Iraq hangs over the Pentagon's refusal to support probes into Russian atrocities in Ukraine.
Biden Administration Splits on Prosecuting Russia for War Crimes in Ukraine (The Intercept)
D.C. hawks say American military might brought order to the Middle East, but without U.S. meddling, regional rivals finally made a deal.
The Key Factor in the Saudi-Iran Deal: Absolutely No U.S. Involvement (The Intercept)
They’re all doing great, thanks for asking.
The Architects of the Iraq War: Where Are They Now? (The Intercept)
Russian leader reacts to comments by UK defence minister that Britain will supply armour-piercing rounds to Kyiv
Vladimir Putin has sought to exploit a British statement that it would supply Ukraine with tank shells made with depleted uranium, arguing that the delivery of the armour-piercing weapons would prompt a Russian response.
The Russian leader’s comments, made during the visit to Moscow by his Chinese counterpart Xi Jinping, came in response to a parliamentary answer given by a junior British defence minister in the House of Lords on Monday.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
Inspired by the hardiness of bumblebees, MIT researchers have developed repair techniques that enable a bug-sized aerial robot to sustain severe damage to the actuators, or artificial muscles, that power its wings—but to still fly effectively.
[ MIT ]
This robot gripper is called DragonClaw, and do you really need to know anything else?
“Alas, DragonClaw wins again!”
[ AMTL ]
Here’s a good argument for having legs on a robot:
And here’s a less-good argument for having legs on a robot. But it’s still impressive!
[ ANYbotics ]
Always nice to see drones getting real work done! Also, when you offer your drone up for power-line inspections and promise that it won’t crash into anything, that’s confidence.
[ Skydio ]
Voxel robots have been extensively simulated because they’re easy to simulate, but not extensively built because they’re hard to build. But here are some that actually work.
[ Paper ]
Thanks, Bram!
Reinforcement learning (RL) has become a promising approach to developing controllers for quadrupedal robots. We explore an alternative to the position-based RL paradigm, by introducing a torque-based RL framework, where an RL policy directly predicts joint torques at a high frequency, thus circumventing the use of a PD controller. The proposed learning torque control framework is validated with extensive experiments, in which a quadruped is capable of traversing various terrain and resisting external disturbances while following user-specified commands.
[ Berkeley ]
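To make the contrast concrete, here is a minimal sketch, in Python with PyTorch, of the two control paradigms the abstract describes. The network shape, observation size, torque limit, and PD gains are illustrative assumptions, not values from the paper.

```python
# Sketch of position-based vs. torque-based RL control for a quadruped.
# Network sizes, torque limits, and PD gains are illustrative assumptions.
import torch
import torch.nn as nn

N_JOINTS = 12        # typical quadruped: three actuated joints per leg
OBS_DIM = 48         # assumed observation size (joint states, base state, command)
TORQUE_LIMIT = 33.5  # N*m, assumed actuator limit

policy = nn.Sequential(  # one MLP serves either paradigm
    nn.Linear(OBS_DIM, 256), nn.ELU(),
    nn.Linear(256, 128), nn.ELU(),
    nn.Linear(128, N_JOINTS),
)

def position_based_step(obs, q, qd, kp=40.0, kd=1.0):
    """Conventional paradigm: the policy outputs joint-angle targets,
    and a hand-tuned PD controller converts them to torques."""
    q_target = policy(obs)
    tau = kp * (q_target - q) - kd * qd
    return torch.clamp(tau, -TORQUE_LIMIT, TORQUE_LIMIT)

def torque_based_step(obs):
    """Torque paradigm: the policy predicts joint torques directly at
    high frequency, so no PD controller or gain tuning is needed."""
    return TORQUE_LIMIT * torch.tanh(policy(obs))  # smooth bounding
```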
In this work we show how bioinspired, 3D-printed snakeskins enhance the friction anisotropy and thus the slithering locomotion of a snake robot. Experiments have been conducted with a soft pneumatic snake robot in various indoor and outdoor settings.
[ Paper ]
For bipedal humanoid robots to successfully operate in the real world, they must be competent at simultaneously executing multiple motion tasks while reacting to unforeseen external disturbances in real time. We propose Kinodynamic Fabrics as an approach for the specification, solution, and simultaneous execution of multiple motion tasks in real time while being reactive to dynamism in the environment.
The RPD 35 from Built Robotics is the world’s first autonomous piling system. It combines four steps—layout, pile distribution, pile driving, and as-builts—into one package. With the RPD 35, a two-person crew can install piles more productively than with traditional methods.
[ Built Robotics ]
This work contributes a novel and modularized learning-based method for aerial robots navigating cluttered environments containing thin, hard-to-perceive obstacles without assuming access to a map or the full pose estimation of the robot.
[ ARL ]
Thanks, Kostas!
The video shows a use case developed by the FZI with the assistance of the KIT: the multirobot retrieval of hazardous materials using two FZI robots as well as a KIT virtual-reality environment.
[ FZI ]
Satisfying.
[ Soft Robotics ]
A year has passed since the launch of the ESA’s Rosalind Franklin rover mission was put on hold, but the work has not stopped for the ExoMars teams in Europe. In this program, the ESA Web TV crew travel back to Turin, Italy, to talk to the teams and watch as new tests are being conducted with the rover’s Earth twin, Amalia, while the real rover remains carefully stored in an ultraclean room.
[ ESA ]
Camilo Buscaron, chief technologist at AWS Robotics, sits down with Ramon Roche in this Behind the Tech episode to share his storied career in the robotics industry. Camilo explains how AWS provides a host of services for robotics developers from simulation and streaming to basic real-time cloud storage.
[ Behind the Tech ]
Most threats are directed at law enforcement and government officials, report says, after ex-president urged supporters to protest
Lindsey Graham is one of Donald Trump’s allies in the Senate, so it was little surprise that he predicted dire consequences if the former president is indicted, CNN reports:
He also criticized Ron DeSantis, the Florida governor and Trump’s chief rival for next year’s Republican presidential nomination, for his comments yesterday about the potential charges. “I don’t know what goes into paying hush money to a porn star to secure silence over some type of alleged affair. I just, I can’t speak to that,” DeSantis said.
Grand jury investigating ex-president over hush money payment to adult film star appears poised to complete its work soon
Law enforcement officials in New York on Tuesday continued preparing for possible unrest on the streets of Manhattan as a grand jury investigating Donald Trump over a hush money payment to the adult film-maker and star Stormy Daniels appeared poised to complete its work by criminally indicting the former president.
Barriers were brought to the area around the Manhattan criminal courthouse in the lower part of the island. Uniformed police were out in force. So were reporters and protesters.
A trove of secret intelligence cables obtained by The Intercept reveals Tehran’s political gains in Iraq since the 2003 invasion.
How Iran Won the U.S. War in Iraq (The Intercept)
Kremlin toasts the Russian-Chinese partnership, which appears warm but far from equal
Standing alongside Xi Jinping in the 15th-century Chamber of Facets, one of the most historic ceremonial venues at the Kremlin, Vladimir Putin raised his glass of white wine.
He proceeded to make a toast to the “health of our great friend Xi and the deepening of the Russian-Chinese partnership”, before ending his speech with the phrase ganbei, Chinese for “empty glass”.
But US warns against ‘any tactical move by Russia to freeze the war on its own terms’
Vladimir Putin has welcomed China’s proposals for peace in Ukraine at a joint press conference with Xi Jinping in Moscow – a plan the west has warned would allow the Kremlin to “freeze” its territorial gains in the country.
Speaking at the Kremlin after the second day of talks, Putin said Beijing’s peace plan “correlates to the point of view of the Russian Federation” and that Ukraine’s western allies have so far shown no interest in it.
Oil prices climbed on Tuesday to post a second straight session gain. Prices have bounced back sharply off their recent lows as “risk sentiment improved following the coordinated actions of major central banks at the weekend and UBS’s takeover of Credit Suisse,” said Fawad Razaqzada, market analyst at City Index and FOREX.com. Still, oil prices had been trending lower for months, and their “breakdown last week from a multi-week consolidation pattern suggests there may be more downside potential in oil prices,” he said. For now, “news that Russia has decided to keep its oil production at a reduced level through June and calmer market conditions has helped to keep prices in positive territory.” On the contract’s expiration day, April West Texas Intermediate crude CLJ23 rose $1.69, or 2.5%, to settle at $69.33 a barrel on the New York Mercantile Exchange. May WTI CLK23 settled at $69.67, up $1.85, or 2.7%.
The Observer’s wine critic chooses drinks to suit Good Friday fish, classic roast lamb – or just to sip with friends on a bank holiday
Specially Selected Greco di Tufo
Italy 2020 (£5.99, Aldi)
A southern Italian with gentle blossomy nose and pear, peach and lemon flavours. It’s the kind of softly refreshing dry white that performs a triple function: aperitif, unobtrusive and inexpensive party white, and partner for light fish, seafood or green leafy salad starters.
Chinese and Russian leaders to discuss Ukraine in formal talks after friendly dinner, while Fumio Kishida meets Volodymyr Zelenskiy in Kyiv
Xi Jinping has invited Vladimir Putin to visit China this year in a symbolic show of support after the international criminal court issued an arrest warrant for Russia’s president over accusations of unlawfully deporting Ukrainian children.
The Chinese leader extended the invitation during a meeting on Tuesday morning with the Russian prime minister, Mikhail Mishustin, as part of his state visit to Moscow.
A new series of long-distance hikes in Calabria takes in captivating scenery and the legacy and legends of eco-conscious Francis of Paola, the first recorded vegan
A new sculpture appeared in a square in the town of Paola, Calabria, in April last year. It’s not the usual man on a horse, or even anything with legs; it’s a many-times-lifesize trout in grey stone – all fins, bulgy eyes and pout.
The unusual monument was erected to commemorate a miracle performed by the town’s patron saint, Francis. (He was actually made a saint in 1519, but Covid delayed the celebrations for the 500th anniversary of his canonisation.) This Saint Francis followed in the spiritual – and, for a time, literal – footsteps of his better-known predecessor, Francis of Assisi, but dialled up his reverence for the natural world and became one of the first recorded vegans.
Chinese leader expected to position himself as peacemaker but US condemns Xi for providing ‘diplomatic cover’ for atrocities in Ukraine
Xi Jinping said China was ready with Russia “to stand guard over the world order based on international law” as he arrived for a state visit to Moscow that comes days after Vladimir Putin was made the subject of an arrest warrant by the international criminal court.
The Chinese leader is expected to position himself as a potential peacemaker in the Ukraine war during his two-day visit to Russia – his first state visit since Putin’s invasion. For his part, the Russian president will be hoping to project unity in the face of western isolation, as the US condemned Xi for providing “diplomatic cover” for Moscow to continue to commit further crimes in Ukraine.
The U.S. has a long and disturbing habit of ignoring the violence it commits overseas as well as at home.
Americans Don’t Care About the Iraqi Dead. They Don’t Even Care About Their Own. (The Intercept)
“I think that we need to see what has actually transpired.”
Senators Aren’t Ready to Blame Themselves for Silicon Valley Bank Implosion (The Intercept)
Analysis: Tragic story struck a chord with other headteachers increasingly disillusioned with grading regime
Last week, the head of Ofsted and chief inspector of schools in England took to Twitter. “The ad for my successor is out,” Amanda Spielman wrote. “I’ll be leaving this amazing job at the end of 2023, and if you think you have the experience, the energy and the commitment it needs, and want to work with our fantastic staff, apply here.”
Seven days later, the organisation she has led for six years has found itself at the centre of a public outcry. A headteacher has killed herself, according to her family, after an Ofsted inspection downgraded her school from “outstanding” to “inadequate”.
The Ericsson Technology Review is now available in the IEEE Xplore Digital Library. The monthly magazine provides insights on emerging innovations that are shaping the future of information and communication technology.
The publication, which dates back to 1924, is published by Ericsson, a multinational networking and telecommunications company based in Stockholm.
An IEEE Xplore subscription is not required to access the freely available research papers.
“IEEE is a respected organization, and Ericsson has the ambition to reach even further into the academic community and research institutes with our cutting-edge research and development,” says Erik Ekudden, the company’s chief technology officer. “We believe that IEEE Xplore is a good channel for this target group.”
The Review in IEEE Xplore includes newly published articles plus those from the magazine’s archives going back to 2020, according to Naveen Maddali, senior product manager of content partnerships for IEEE Global Products and Marketing. There are now more than 80 articles in the digital library. Topics include computing, robotics, and signal processing.
“The Ericsson Technology Review is a valuable publication for anyone using IEEE Xplore,” Maddali says. “There’s a lot of useful content on telecommunications and communications for all types of the digital library’s users.”
Maddali says the project was volunteer-driven. The effort was supported by Ericsson’s CTO office following an initiative by IEEE Senior Member Glenn Parsons, principal standards advisor with Ericsson Canada. He was a member of the IEEE Publication Services and Products Board and the IEEE Technical Activities Board/PSPB Products and Services Committee that developed the third-party content hosting process. Parsons suggested that Ericsson Technology Review be used to do a trial run of the new hosting process.
The journal’s articles, written by Ericsson’s researchers, cover topics including communication, networking, and broadcast technologies; computing and processing; power and energy applications; robotics and control systems; and signal processing and analysis.
Ekudden adds that with the new partnership, “Ericsson hopes to increase the understanding of important technology trends. Mobile technology, 5G, and its included technology capabilities are a vital base for the ongoing digital transformation of enterprises and society.”
IEEE Xplore contains publications from other large companies in addition to Ericsson, including the IBM Journal of Research and Development and the Bell Labs Technical Journal. Hosting the publications in IEEE Xplore aligns with IEEE’s goal of providing practical content from leading organizations to those in industry, Maddali says.
Regulation may allow ‘hydrogen-ready’ boilers that can run on fossil fuel gas, and are unlikely ever to use hydrogen
Ministers are preparing to allow new houses to continue to be fitted with gas boilers, long after they were supposed to be phased out, campaigners fear.
A loophole being considered for the forthcoming future homes standard, a housing regulation in England intended to reduce greenhouse gas emissions from newly built homes in line with the net zero target, would allow new homes to be fitted with “hydrogen-ready” boilers.
With migrant deaths at record highs, researchers say intensified border militarization is making a deadly problem much worse.
Mapping Project Reveals Locations of U.S. Border Surveillance Towers (The Intercept)
New nuclear looks different, which requires new types of financing. New investment and partnerships are seemingly occurring every day across the industry, including SK Group’s $250 million investment into TerraPower, and X-energy’s partnership with Dow Chemical.
What can be done to encourage financial investment and improve the economic viability and the ROI of SMRs? How does new nuclear differ, and how do we finance that?
Reuters Events’ latest report – Capital Funding, Financing & Economic Viability of SMRs – dives into the vehicles that will assist with advancing financing to support SMR and advanced-reactor deployment and commercialization. What to expect from the report:
Tesla’s investor day on 1 March began with a rambling, detailed discourse on energy and the environment before transitioning into a series of mostly predictable announcements and boasts. And then, out of nowhere, came an absolute bombshell: “We have designed our next drive unit, which uses a permanent-magnet motor, to not use any rare-earth elements at all,” declared Colin Campbell, Tesla’s director of power-train engineering.
It was a stunning disclosure that left most experts in permanent magnetism wary and perplexed. Alexander Gabay, a researcher at the University of Delaware, states flatly: “I am skeptical that any non-rare-earth permanent magnet could be used in a synchronous traction motor in the near future.” And at Uppsala University, in Sweden, Alena Vishina, a physicist, elaborates, “I’m not sure it’s possible to use only rare-earth-free materials to make a powerful and efficient motor.”
And at a recent magnetics conference, Ping Liu, a professor at the University of Texas at Arlington, asked other researchers what they thought of Tesla’s announcement. “No one fully understands this,” he reports. (Tesla did not respond to an e-mail asking for elaboration of Campbell’s comment.)
Tesla’s technical prowess should never be underestimated. But on the other hand, the company—and in particular, its CEO—has a history of making sporadic sensational claims that don’t pan out (we’re still waiting for that US $35,000 Model 3, for example).
The problem here is physics, which not even Tesla can alter. Permanent magnetism occurs in certain crystalline materials when the spins of electrons of some of the atoms in the crystal are forced to point in the same direction. The more of these aligned spins, the stronger the magnetism. For this, the ideal atoms are ones that have unpaired electrons swarming around the nucleus in what are known as 3d orbitals. Tops are iron, with four unpaired 3d electrons, and cobalt, with three.
But 3d electrons alone are not enough to make superstrong magnets. As researchers discovered decades ago, magnetic strength can be greatly improved by adding to the crystalline lattice atoms with unpaired electrons in the 4f orbital—notably the rare-earth elements neodymium, samarium, and dysprosium. These 4f electrons enhance a characteristic of the crystalline lattice called magnetic anisotropy—in effect, they promote adherence of the magnetic moments of the atoms to the specific directions in the crystal lattice. That, in turn, can be exploited to achieve high coercivity, the essential property that lets a permanent magnet stay magnetized. Also, through several complex physical mechanisms, the unpaired 4f electrons can amplify the magnetism of the crystal by coordinating and stabilizing the spin alignment of the 3d electrons in the lattice.
Since the 1980s, a permanent magnet based on a compound of neodymium, iron, and boron (NdFeB) has dominated high-performance applications, including motors, smartphones, loudspeakers, and wind-turbine generators. A 2019 study by Roskill Information Services, in London, found that more than 90 percent of the permanent magnets used in automotive traction motors were NdFeB.
So if not rare-earth permanent magnets for Tesla’s next motor, then what kind? Among experts willing to speculate, the choice was unanimous: ferrite magnets. Among the non-rare-earth permanent magnets invented so far, only two are in large-scale production: ferrites and another type called Alnico (aluminum nickel cobalt). Tesla isn’t going to use Alnico, a half-dozen experts contacted by IEEE Spectrum insisted. These magnets are weak and, more important, the world supply of cobalt is so fraught that they make up less than 2 percent of the permanent-magnet market.
Ferrite magnets, based on a form of iron oxide, are cheap and account for nearly 30 percent of the permanent-magnet market by sales. But they, too, are weak (one major use is holding refrigerator doors shut). A key performance indicator of a permanent magnet is its maximum energy product, measured in megagauss-oersteds (MGOe). It reflects both the strength of a magnet as well as its coercivity. For the type of NdFeB commonly used in automotive traction motors, this value is generally around 35 MGOe. For the best ferrite magnets, it is around 4.
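In symbols, the energy product is the peak of the product of flux density B and field strength H along the magnet’s demagnetization curve. A back-of-envelope comparison using the two figures just quoted:

```latex
% (BH)_max: the peak of B·H on the demagnetization curve
(BH)_{\max} = \max_{H}\, B(H)\,H
% Ratio of the two magnet types quoted above:
\frac{(BH)_{\max}^{\mathrm{NdFeB}}}{(BH)_{\max}^{\mathrm{ferrite}}}
  \approx \frac{35\ \mathrm{MGOe}}{4\ \mathrm{MGOe}} \approx 9
```

That factor of roughly nine is what underlies the “five to 10 times” gap described below.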
“Even if you get the best-performance ferrite magnet, you will have performance about five to 10 times below neodymium-iron-boron,” says Daniel Salazar Jaramillo, a magnetics researcher at the Basque Center for Materials, Applications, and Nanostructures, in Spain. So compared to a synchronous motor built with NdFeB magnets, one based on ferrite magnets will be much larger and heavier, much weaker, or some combination of the two.
To be sure, there are more than a score of other permanent magnets that use no rare-earth elements or don’t use much of them. But none of these have made an impact outside the laboratory. The list of attributes needed for a commercially successful permanent magnet includes high field strength, high coercivity, tolerance of high temperatures, good mechanical strength, ease of manufacturing, and lack of reliance on elements that are scarce, toxic, or problematic for some other reason. All of the candidates today fail to tick one or more of these boxes.
Iron-nitride magnets, such as this one from startup Niron Magnetics, are among the most promising of an emerging crop of permanent magnets that do not use rare-earth elements. [Image: Niron Magnetics]
But give it a few more years, say some researchers, and one or two of these could very well break through. Among the most promising: iron nitride, Fe16N2. A Minneapolis startup, Niron Magnetics, is now commercializing technology that was pioneered with funding from ARPA-E by Jian Ping Wang at the University of Minnesota in the early 2000s, after earlier work at Hitachi. Niron’s executive vice president, Andy Blackburn, told Spectrum that the company intends to introduce its first product late in 2024. Blackburn says it will be a permanent magnet with an energy product above 10 MGOe, for which he anticipates applications in loudspeakers and sensors, among others. If it succeeds, it will be the first new commercial permanent magnet since NdFeB, 40 years ago, and the first commercial non-rare-earth permanent magnet since strontium ferrite, the best ferrite type, 60 years ago.
Niron’s first offering will be followed in 2025 by a magnet with an energy product above 30 MGOe, according to Blackburn. For this he makes a rather bold prediction: “It’ll have as good or better flux than neodymium. It’ll have the coercivity of a ferrite, and it’ll have the temperature coefficients of samarium cobalt”—better than NdFeB. If the magnet really manages to combine all those attributes (a big if), it would be very well suited for use in the traction motors of electric vehicles.
There will be more to come, Blackburn declares. “All these new nanoscale-engineering capabilities have allowed us to create materials that would have been impossible to make 20 years ago,” he says.
Dead, and in a jacket and tie. That’s how he was on 1 December 1948, when two men found him slumped against a retaining wall on the beach at Somerton, a suburb of Adelaide, Australia.
The Somerton Man’s body was found on a beach in 1948. Nobody came forward to identify him. [Photo: James Durham]
Police distributed a photograph, but no one came forward to claim the body. Eyewitnesses reported having seen the man, whom the newspapers dubbed the Somerton Man and who appeared to be in his early 40s, lying on the beach earlier, perhaps at one point moving his arm, and they had concluded that he was drunk. The place of death led the police to treat the case as a suicide, despite the apparent lack of a suicide note. The presence of blood in the stomach, a common consequence of poisoning, was noted at the autopsy. Several chemical assays failed to identify any poison; granted, the methods of the day were not up to the task.
There was speculation of foul play. Perhaps the man was a spy who had come in from the cold; 1948 was the year after the Cold War got its name. This line of thought was strengthened, a few months later, by codelike writings in a book that came to be associated with the case.
These speculations aside, the idea that a person could simply die in plain view and without friends or family was shocking. This was a man with an athletic build, wearing a nice suit, and showing no signs of having suffered violence. The problem nagged many people over the years, and eventually it took hold of me. In the late 2000s, I began working on the Somerton Man mystery, devoting perhaps 10 hours a week to the research over the course of about 15 years.
Throughout my career, I have always been interested in cracking mysteries. My students and I used computational linguistics to identify which of the three authors of The Federalist Papers—Alexander Hamilton, James Madison, and John Jay—was responsible for any given essay. We tried using the same method to confirm authorship of Biblical passages. More recently, we’ve been throwing some natural-language processing techniques into an effort to decode the Voynich Manuscript, an early 15th-century document written in an unknown language and an unknown script. These other projects yield to one or another key method of inquiry. The Somerton Man problem posed a broader challenge.
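As a flavor of that earlier work: the standard approach to the Federalist Papers problem scores each essay on the frequencies of common function words, which authors use unconsciously and consistently. The sketch below is illustrative; the word list and classifier are assumptions, not necessarily the exact pipeline used.

```python
# Minimal function-word authorship attribution, in the spirit of the
# classic Federalist Papers analyses. Word list and model are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

FUNCTION_WORDS = ["upon", "whilst", "while", "on", "by", "to", "of", "the"]

def train_attributor(known_essays, known_authors):
    """Fit a classifier on essays whose authorship is undisputed."""
    vec = CountVectorizer(vocabulary=FUNCTION_WORDS)
    X = vec.fit_transform(known_essays)
    clf = LogisticRegression(max_iter=1000).fit(X, known_authors)
    return vec, clf

def attribute(vec, clf, disputed_essay):
    """Return the most probable author of a disputed essay."""
    return clf.predict(vec.transform([disputed_essay]))[0]

# Usage: vec, clf = train_attributor(essays, authors)
#        attribute(vec, clf, disputed_text)  # e.g. "Madison"
```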
My one great advantage has been my access to students and to scientific instruments at the University of Adelaide, where I am a professor of electrical and electronic engineering. In 2009, I established a working group at the university’s Center for Biomedical Engineering.
One question surrounding the Somerton Man had already been solved by sleuths of a more literary bent. In 1949, a pathologist had found a bit of paper concealed in one of the dead man’s pockets, and on it were printed the words Tamám Shud, the Persian for “finished.” The phrase appears at the end of Edward FitzGerald’s translation of the Rubáiyát of Omar Khayyám, a poem that remains popular to this day.
The police asked the public for copies of the book in which the final page had been torn out. A man found such a book in his car, where apparently it had been thrown in through an open window. The book proved a match.
The back cover of the book also included scribbled letters, which were at first thought to constitute an encrypted message. But statistical tests carried out by my team showed that it was more likely a string of the initial letters of words. Through computational techniques, we eliminated all of the cryptographic codes known in the 1940s, leaving as a remaining possibility a one-time pad, in which each letter is based on a secret source text. We ransacked the poem itself and other texts, including the Bible and the Talmud, but we never identified a plausible source text. It could have been a pedestrian aide-mémoire—to list the names of horses in an upcoming race, for example. Moreover, our research indicates that it doesn’t have the structural sophistication of a code. The Persian phrase could have been the man’s farewell to the world: his suicide note.
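The flavor of those statistical tests can be shown in a few lines: score the letters under the distribution of word-initial letters in English versus the distribution of all English letters. The frequency tables below are rough, truncated illustrative values, not the team’s actual corpus statistics.

```python
# Compare two hypotheses for a string of letters: drawn from word-initial
# letter frequencies (an initialism) or from overall letter frequencies
# (ordinary text or a substitution cipher). Tables are rough illustrations.
from collections import Counter
import math

ALL_LETTERS = {"E": 0.127, "T": 0.091, "A": 0.082, "O": 0.075, "I": 0.070,
               "R": 0.060, "D": 0.043, "W": 0.024, "B": 0.015}
INITIALS    = {"T": 0.160, "A": 0.117, "O": 0.076, "I": 0.073, "W": 0.055,
               "B": 0.044, "D": 0.031, "E": 0.028, "R": 0.017}

def log_likelihood(text, letter_probs):
    """Log-probability of the observed letters under a frequency model;
    letters missing from the table get a small floor probability."""
    counts = Counter(c for c in text.upper() if c.isalpha())
    return sum(n * math.log(letter_probs.get(c, 1e-3)) for c, n in counts.items())

line = "WRGOABABD"  # one line of the scribbled letters
print(log_likelihood(line, INITIALS) > log_likelihood(line, ALL_LETTERS))  # True
```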
Also scribbled on the back cover was a telephone number that led to one Jo Thomson, a woman who lived merely a five-minute walk from where the Somerton Man had been found. Interviewers then and decades later reported that she had seemed evasive; after her death, some of her relatives and friends said they speculated that she must have known the dead man. I discovered a possible clue: Thomson’s son was missing his lateral incisors, the two teeth that normally flank the central incisors. This condition, found in a very small percentage of the population, is often congenital; oddly, the Somerton Man had it, too. Were they related?
And yet the attempt to link Thomson to the body petered out. Early in the investigation, she told the police that she had given a copy of the Rubáiyát to a lieutenant in the Australian Army whom she had known during the war, and indeed, that man turned out to own a copy. But Thomson hadn’t seen him since 1945, he was very much alive, and the last page of his copy was still intact. A trail to nowhere, one of many that were to follow.
We engineers in the 21st century had several other items to examine. First was a plaster death mask that had been made six months after the man died, during which time the face had flattened. We tried several methods to reconstruct its original appearance: In 2013 we commissioned a picture by Greg O’Leary, a professional portrait artist. Then, in 2020, we approached Daniel Voshart, who designs graphics for Star Trek movies. He used a suite of professional AI tools to create a lifelike reconstruction of the Somerton Man. Later, we obtained another reconstruction by Michael Streed, a U.S. police sketch artist. We published these images, together with many isolated facts about the body, the teeth, and the clothing, in the hope of garnering insights from the public. No luck.
As the death mask had been molded directly off the Somerton Man’s head, neck, and upper body, some of the man’s hair was embedded in the plaster of Paris—a potential DNA gold mine. At the University of Adelaide, I had the assistance of a hair forensics expert, Janette Edson. In 2012, with the permission of the police, Janette used a magnifying glass to find where several hairs came together in a cluster. She was then able to pull out single strands without breaking them or damaging the plaster matrix. She thus secured the soft, spongy hair roots as well as several lengths of hair shaft. The received wisdom of forensic science at the time held that the hair shaft would be useless for DNA analysis without the hair root.
Janette performed our first DNA analysis in 2015 and, from the hair root, was able to place the sample within a maternal genetic lineage, or haplotype, known as “H,” which is widely spread around Europe. (Such maternally inherited DNA comes not from the nucleus of a cell but from the mitochondria.) The test therefore told us little we hadn’t already known. The concentration of DNA was far too low for the technology of the time to piece together the sequencing we needed.
Fortunately, sequencing tools continued to improve. In 2018, Guanchen Li and Jeremy Austin, also at the University of Adelaide, obtained the entire mitochondrial genome from hair-root material and narrowed down the maternal haplotype to H4a1a1a.
However, to identify Somerton Man using DNA databases, we needed to go to autosomal DNA—the kind that is inherited from both parents. There are more than 20 such databases, 23andMe and Ancestry being the largest. These databases require sequences of from 500,000 to 2,000,000 single nucleotide polymorphisms, or SNPs (pronounced “snips”). The concentration levels of autosomes in the human cell tend to be much lower than those of the mitochondria, and so Li and Austin were able to obtain only 50,000 SNPs, of which 16,000 were usable. This was a breakthrough, but it still wasn’t good enough to work on a database.
In 2022, at the suggestion of Colleen Fitzpatrick, a former NASA employee who had trained as a nuclear physicist but then became a forensic genetics expert, I sent a hair sample to Astrea Forensics, a DNA lab in the United States. This was our best hair-root sample, one that I had nervously guarded for 10 years. The result from Astrea came back—and it was a big flop.
Seemingly out of options, we tried a desperate move. We asked Astrea to analyze a 5-centimeter-long shaft of hair that had no root at all. Bang! The company retrieved 2 million SNPs. The identity of the Somerton Man was now within our reach.
So why did the rootless shaft work in our case?
The DNA analysis that police use for standard crime-solving relies on only 20 to 25 short tandem repeats (STRs) of DNA. That’s fine for police, who mostly do one-to-one matches to determine whether the DNA recovered at a crime scene matches a suspect’s DNA.
But finding distant cousins of the Somerton Man on genealogical databases constitutes a one-to-many search, and for that you typically need around 500,000 markers. For these genealogical searches, SNPs are used because they contain information on ethnicity and ancestry generally. Note that SNPs have around 50 to 150 base pairs of nucleotides, whereas typical STRs tend to be longer, containing 80 to 450 base pairs. The hair shaft contains DNA that is mostly fragmented, so it’s of little use when you’re seeking longer STR segments but it’s a great source of SNPs. So this is why crime forensics traditionally focused on the root and ignored the shaft, although this practice is now changing very slowly.
Another reason the shaft was such a trove of DNA is that keratin, its principal component, is a very tough protein, and it had protected the DNA fragments lodged within it. The 74-year-old soft spongy hair root, on the other hand, had not protected the DNA to the same extent. We set a world record for obtaining a human identification, using forensic genealogy, from the oldest piece of hair shaft. Several police departments in the United States now use hair shafts to retrieve DNA, as I am sure many will start to do in other countries, following our example.
Libraries of SNPs can be used to untangle the branching lines of descent in a family tree. We uploaded our 2 million SNPs to GEDmatch Pro, an online genealogical database located in Lake Worth, Fla. (and recently acquired by Qiagen, a biotech company based in the Netherlands). The closest match was a rather distant relative based in Victoria, Australia. Together with Colleen Fitzpatrick, I built out a family tree containing more than 4,000 people. On that tree we found a Charles Webb, son of a baker, born in 1905 in Melbourne, with no date of death recorded.
Charles never had children of his own, but he had five siblings, and I was able to locate some of their living descendants. Their DNA was a dead match. I also found a descendant of one of his maternal aunts, who agreed to undergo a test. When a positive result came through on 22 July 2022, we had all the evidence we needed. This was our champagne moment.
In late 2021, police in South Australia ordered an exhumation of the Somerton Man’s body for a thorough analysis of its DNA. At the time we prepared this article, they had not yet confirmed our result, but they did announce that they were “cautiously optimistic” about it.
All at once, we were able to fill in a lot of blank spaces. Webb was born on 16 November 1905, in Footscray, a suburb of Melbourne, and educated at a technical college, now Swinburne University of Technology. He later worked as an electrical technician at a factory that made electric hand drills. Our DNA tests confirmed he was not related to Thomson’s son, despite the coincidence of their missing lateral incisors.
We discovered that Webb had married a woman named Dorothy Robinson in 1941 and had separated from her in 1947. She filed for divorce on grounds of desertion, and the divorce lawyers visited his former place of work, confirming that he had quit around 1947 or 1948. But they could not determine what happened to him after that. The divorce finally came through in 1952; in those days, divorces in Australia were granted only five years after separation.
At the time of Webb’s death his family had become quite fragmented. His parents were dead, a brother and a nephew had died in the war, and his eldest brother was ill. One of his sisters died in 1955 and left him money in her will, mistakenly thinking he was still alive and living in another state. The lawyers administering the will were unable to locate Charles.
We got more than DNA from the hair: We also vaporized a strand of hair by scanning a laser along its length, a technique known as laser ablation. By performing mass spectrometry on the vapor, we were able to track Webb’s varying exposure to lead. A month before Webb’s death, his lead level was high, perhaps because he had been working with the metal, maybe soldering with it. Over the next month’s worth of hair growth, the lead concentration declined; it reached its lowest level at his death. This might be a sign that he had moved.
With a trove of photographs from family albums and other sources, we were able to compare the face of the young Webb with the artists’ reconstructions we had commissioned in 2013 and 2021 and the AI reconstruction we had commissioned in 2020. Interestingly, the AI reconstruction had best captured his likeness.
A group photograph, taken in 1921, of the Swinburne College football team, included a young Webb. Clues found in newspapers show that he continued to participate in various sports, which would explain the athletic condition of his body.
What’s interesting about solving such a case is how it relies on concepts that may seem counterintuitive to forensic biologists but are quite straightforward to an electronics engineer. For example, when dealing with a standard crime scene that uses only two dozen STR markers, one observes very strict protocols to ensure the integrity of the full set of STRs. When dealing with a case with 2 million SNPs, by contrast, things are more relaxed. Many of the old-school STR protocols don’t apply when you have access to a lot of information. Many SNPs can drop out, some can even be “noise,” the signal may not be clean—and yet you can still crack the case!
Engineers understand this concept well. It’s what we call graceful degradation—when, say, a few flipped bits on a digital video signal are hardly noticed. The same is true for a large SNP file.
And so, when Astrea retrieved the 2 million SNPs, the company didn’t rely on the traditional framework for DNA-sequencing reads. It used a completely different mathematical framework, called imputation. The concept of imputation is not yet fully appreciated by forensics experts who have a biological background. However, for an electronics engineer, the concept is similar to error correction: We infer and “impute” bits of information that have dropped out of a received digital signal. Such an approach is not possible with a few STRs, but when handling over a million SNPs, it’s a different ball game.
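A toy illustration of the idea, with made-up haplotypes: missing SNP calls are inferred from the reference rows that best match the observed positions, much as an error-correcting code infers dropped bits from redundancy. Real imputation tools use statistical models over large haplotype panels; this sketch only conveys the flavor.

```python
# Toy genotype imputation: fill missing SNP calls (None) from the
# closest-matching reference haplotypes. Reference data is made up.
REFERENCE = [
    [0, 1, 1, 0, 0, 1, 0, 1],
    [0, 1, 1, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
]

def impute(sample):
    """Replace None entries using the nearest reference haplotypes."""
    def dist(ref):  # mismatches at observed positions only
        return sum(1 for s, r in zip(sample, ref) if s is not None and s != r)
    best = min(dist(r) for r in REFERENCE)
    nearest = [r for r in REFERENCE if dist(r) == best]
    return [s if s is not None else round(sum(r[i] for r in nearest) / len(nearest))
            for i, s in enumerate(sample)]

print(impute([0, 1, None, 0, 0, 1, None, 1]))  # -> [0, 1, 1, 0, 0, 1, 0, 1]
```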
Much of the work on identifying Charles Webb from his genealogy had to be done manually because there are simply no automated tools for the task. As an electronics engineer, I now see possible ways to make tools that would speed up the process. One such tool my team has been working on, together with Colleen Fitzpatrick, is software that can input an entire family tree and represent all of the birth locations as colored dots on Google Earth. This helps to visualize geolocation when dealing with a large and complex family.
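A minimal sketch of what such a tool can look like, assuming the tree has already been exported to simple (name, branch, latitude, longitude) records; the records and color scheme below are placeholders, and a real tree would need GEDCOM parsing. The file it writes opens directly in Google Earth.

```python
# Write family-tree birth locations as colored dots in a KML file for
# Google Earth. Records and colors are placeholders for illustration.
RELATIVES = [  # (name, branch, latitude, longitude)
    ("Charles Webb", "paternal", -37.7998, 144.8998),  # Footscray, Melbourne
    ("Example cousin", "maternal", -37.8136, 144.9631),
]

COLORS = {"paternal": "ff0000ff", "maternal": "ffff0000"}  # KML uses aabbggrr

def write_kml(path, relatives):
    with open(path, "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n')
        for name, branch, lat, lon in relatives:
            f.write(f"<Placemark><name>{name}</name>"
                    f"<Style><IconStyle><color>{COLORS[branch]}</color></IconStyle></Style>"
                    f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"  # lon,lat order
                    "</Placemark>\n")
        f.write("</Document></kml>\n")

write_kml("family_tree.kml", RELATIVES)
```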
The Somerton Man case still has its mysteries. We cannot yet determine where Webb lived in his final weeks or what he was doing. Although the literary clue he left in his pocket was probably an elliptical suicide note, we cannot confirm the exact cause of death. There is still room for research; there is much we do not know.
This article appears in the April 2023 print issue as “Finding Somerton Man.”
As technology continues to evolve, STEM education is needed more than ever. With the vast technical expertise of its 400,000-plus members and volunteers, IEEE is a leader in engineering and technology education. Its technical societies and its councils, sections, and regional groups offer educational events and resources at every level to support technical professions and prepare the workforce of tomorrow.
IEEE offers many ways to support the educational needs of learners. For preuniversity students, the organization offers summer camps and other opportunities to explore science, technology, engineering, and mathematics careers. IEEE’s continuing education courses allow professionals to stay up to date on technology, keep their skills sharp, and learn new things.
From 2 to 8 April, IEEE is highlighting resources available to students, educators, and technical professionals with IEEE Education Week. The annual celebration highlights educational opportunities provided by the world’s largest technical professional association and its many organizational units, societies, and councils.
Here are some of the events and resources available during this year’s Education Week.
Climate Change: IEEE’s Role in Bringing Technology Solutions to Meet the Challenge
3 April, noon to 1 p.m. EDT
IEEE President and CEO Saifur Rahman kicks off Education Week with a session on how the organization can serve as a vital connection between policymakers and the engineering and technology communities in bringing technological solutions to meet the universal challenge of climate change. Rahman plans to share how IEEE is committed to helping mitigate the effects of climate change through pragmatic and accessible technical solutions, as well as by providing engineers and technologists with a neutral space for discussion and action. The webinar also addresses the importance of educating the energy workforce.
3 April, 9 to 10 a.m. EDT
IEEE REACH (Raising Engineering Awareness through the Conduit of History) provides teachers with resources to help them explain the history of technology and the roles played by engineers. During this webinar, participants can learn how REACH can enhance the classroom experience.
5 April, 11 to 11:45 a.m. EDT
Many people are sharing their expertise on TikTok, YouTube, and other online platforms. When sharing knowledge in a multimedia-rich environment, there are research-proven principles that can be applied to enhance the presentation—which in turn promotes knowledge transfer. This webinar is designed to show participants how to apply the principles to their presentations.
Here are some additional offerings and resources available during IEEE Education Week.
For a list of webinars and events and more resources, visit the IEEE Education Week website.
IEEE-affiliated groups can participate in IEEE Education Week by offering events, resources, and special offers such as discounted courses. Additionally, a tool kit is available to help groups promote IEEE Education Week and their event through newsletters, social media, and more.
The Education Week website provides special offers and discounts as well. You also can support education programs by donating to the IEEE Foundation.
Check out the IEEE Education Week video to learn more.
You do not need to be a member to participate in IEEE Education Week; however, members receive discounted or free access to many of the events and resources.
If you’re not an IEEE member, now would be a great time to join.
Civilians are being killed by Russian weapons just like in Ukraine, says special rapporteur Tom Andrews in call for global action
Myanmar is a “failing state” and the crisis is getting exponentially worse, a UN special rapporteur for the country has warned, urging countries to adopt the same unified resolve that followed the invasion of Ukraine.
“The same types of weapons that are killing Ukrainians are killing people in Myanmar,” Tom Andrews, special rapporteur on the situation of human rights in Myanmar, told the Guardian in an interview, citing the supply of Russian weapons to the junta since the coup two years ago. The junta relies heavily on aircraft from China and Russia, and has increasingly resorted to airstrikes to attempt to quell determined resistance forces.
Continue reading...At least three of the California governor's wine companies are held by SVB, and a bank president sits on the board of his wife’s charity.
Cheering Silicon Valley Bank Bailout, Gavin Newsom Doesn’t Mention He’s a Client (The Intercept)
A year has passed since the launch of the ESA’s Rosalind Franklin rover mission was put on hold, but the work has not stopped for the ExoMars teams in Europe.
In this programme, the ESA Web TV crew travel back to Turin, Italy to talk to the teams and watch as new tests are being conducted with the rover’s Earth twin Amalia while the real rover remains carefully stored in an ultra-clean room.
The 15-minute special programme gives an update on what happened since the mission was cancelled in 2022 because of the Russian invasion of Ukraine, the plan ahead, the new challenges, the latest deep drilling test and the stringent planetary protection measures in place.
ESA’s Rosalind Franklin rover has unique drilling capabilities and an on-board science laboratory unrivalled by any other mission in development. Its twin rover Amalia was back on its wheels and drilled down 1.7 metres into Martian-like ground in Italy – about 25 times deeper than any other rover has ever attempted on Mars. The rover also collected samples for analysis under the watchful eye of European science teams.
ESA, together with international and industrial partners, is reshaping the ExoMars Rosalind Franklin Mission with new European elements, including a lander, and a target date of 2028 for the trip to Mars.
The newly shaped Rosalind Franklin Mission will recover one of the original objectives of ExoMars – to create an independent European capability to access the surface of Mars with a sophisticated robotic payload.
More information: https://www.esa.int/ExoMars
Electric cars barely existed in 2010, when the Tesla Model S was still a glint in Elon Musk’s eye. Now more than 20 million EVs girdle the globe, according to BloombergNEF—and that count is expected to nearly quadruple to 77 million by 2025. A battery will be the high-voltage heart of each of those 77 million electric vehicles, and by far their most expensive component, setting off a worldwide race to ethically source their materials and crank up production to meet exploding demand.
EVs may have seized a record 5.8 percent of the United States market in 2022, according to J.D. Power, and could approach 11 percent of the global market this year. But experts still believe that better batteries, and many more of them, are a key to EVs reaching a market tipping point, even as Reuters projects automakers spending a whopping $1.2 trillion to develop and produce EVs through 2030.
IEEE Spectrum asked five industry experts to gaze deeply into their own crystal balls and outline what needs to happen in the EV battery space to wean the world off fossil-fueled transportation and onto the plug. Here’s what they said:
Upstart Lucid Motors hasn’t built many cars, but it’s built a reputation with the record-setting, 830-kilometer driving range of the Air Grand Touring Performance sedan. That range is a testament to Lucid’s obsessive pursuit of efficiency: The Air uses the same 2170-format cylindrical cells (supplied by Samsung SDI) as many EVs, but ekes out more miles via superior battery management, compact-yet-muscular power units and slippery aerodynamics.
One might think Lucid would call for every electric model to cover such vast distances. Instead, Lucid leaders see a bright future in cars that aim for maximum efficiency — rather than range per se — via smaller, more-affordable batteries.
Lucid’s latest Air Touring model is its most efficient yet on a per-mile basis. Now the world’s most aerodynamic production vehicle, with a 0.197 coefficient of drag, the Air Touring delivers an EPA-rated 7.44 kilometers from each onboard kilowatt hour. Yet propelling this full-size luxury barge still demands a 92 kWh battery aboard.
With all that in mind, the company is developing its next generation of batteries. Extrapolating from company targets, a future compact-size Lucid—think the size of Tesla Model 3 or Model Y—could decisively downsize its battery without sacrificing useful range.
“Our target is to improve efficiency even more,” Dlala says.
“If we do a 250-mile car, we could have a battery that’s just 40 kWh,” or less than half the size of the Air’s. That’s the same size battery as a relatively tiny, base-model Nissan Leaf, whose lesser efficiency translates to just 240 km of EPA-rated driving range.
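The arithmetic behind those numbers, as a back-of-envelope check rather than company data:

```latex
% Range = efficiency x battery capacity
7.44\ \tfrac{\text{km}}{\text{kWh}} \times 92\ \text{kWh}
  \approx 684\ \text{km}\ (\approx 425\ \text{mi})
% A 250-mile (402 km) car on a 40 kWh pack implies
\frac{402\ \text{km}}{40\ \text{kWh}} \approx 10\ \tfrac{\text{km}}{\text{kWh}}
```

That implied figure is roughly a third better than today’s Air Touring, which is what the efficiency target amounts to.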
Such compact batteries would not just save serious money for manufacturers and consumers. They would require fewer raw and refined materials, allowing automakers to theoretically build many more cars from a finite supply. That pack would also weigh about one-third as much as Lucid’s beefiest current battery. The upshot would be a chain of gains that would warm the heart of the most mass-conscious engineer: a lighter chassis to support the smaller battery, slimmer crash structures, downsized brakes. More useable space for passengers and cargo. All those savings would further boost driving range and performance.
This grand design, naturally, would demand an attendant burst of charger development. Once chargers are as ubiquitous and reliable as gas stations—and nearly as fast for fillups—“then I don’t need 400 miles of range,” Dlala says.
All this could grant the ultimate, elusive wish for EV makers: Price parity with internal-combustion automobiles.
“That combination of efficiency and infrastructure will allow us to create competitive prices versus internal combustion cars,” Dlala says.
Ryan Castilloux of Adamas Intelligence says that game-changing EV battery breakthroughs have to date been rare. Yet EV batteries are still central to automakers’ calculus, as they seek a sustainable, affordable supply in a period of explosive growth. In a marketplace starving for what they see as their rightful share of kilowatt-hours, smaller or less-connected automakers especially may go hungry.
“Everyone is competing for a limited supply,” Castilloux says. “That makes for a lumpy growth trajectory in EVs. It’s an immense challenge, and one that won’t go away until the growth slows and the supply side can keep up.”
A battery industry that has succeeded in boosting nickel content for stronger performance, and cutting cobalt to reduce costs, has hit a wall of diminishing returns via chemistry alone. That leaves battery pack design as a new frontier: Castilloux lauds the push to eliminate “aluminum and other zombie materials” to save weight and space. The effort shows in innovations such as large-format cylindrical batteries with higher ratios of active material to surrounding cases—as well as so-called “cell-to-pack” or “pack-to-frame” designs. BMW’s critical “Neue Klasse” EVs, the first arriving in 2025, are just one example: Large-format cells, with no traditional cased modules required, fill an entire open floorpan and serve as a crash-resistant structural member.
"That becomes a low-cost way to generate big improvements in pack density and bolster the mileage of a vehicle," Castilloux says.
That kind of sophisticated chassis and battery design can also help level the playing field, giving new life to "lesser" chemistries—especially lithium iron phosphate (LFP), now the hottest thing in batteries worldwide—that might otherwise be uncompetitive and obsolete.
"Things are moving in the right direction in North America and Europe, but it's too little too late at the moment, and the West is collectively scrambling to meet demand," Castilloux says.
The drivetrain and battery of a Mercedes-Benz EQS electric vehicle on the assembly line at the Mercedes-Benz Group plant in Sindelfingen, Germany, on Monday, February 13, 2023. Krisztian Bocsi/Getty Images
The tragedy, Castilloux says, is that EV demand was anticipated for several years, “but the action is only happening now.”
"China was [the] only one that acted on it, and is now a decade ahead of the rest of the world," he says, in both refining and processing battery materials and in cell production itself.
Tesla also got out in front of legacy automakers by thinking in terms of vertical integration, the need to control the entire supply chain, from lithium brine and cobalt mines to final production and recycling.
"In recent decades, it wouldn't have made sense to think of an automaker becoming a processing or mining company, but now with scarcity of supplies, they have to take drastic measures," Castilloux says.
Automakers are racing to meet soaring EV demand and fill yawning gaps in the market, including building a homegrown supply chain of battery materials as well as batteries. In the United States alone, Atlas Public Policy tallies U.S. $128 billion in announced investments in EV and battery factories and recycling. That still leaves another blind spot: Charging infrastructure. Tesla’s dominant Superchargers aside, many experts cite a patchwork, notoriously unreliable charging network as a leading roadblock to mainstream EV adoption.
“Charging infrastructure is on our wish list of things that need to improve,” said Dan Nicholson, who helps lead General Motors’ new charger initiatives.
The 2021 U.S. Infrastructure Law is providing $7.5 billion to build a network of 500,000 EV chargers by 2030. But rather than own and operate their own chargers like Tesla—akin to automakers running chains of proprietary gas stations—GM, Ford and others argue that standardized, open-source chargers are critical to convince more Americans to kick the ICE habit. Those chargers must be available everywhere people live and work, Nicholson said, and open to drivers of any car brand.
It will help if those chargers actually work: A 2022 study showed nearly 25 percent of public chargers in the San Francisco Bay area—itself a mecca for EV ownership—weren’t functioning properly.
To fill gaps in public networks, GM is collaborating with EVGo on a national network of 2,000 DC fast-charging stalls, located at 500 Pilot and Flying J travel centers, most along major corridors. To reach people where they live, including people with no access to home charging, GM is tapping its more than 4,400 dealers to build up to 10 Level 2 charging stations each, at both dealers and key locations, including underserved urban and rural communities. Nicholson notes that 90 percent of the U.S. population lives within 16 kilometers of a GM dealer.
In his role as an SAE board member, Nicholson also supports future-proof standards for EVs, connectors and chargers. That includes the ISO 15118 international standard that defines two-way communication between EVs and chargers. That standard is key to “Plug and Charge,” the budding interoperability system that allows drivers of any EV to plug into any DC fast charger and simply be billed on the back end. That’s how Teslas have worked since 2012, though with the advantage of a closed system that need only recognize and communicate with Tesla models.
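The gist of Plug and Charge is that the car, not the driver, carries the credentials. The sketch below is our own loose illustration of that idea, not the standard's actual message set; the certificate fields, issuer list, and function names are invented for the example.

```python
# Loose illustration of the Plug and Charge idea (ISO 15118): the vehicle
# presents a signed contract credential, the charger checks it against a
# trusted chain, and billing happens on the back end. All names and fields
# here are hypothetical stand-ins, not the standard's data model.
from dataclasses import dataclass

@dataclass
class ContractCertificate:
    contract_id: str   # e.g., an eMAID-style account identifier
    issuer: str        # mobility operator that provisioned the certificate

TRUSTED_ISSUERS = {"ExampleMobilityOperator"}  # stand-in for a real PKI check

def plug_and_charge(cert: ContractCertificate) -> str:
    if cert.issuer not in TRUSTED_ISSUERS:
        return "fall back to app or card payment"
    # The charger authorizes the session and meters energy for back-end billing.
    return f"session authorized, billed to {cert.contract_id}"

print(plug_and_charge(ContractCertificate("EMAID-US-XYZ-000001", "ExampleMobilityOperator")))
```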
Nicholson said GM is also seeking “uptime guarantees” with charging collaborators. That will allow drivers to see in advance if a charger is operational, and to hold a spot.
“People need to be able to reserve a station, and know it’s going to work when they get there,” he said.
Despite an electric boom year in 2022, some analysts are downgrading forecasts of EV adoption, due to monkey wrenches of unpredictable demand, looming recession and supply-chain issues. S&P Global Mobility remains bullish, predicting that 42 percent of global buyers will choose an EV in 2030, within sight of President Biden’s goal of 50-percent EV penetration.
"That's a lot of growth, but there are plenty of people who won't move along as quickly," said Stephanie Brinley, an analyst at S&P Global Mobility. Pushing EVs to a market majority will require the stars to align. Brinley says the most critical factor is a continued explosion of new EV models at every price point—including the SUVs and pickups that are the lifeblood of U.S. buyers.
Regarding batteries, Brinley says ICE manufacturers with an existing manufacturing footprint, labor force and know-how could find an advantage over relative newcomers. The issue will be how well the likes of General Motors and Ford can manage the transition, from scaling back on ICE production to retraining workers — fewer of whom may be required to produce batteries and motors than ICE powertrains. In February, Ford announced a new $3.5 billion plant in Michigan to build LFP batteries, licensing tech from China’s CATL, currently the world’s largest lithium-ion producer.
"Some (legacy) automakers will use LFP for certain use cases, and solid-state in development could change the dynamic again," Brinley says. "But for the time being, you need both batteries and engines, because people will be buying both."
At some point, Brinley says, it’s a zero-sum game: A flat global market for cars can’t comfortably accommodate both types of powertrains.
“ICE sales have to come down for BEV sales to come up,” Brinley says. “And that’s going to make for a wild market in the next few years.”
NanoGraf is among several start-ups wishing for not just longer-lasting batteries, but a stable, competitive North American supply chain to counter China’s battery dominance. The Inflation Reduction Act has spurred an unprecedented tsunami of homegrown investment, by requiring robust domestic sourcing of batteries and battery materials as a condition of EV tax breaks for manufacturers and consumers. That includes a $35-per-kWh tax credit on every lithium-ion cell produced, and a $7,500 consumer tax break on eligible EVs.
NanoGraf's Connor Hund says the company aims to onshore production of its silicon-anode material at a new Chicago facility beginning in Q2 this year. The company, whose backers include the Department of Defense, claims to have created the most energy-dense 18650 cylindrical cell yet, at 3.8 amp-hours. The key to the technology is a pre-lithiated core that allows an anode silicon percentage as high as 25 percent, versus cells that typically top out at 5 to 7 percent silicon.
“There’s certainly room to boost the range of EVs by 20, 30 or even 50 percent by using silicon,” he says.
But whether it’s NanoGraf, or the drive toward large-format 4680 cylindrical cells led by Tesla and Panasonic, scaling up to mass production remains a major hurdle. NanoGraf plans enough initial capacity for 35 to 50 tonnes of its anode materials. But it would need 1,000 tonnes annually to crack the automotive space, with its now-bottomless appetite for batteries—at competitive cost with what automakers currently pay for cells from China, South Korea or elsewhere.
“It’s so cutthroat in that space, and there’s a scale you have to reach,” Hund says.
One wish is being granted: No one is waiting for a magic bullet in technology, including from solid state batteries that many experts now insist won’t be ready for automobiles until 2030 or later. Instead, automakers and battery manufacturers are on board with multiple solutions, including the stunning rise of LFP cells in Teslas, Fords and other models.
"There's a shortage of all these materials, not enough nickel, cobalt or manganese, so companies targeting different consumers with different solutions is really helpful," Hund says.
Western countries have struggled to take a holistic view of everything that’s required, especially when incumbent solutions from China are available. It’s not just raw materials, anodes or cathodes, but the cells, modules, electrolyte and separators.
“You need companies onshoring all those components to have a robust U.S. supply chain,” he says. “We need everyone upstream and downstream of us, whether it’s the graphite, electrolyte or separator. Everyone is just one piece of the puzzle.”
Hund says safer batteries should also be on the industry wish list, as high-profile fires in Teslas and other models threaten to sully EVs' reputation or keep skeptical consumers on the fence.
“We can’t have batteries self-discharging at the rate they are now,” he says, especially with automakers gearing up around the world for their biggest EV invasion yet.
“Getting ahead of this now, versus pushing millions of cars onto the road and dealing with safety later, is very important.”
Update 14 March 2023: This story was corrected to reflect that Lucid’s Air Touring model carries a 92 kWh battery. (A previous version of this story stated that the battery’s capacity was 112 kWh.)
The perpetual problem with scaling up most quantum computers is a seemingly mundane one—too many cables. Experts say quantum computers might need at least a million qubits kept at near absolute zero to do anything computationally noteworthy. But connecting them all by coaxial cable to control and readout electronics, which work at room temperature, would be impossible.
Computing giants such as IBM, Google, and Intel hope to solve that problem with cryogenic silicon chips that can operate close to the qubits themselves. But researchers have recently put forward some more exotic solutions that could quicken the pace.
At the IEEE International Electron Devices Meeting (IEDM) in December, two groups of researchers suggested that silicon might not be the best answer. Their solutions instead rely on semiconductors and transistors more commonly aimed at near-terahertz-frequency radio. And in February at the IEEE International Solid-State Circuits Conference (ISSCC), a separate research group proposed technology that could use terahertz radio to eliminate communication cables altogether.
A type of device called a high-electron-mobility transistor (HEMT), made from compound semiconductors such as indium gallium arsenide rather than silicon, is a natural at amplifying the kind of RF signals needed to interact with qubits. But researchers at the Korea Advanced Institute of Science and Technology (KAIST) and at IBM Zurich and École Polytechnique Fédérale de Lausanne (EPFL) calculate that it could also do the cable-reducing work of routing, multiplexing, and demultiplexing. Crucially, it could do so with little power loss, which matters because at the coldest parts of the cryogenic chambers used for quantum computers, the refrigerator system can remove only a couple of watts of heat.
HEMTs have a layered semiconductor structure that creates a supernarrow region of free electrons, called a two-dimensional electron gas. Charge moves quickly and with little resistance through this "quantum well," hence the HEMT's proficiency at amplifying high-frequency signals. The KAIST and Swiss teams reasoned that at cryogenic temperatures the 2D electron gas might carry signals with less resistance than metal.
To prove it, they built demultiplexer circuits composed of several transistors and tested them at 5 kelvins. Instead of connecting each transistor to its neighbor with a metal interconnect, they had them share the quantum well. The only metal involved was where the signal entered the multiplexing network and where it exited. "No matter how many transistors there are between the input and output, there are only two sources of resistance," says SangHyeon Kim, associate professor of electrical engineering at KAIST.
The Switzerland-based team built similar structures, measuring a 32 percent reduction in resistance when two transistors were connected through a shared quantum well rather than a metal interconnect. A 1-to-8 multiplexer might need 14 transistors, so that resistance improvement adds up quickly.
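That transistor count is consistent with a simple binary-tree layout, in which a 1-to-N demultiplexer needs 2 + 4 + ... + N, or 2N - 2, switches. A short sketch of that counting argument (the tree topology is our illustrative assumption, not the groups' published schematic):

```python
# Switch count for a binary-tree 1-to-N demultiplexer: each level doubles the
# number of branches, so the total is 2 + 4 + ... + N = 2N - 2 transistors.
# The tree topology is an illustrative assumption, not the groups' schematic.
def tree_demux_transistors(n_outputs: int) -> int:
    count, level = 0, 2
    while level <= n_outputs:
        count += level
        level *= 2
    return count

print(tree_demux_transistors(8))  # 14, matching the 1-to-8 example above
```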
"We're doing quite a lot of things with this technology, some still in planning phase," says Cezar B. Zota, a research staff member at IBM Zurich. His team plans to scale up from its two-transistor test device to a full switching matrix, while Kim's lab is focused on integrating the multiplexers with low-noise amplifiers and other electronics through 3D stacking.
Qubit control signals could be multiplexed to reduce the number of cables going to the quantum computing chip. Transmitting those signals inside the quantum well [blue] of high electron-mobility transistors generates less heat. IBM Research Zurich
Multiplexing can reduce the number of signal cables going to the qubit chip, but what if they could be eliminated altogether? Researchers at MIT, led by associate professor of electrical engineering and computer science Ruonan Han, tested a scheme that would use terahertz waves instead. They settled on near-terahertz radiation—specifically 0.26 THz—because, among other reasons, it was too high a frequency to interfere with the qubit operations and worked with small enough antennas.
A full-powered terahertz transceiver would throw off too much heat to place it near the qubit chip. Instead the MIT team designed a terahertz “backscatter” system. The system would consist of two transceiver chips, one at the top of the refrigerator, where it’s warmest and power consumption is less of an issue, and one at the bottom as part of a 4-kelvin cryogenic control chip linked to the quantum computer chip.
Terahertz radiation is injected into the refrigerator where it’s funneled to the top warm transceiver chip. In “downlink” mode that transceiver encodes data onto the terahertz radiation. The signals travel down the refrigerator to the bottom where they are picked up by an array of patch antennas on the cold transceiver.
Instead of using cables to connect external electronics to quantum computers, MIT researchers propose using terahertz radiation. MIT
To get data from the quantum computing chip, the system switches to uplink mode. The warm transceiver sends a steady beam of terahertz radiation down to the cold transceiver. Switches on that chip alter the antenna circuits, causing them to reflect radiation instead of absorbing it, thereby sending data up to the warm transceiver.
In tests of the system, the uplink could send 4 gigabits per second while adding just 176 femtojoules per bit of heat. The downlink was even more energy efficient, at just 34 femtojoules per bit.
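Multiplying those figures out shows why they matter inside a cryostat that can shed only a couple of watts (the arithmetic is ours):

```python
# Heat load implied by the reported uplink: power = bit rate x energy per bit.
uplink_rate_bps = 4e9       # 4 gigabits per second
uplink_energy_j = 176e-15   # 176 femtojoules per bit

uplink_heat_w = uplink_rate_bps * uplink_energy_j
print(f"{uplink_heat_w * 1e6:.0f} microwatts")  # ~704 uW, far below a watt
```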
For about as long as engineers have talked about beaming solar power to Earth from space, they’ve had to caution that it was an idea unlikely to become real anytime soon. Elaborate designs for orbiting solar farms have circulated for decades—but since photovoltaic cells were inefficient, any arrays would need to be the size of cities. The plans got no closer to space than the upper shelves of libraries.
That’s beginning to change. Right now, in a sun-synchronous orbit about 525 kilometers overhead, there is a small experimental satellite called the Space Solar Power Demonstrator One (SSPD-1 for short). It was designed and built by a team at the California Institute of Technology, funded by donations from the California real estate developer Donald Bren, and launched on 3 January—among 113 other small payloads—on a SpaceX Falcon 9 rocket.
“To the best of our knowledge, this would be the first demonstration of actual power transfer in space, of wireless power transfer,” says Ali Hajimiri, a professor of electrical engineering at Caltech and a codirector of the program behind SSPD-1, the Space Solar Power Project.
The Caltech team is waiting for a go-ahead from the operators of a small space tug to which it is attached and which provides guidance and attitude control. If all goes well, SSPD-1 will spend at least five to six months testing prototype components of possible future solar stations in space. In the next few weeks, the project managers hope to unfold a lightweight frame, called DOLCE (short for Deployable on-Orbit ultraLight Composite Experiment), on which parts of future solar arrays could be mounted. Another small assembly on the spacecraft contains samples of 32 different types of photovoltaic cells, intended to see which would be most efficient and robust. A third part of the vehicle contains a microwave transmitter, set up to prove that energy from the solar cells can be sent to a receiver. For this first experiment, the receivers are right there on board the spacecraft, but if it works, an obvious future step would be to send electricity via microwave to receivers on the ground.
Caltech's Space Solar Power Demonstrator, shown orbiting Earth in this artist's conception, was launched on 3 January. Caltech
One can dismiss the 50-kilogram SSPD-1 as yet another nonstarter, but a growing army of engineers and policymakers take solar energy from space seriously. Airbus, the European aerospace company, has been testing its own technology on the ground, and government agencies in China, Japan, South Korea, and the United States have all mounted small projects. "Recent technology and conceptual advances have made the concept both viable and economically competitive," said Frazer-Nash, a British engineering consultancy, in a 2021 report to the U.K. government. Engineers working on the technology say microwave power transmission would be safe: microwaves are not ionizing radiation, which harms people and other things in its path.
No single thing has happened to start this renaissance. Instead, say engineers, several advances are coming together.
For one thing, the cost of launching hardware into orbit keeps dropping, led by SpaceX and other, smaller companies such as Rocket Lab. SpaceX has a simplified calculator on its website, showing that if you want to launch a 50-kg satellite into sun-synchronous orbit, they’ll do it for US $275,000.
Meanwhile, photovoltaic technology has improved, step by step. Lightweight electronic components keep getting better and cheaper. And there is political pressure as well: Governments and major companies have made commitments to decarbonize in the battle against global climate change, committing to renewable energy sources to replace fossil fuels.
Most solar power, at least for the foreseeable future, will be Earth-based, which will be cheaper and easier to maintain than anything anyone can launch into space. Proponents of space-based solar power say that for now, they see it as best used for specialty needs, such as remote outposts, places recovering from disasters, or even other space vehicles.
But Hajimiri says don’t underestimate the advantages of space, such as unfiltered sunlight that is far stronger than what reaches the ground and is uninterrupted by darkness or bad weather—if you can build an orbiting array light enough to be practical.
Most past designs, dictated by the technology of their times, included impossibly large truss structures to hold solar panels and wiring to route power to a central transmitter. The Caltech team would dispense with all that. An array would consist of thousands of independent tiles as small as 100 square centimeters, each with its own solar cells, transmitter, and avionics. They might be loosely connected, or they might even fly in formation.
Time-lapse images show the experimental DOLCE frame for an orbiting solar array being unfolded in a clean room. Caltech
“The analogy I like to use is that it’s like an army of ants instead of an elephant,” says Hajimiri. Transmission to receivers on the ground could be by phased array—microwave signals from the tiles synchronized so that they can be aimed with no moving parts. And the parts—the photovoltaic cells with their electronics—could perhaps be so lightweight that they’re flexible. New algorithms could keep their signals focused.
“That’s the kind of thing we’re talking about,” said Harry Atwater, a coleader of the Caltech project, as SSPD-1 was being planned. “Really gossamer-like, ultralight, the limits of mass-density deployable systems.”
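The beam steering Hajimiri describes rests on standard phased-array math: each tile shifts its signal's phase in proportion to its position, so the wavefronts add up in the chosen direction. Here is a minimal sketch, with the frequency and steering angle as our own assumed values (only the roughly 10-centimeter tile pitch follows from the article's 100-square-centimeter tiles):

```python
import numpy as np

# Textbook phased-array steering, not Caltech's flight code: each tile applies
# a phase offset proportional to its position so the beam points at the target.
freq_hz = 10e9                     # assumed microwave frequency, for illustration
wavelength_m = 3e8 / freq_hz
tile_pitch_m = 0.10                # ~10 cm, from the 100-square-centimeter tiles
steer_angle_rad = np.radians(20)   # assumed beam angle off boresight

positions = np.arange(8) * tile_pitch_m
phases_rad = -2 * np.pi * positions * np.sin(steer_angle_rad) / wavelength_m
print(np.degrees(phases_rad) % 360)  # per-tile phase settings, in degrees
```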
If it works out, in 30 years maybe there could be orbiting solar power fleets, adding to the world’s energy mix. In other words, as a recent report from Frazer-Nash concluded, this is “a potential game changer.”
The technical challenge of missile defense has been compared with that of hitting a bullet with a bullet. Then there is the still tougher economic challenge of using an expensive interceptor to kill a cheaper target—like hitting a lead bullet with a golden one.
Maybe trouble and money could be saved by shooting down such targets with a laser. Once the system was designed, built, and paid for, the cost per shot would be low. Such considerations led planners at the Pentagon to seek a solution from Lockheed Martin, which has just delivered a 300-kilowatt laser to the U.S. Army. The new weapon combines the output of a large bundle of fiber lasers of varying frequencies to form a single beam of white light. This laser has been undergoing tests in the lab, and it should see its first field trials sometime in 2023. General Atomics, a military contractor in San Diego, is also developing a laser of this power for the Army based on what’s known as the distributed-gain design, which has a single aperture.
Both systems offer the prospect of being inexpensive to use. The electric bill itself would range "from US $5 to $10" for a pulse lasting a few seconds, says Michael Perry, the vice president in charge of laser systems for General Atomics.
Why are we getting ray guns only now, more than a century after H.G. Wells imagined them in his sci-fi novel The War of the Worlds? Put it down partly to the rising demand for cheap antimissile defense, but it’s mainly the result of technical advances in high-energy lasers.
The old standby for powerful lasers employed chemical reactions in flowing gas. That method was clumsy, heavy, and dangerous, and the laser itself became a flammable target for enemies to attack. The advantage was that these chemical lasers could be made immensely powerful, a far cry from the puny pulsed ruby lasers that wowed observers back in the 1960s by punching holes in razor blades (at power levels jocularly measured in “gillettes”).
By 2014, fiber lasers had reached the point where they could be considered for weapons, and one 30-kW model was installed on the USS Ponce, where it demonstrated the ability to shoot down speedboats and small drones at relatively close range. The 300-kW fiber lasers being employed now in the two Army projects emit about 100 kW in optical power, enough to burn through much heftier targets (not to mention quite a few gillettes) at considerable distances.
“A laser of that class can be effective against a wide variety of targets, including cruise missiles, mortars, UAVs, and aircraft,” says Perry. “But not reentry vehicles [launched by ballistic missiles].” Those are the warheads, and to ward them off, he says, you’d probably have to hit the rocket when it’s still in the boost phase, which would mean placing your laser in orbit. Laser tech is still far from performing such a feat.
Even so, these futuristic weapons will no doubt find plenty of applications in today’s world. Israel made news in April by field-testing an airborne antimissile laser called Iron Beam, a play on the name Iron Dome, the missile system it has used to down rockets fired from Gaza. The laser system, reportedly rated at about 100 kW, is still not in service and hasn’t seen combat, but one day it may be able to replace some, if not all, of Iron Dome’s missiles with photons. Other countries have similar capabilities, or say they do. In May, Russia said it had used a laser to incinerate a Ukrainian drone from 5 kilometers away, a claim that Ukraine’s president, Volodymyr Zelenskyy, derided.
A missile is destroyed by a low-power, 2013 version of Lockheed Martin’s fiber laser www.youtube.com
Not all ray guns must be lasers, though. In March, Taiwan News reported that Chinese researchers had built a microwave weapon that in principle could be placed in orbit from where its 5-megawatt pulses could fry the electronic heart of an enemy satellite. But making such a machine in the lab is quite different from operating it in the field, not to mention in outer space, where supplying power and removing waste heat constitute major problems.
Because laser performance falls off in bad weather, lasers can't be relied on by themselves to defend critically important targets. They must instead be paired with kinetic weapons—missiles or bullets—to create a layered defense system.
“With lasers, if you can see it, you can kill it; typically rain and snow are not big deterrents,” says Robert Afzal, an expert on lasers at Lockheed Martin. “But a thundercloud—that’s hard.”
Afzal says that the higher up a laser is placed, the less interference it will face, but there is a trade-off. “With an airplane you have the least amount of resources—least volume, least weight—that is available to you. On a ship, you have a lot more resources available, but you’re in the maritime atmosphere, which is pretty hazy, so you may need a lot more power to get to the target. And the Army is in between: It deals with closer threats, like rockets and mortars, and they need a deep magazine, because they deal with a lot more targets.”
In every case, the point is to use expensive antimissile missiles only when you must. Israel opted to pursue laser weapons in part because its Iron Dome missiles cost so much more than the unguided, largely homemade rockets they defend against. Some of the military drones that Russia and Ukraine are now flying wouldn’t break the budget of the better-heeled sort of hobbyist. And it would be a Pyrrhic victory indeed to shoot them from the sky with projectiles so costly that you went broke.
This article appears in the January 2023 print issue as "Economics Drives a Ray-Gun Resurgence."
Nearly 90% of the multibillion-dollar federal lobbying apparatus in the United States serves corporate interests. In some cases, the objective of that money is obvious. Google pours millions into lobbying on bills related to antitrust regulation. Big energy companies expect action whenever there is a move to end drilling leases for federal lands, in exchange for the tens of millions they contribute to congressional reelection campaigns.
But lobbying strategies are not always so blunt, and the interests involved are not always so obvious. Consider, for example, a 2013 ...
In 2001, a team of engineers at a then-obscure R&D company called AC Propulsion quietly began a groundbreaking experiment. They wanted to see whether an electric vehicle could feed electricity back to the grid. The experiment seemed to prove the feasibility of the technology. The company’s president, Tom Gage, dubbed the system “vehicle to grid” or V2G.
The concept behind V2G had gained traction in the late 1990s after California’s landmark zero-emission-vehicle (ZEV) mandate went into effect and compelled automakers to commercialize electric cars. In V2G, environmental-policy wonks saw a potent new application of the EV that might satisfy many interests. For the utilities, it promised an economical way of meeting rising demand for electricity. For ratepayers, it offered cheaper and more reliable electricity services. Purveyors of EVs would have a new public-policy rationale backing up their market. And EV owners would become entrepreneurs, selling electricity back to the grid.
AC Propulsion’s experiment was timely. It occurred in the wake of the California electricity crisis of 2000 and 2001, when mismanaged deregulation, market manipulation, and environmental catastrophe combined to unhinge the power grid. Some observers thought V2G could prevent the kinds of price spikes and rolling blackouts then plaguing the Golden State. Around the same time, however, General Motors and other automakers were in the process of decommissioning their battery EV fleets, the key component of V2G.
AC Propulsion’s president, Tom Gage, explains the company’s vehicle-to-grid technology at a 2001 conference in Seattle. Photo-illustration: Max-o-matic; photo source: Alec Brooks
The AC Propulsion experiment thus became an obscure footnote in the tortuous saga of the green automobile. A decade later, in the 2010s, the battery EV began an astounding reversal of fortune, thanks in no small part to the engineers at ACP, whose electric-drive technology informed the development of the Roadster, the car that launched Tesla Motors. By the 2020s, automakers around the world were producing millions of EVs a year. And with the revival of the EV, the V2G concept was reborn.
If a modern electronics- and software-laden car can be thought of as a computer on wheels, then an electric car capable of discharging electricity to the grid might be considered a power plant on wheels. And indeed, that’s how promoters of vehicle-to-grid technology perceive the EV.
Keep in mind, though, that electricity’s unique properties pose problems to anyone who would make a business of producing and delivering it. Electricity is a commodity that is bought and sold, and yet unlike most other commodities, it cannot easily be stored. Once electricity is generated and passes into the grid, it is typically used almost immediately. If too much or too little electricity is present in the power grid, the network can suddenly become unbalanced.
At the turn of the 20th century, utilities promoted the use of electric truck fleets to soak up excess electricity. Photo-illustration: Max-o-matic; photo source: M&N/Alamy
Some operators of early direct-current power plants at the turn of the 20th century solved the problem of uneven power output from their generators by employing large banks of rechargeable lead-acid batteries, which served as a kind of buffer to balance the flow of electrons. As utilities shifted to more reliable alternating-current systems, they phased out these costly backup batteries.
Then, as electricity entrepreneurs expanded power generation and transmission capacity, they faced the new problem of what to do with all the cheap off-peak, nighttime electricity they could now produce. Utilities reconsidered batteries, not as stationary units but in EVs. As the historian Gijs Mom has noted, enterprising utility managers essentially outsourced the storage of electricity to the owners and users of the EVs then proliferating in northeastern U.S. cities. Early utility companies like Boston Edison and New York Edison organized EV fleets, favoring electric trucks for their comparatively capacious batteries.
In the early years of the automobile, battery-powered electric cars were competitive with cars fueled by gasoline and other types of propulsion. Photo-illustration: Max-o-matic; image source: Shawshots/Alamy
The problems of grid management that EVs helped solve faded after World War I. In the boom of the 1920s, U.S. utility barons such as Samuel Insull massively expanded the country’s grid systems. During the New Deal era, the federal government began funding the construction of giant hydropower plants and pushed transmission into rural areas. By the 1950s, the grid was moving electricity across time zones and national borders, tying in diverse sources of supply and demand.
The need for large-scale electrochemical energy storage as a grid-stabilizing source of demand disappeared. When utilities considered storage technology at all in the succeeding decades, it was generally in the form of pumped-storage hydropower, an expensive piece of infrastructure that could be built only in hilly terrain.
It wasn’t until the 1990s that the electric car reemerged as a possible solution to problems of grid electricity. In 1997, Willett Kempton, a professor at the University of Delaware, and Steve Letendre, a professor at Green Mountain College, in Vermont, began publishing a series of journal articles that imagined the bidirectional EV as a resource for electricity utilities. The researchers estimated that, if applied to the task of generating electricity, all of the engines in the U.S. light-duty vehicle fleet would produce around 16 times the output of stationary power plants. Kempton and Letendre also noted that the average light vehicle was used only around 4 percent of the time. Therefore, they reasoned, a fleet of bidirectional EVs could be immensely useful to utilities, even if it was only a fraction the size of the conventional vehicle fleet.
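The scale of that 16-times claim is easy to sanity-check with round numbers. Every figure below is our own assumption, chosen only to show the order of magnitude, not Kempton and Letendre's published inputs:

```python
# Order-of-magnitude reconstruction of the fleet-vs-grid comparison.
# All inputs are assumed round numbers, not the researchers' actual data.
n_vehicles = 180e6          # assumed U.S. light-duty fleet, late 1990s
engine_power_w = 75e3       # assumed average engine rating, about 100 hp
grid_capacity_w = 0.8e12    # assumed U.S. stationary generating capacity

fleet_power_w = n_vehicles * engine_power_w
print(round(fleet_power_w / grid_capacity_w))  # ~17, i.e., on the order of 16x
```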
AC Propulsion cofounder Wally Rippel converted a Volkswagen microbus into an electric vehicle while he was still a student at Caltech. Photo-illustration: Max-o-matic; photo source: Herald Examiner Collection/Los Angeles Public Library
The engineers at AC Propulsion (ACP) were familiar with the basic precepts of bidirectional EV power. The company was the brainchild of Wally Rippel and Alan Cocconi, Caltech graduates who had worked in the late 1980s and early 1990s as consultants for AeroVironment, then a developer of lightweight experimental aircraft. The pair made major contributions to the propulsion system for the Impact, a battery-powered concept car that AeroVironment built under contract for General Motors. Forerunner of the famous EV1, the Impact was regarded as the most advanced electric car of its day, thanks to its solid-state power controls, induction motor, and integrated charger. The vehicle inspired California’s ZEV mandate, instituted in 1990. As Cocconi told me, the Impact was bidirectional-capable, although that function wasn’t fully implemented.
AeroVironment had encouraged its engineers to take creative initiative in developing the Impact, but GM tightly managed efforts to translate the idiosyncratic car into a production prototype, which rankled Cocconi and Rippel. Cocconi was also dismayed by the automaker's decision to equip the production car with an off-board rather than onboard charger, which he believed would limit the car's utility. In 1992, he and Rippel quit the project and, with Hughes Aircraft engineer Paul Carosa, founded ACP to further develop battery electric propulsion. The team applied their technology to a two-seat sports car called the tzero, which debuted in January 1997.
Electric Car tzero 0-60 3.6 sec faster than Tesla Roadster www.youtube.com
Through the 1990s and into the early 2000s, ACP sold its integrated propulsion systems to established automakers, including Honda, Volkswagen, and Volvo, for use in production models being converted into EVs. For car companies, this was a quick and cheap way to gain experience with battery electric propulsion while also meeting any quota they may have been subject to under the California ZEV mandate.
By the turn of the millennium, however, selling EV propulsion systems had become a hard way to make a living. In early 2000, when GM announced it had ceased production of the EV1, it signaled that the automaking establishment was abandoning battery electric cars. ACP looked at other ways of marketing its technology and saw an opportunity in the California electricity crisis then unfolding.
Traditionally, the electricity business combined several discrete services, including some designed to meet demand and others designed to stabilize the network. Since the 1930s, these services had been provided by regulated, vertically integrated utilities, which operated as quasi-monopolies. The most profitable was peaking power—electricity delivered when demand was highest. The less-lucrative stabilization services balanced electricity load and generation to maintain system frequency at 60 hertz, the standard for the United States. In a vertically integrated utility, peaking services essentially subsidized stabilization services.
With deregulation in the 1990s, these aggregated services were unbundled and commodified. In California, regulators separated generation from distribution and sold 40 percent of installed capacity to newly created independent power producers that specialized in peaking power. Grid-stabilization functions were reborn as “ancillary services.” Major utilities were compelled to purchase high-cost peaking power, and because retail prices were capped, they could not pass their costs on to consumers. Moreover, deregulation disincentivized the construction of new power plants. At the turn of the millennium, nearly 20 percent of the state’s generating capacity was idled for maintenance.
General Motors' Impact debuted at the 1990 Los Angeles Auto Show. It was regarded as the most advanced electric vehicle of its era. Photo-illustration: Max-o-matic; photo source: Alec Brooks
The newly marketized grid was highly unstable, and in 2000 and 2001, things came to a head. Hot weather caused a demand spike, and the accompanying drought (the beginning of the multidecade southwestern megadrought) cut hydropower capacity. As Californians turned on their air conditioners, peaking capacity had to be kept in operation longer. Then market speculators got into the act, sending wholesale prices up 800 percent and bankrupting Pacific Gas & Electric. Under these combined pressures, grid reliability eroded, resulting in rolling blackouts.
With the grid crippled, ACP’s Gage contacted Kempton to discuss whether bidirectional EV power could help. Kempton identified frequency regulation as the optimal V2G market because it was the most profitable of the ancillary services, constituting about 80 percent of what the California Independent System Operator, the nonprofit set up to manage the deregulated grid, then spent on such services.
The result was a demonstration project, a task organized by Alec Brooks, manager of ACP’s tzero production. Like Rippel and Cocconi, Brooks was a Caltech graduate and part of the close-knit community of EV enthusiasts that emerged around the prestigious university. After earning a Ph.D. in civil engineering in 1981, Brooks had joined AeroVironment, where he managed the development of Sunraycer, an advanced solar-powered demonstration EV built for GM, and the Impact. He recruited Rippel and Cocconi for both jobs. During the 1990s, Brooks formed a team at AeroVironment that provided support for GM’s EV programs until he too tired of the corporate routine and joined ACP in 1999.
Before cofounding AC Propulsion, Alan Cocconi worked on Sunraycer, a solar-powered car for GM. Here, he's testing the car's motor-drive power electronics. Photo-illustration: Max-o-matic; photo source: Alec Brooks
Working with Gage and Kempton, and consulting with the ISO, Brooks set out to understand how the EV might function as a utility resource.
ACP adapted its second-generation AC-150 drivetrain, which had bidirectional capability, for this application. As Cocconi recalled, the bidirectional function had originally been intended for a different purpose. In the 1990s, batteries had far less capacity than they do today, and for the small community of EV users, the prospect of running out of juice and becoming stranded was very real. In such an emergency, a bidirectional EV with charge to spare could come to the rescue.
With funding from the California Air Resources Board, the team installed an AC-150 drive in a Volkswagen Beetle. The system converted AC grid power to DC power to charge the battery and could also convert DC power from the battery to AC power that could feed both external stand-alone loads and the grid. Over the course of the project, the group successfully demonstrated bidirectional EV power using simulated dispatch commands from the ISO’s computerized energy-management system.
This pair of graphs shows how AC Propulsion’s AC-150 drivetrain performed in a demonstration of grid frequency regulation. The magenta line in the upper graph tracks grid frequency centered around 60 hertz. The lower graph indicates power flowing between the grid and the drivetrain; a negative value means power is being drawn from the grid, while a positive value means power is being sent back to the grid.
Photo-illustration: Max-o-matic; photo source: Alec Brooks
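The control idea those graphs capture can be stated in a few lines: draw power when the grid runs fast, supply power when it runs slow. The toy controller below is our own droop-style illustration; the actual demo responded to simulated dispatch commands from the ISO rather than sensing frequency locally, and the gain and power limit here are invented.

```python
# Toy droop-style regulation: push the grid back toward 60 Hz by charging when
# frequency is high and discharging when it is low. The gain and power limit
# are invented; the 2001 demo instead followed ISO dispatch signals.
def regulation_setpoint_kw(freq_hz, gain_kw_per_hz=200.0, max_kw=15.0):
    """Positive = feed power to the grid; negative = draw from the grid."""
    error_hz = 60.0 - freq_hz   # below 60 Hz means the grid needs power
    return max(-max_kw, min(max_kw, gain_kw_per_hz * error_hz))

for f in (59.95, 60.00, 60.05):
    print(f, regulation_setpoint_kw(f))   # +10, 0, -10 kW
```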
The experiment demonstrated the feasibility of the vehicle-to-grid approach, yet it also revealed the enormous complexities involved in deploying the technology. One unpleasant surprise, Brooks recalled, came with the realization that the electricity crisis had artificially inflated the ancillary-services market. After California resolved the crisis—basically by re-regulating and subsidizing electricity—the bubble burst, making frequency regulation as a V2G service a much less attractive business proposition.
The prospect of integrating EV storage batteries into legacy grid systems also raised concerns about control. The computers responsible for automatically signaling generators to ramp up or down to regulate frequency were programmed to control large thermoelectric and hydroelectric plants, which respond gradually to signals. Batteries, by contrast, respond nearly instantaneously to commands to draw or supply power. David Hawkins, an engineer who served as a chief aide to the ISO’s vice president of operations and advised Brooks, noted that the responsiveness of batteries had unintended consequences when they were used to regulate frequency. In one experiment involving a large lithium-ion battery, the control computer fully charged or discharged the unit in a matter of minutes, leaving no spare capacity to regulate the grid.
In principle, this problem might have been solved with software to govern the charging and discharging. The main barrier to V2G in the early 2000s, it turns out, was that the battery EV would have to be massively scaled up before it could serve as a practical energy-storage resource. And the auto industry had just canceled the battery EV. In its place, automakers promised the fuel-cell electric car, a type of propulsion system that does not easily lend itself to bidirectional power flow.
The dramatic revival of the battery EV in the late 2000s and early 2010s led by Tesla Motors and Nissan revived prospects for the EV as a power-grid resource. This EV renaissance spawned a host of R&D efforts in bidirectional EV power, including ECOtality and the Mid-Atlantic Grid Interactive Cars Consortium. The consortium, organized by Kempton in conjunction with PJM, the regional transmission organization responsible for much of the eastern United States, used a car equipped with an AC-150 drivetrain to further study the use of V2G in the frequency-regulation market.
Over time, however, the research focus in bidirectional EV applications shifted from the grid to homes and commercial buildings. In the wake of the Fukushima nuclear disaster in 2011, for instance, Nissan developed and marketed a vehicle-to-building (V2B) charging system that enabled its Leaf EV to provide backup power.
In 2001, AC Propulsion engineers installed an AC-150 drivetrain in a Volkswagen Beetle to demonstrate the feasibility of V2G technology for regulating frequency on the power grid. Photo-illustration: Max-o-matic; photo source: Alec Brooks
The automaker later entered an R&D partnership with Fermata Energy, a Virginia-based company that develops bidirectional EV power systems. Founded by the entrepreneur and University of Virginia researcher David Slutzky in 2010, Fermata considered and then ruled out the frequency-regulation market, on the grounds that it was too small and unscalable.
Slutzky now believes that early markets for bidirectional EV power will emerge in supplying backup power and supplementing peak loads for individual commercial buildings. Those applications will require institutional fleets of EVs. Slutzky and other proponents of EV power have been pressing for a more favorable regulatory environment, including access to the subsidies that states such as California offer to users of stationary storage batteries.
Advocates believe that V2G can help pay for EV batteries. While interest in this idea seems likely to grow as EVs proliferate, the prospect of electric car owners becoming power entrepreneurs appears more distant. Hawkins, the engineer who advised Brooks, holds that the main barriers to V2G are not so much technological as economic: Viable markets need to emerge. The everyday participant in V2G, he argues, would face the difficult task of attempting to arbitrage the difference between wholesale and retail prices while still paying the retail rate. In principle, EV owners could take advantage of the same feed-in tariffs and net-metering schemes designed to enable homeowners to sell surplus solar power back to the grid. But marketing rooftop solar power has proven more complicated and costly for suburbanites than initially assumed, and the same would likely hold true for EV power.
Another major challenge is how to balance the useful lifetime of EV batteries in transportation and non-vehicle applications. That question turns on understanding how EV batteries will perform and age in stationary-power roles. Users would hardly be further ahead, after all, if they substantially degraded their batteries in the act of paying them off. Grid managers could also face problems if they come to depend on EV batteries that prove unreliable or become unavailable as driving patterns change.
In short, the core conundrum of V2G is the conflict of interest that comes from repurposing privately owned automobiles as power plants. Scaling up this technology will require intimate collaboration between automaking and electricity-making, enterprises with substantially different revenue models and systems of regulation. At the moment, the auto industry does not have a clear interest in V2G.
On the other hand, rising electricity demand, concerns about fossil fuels, greenhouse gases, and climate change, and the challenges of managing intermittent renewable energy have all created new justifications for bidirectional EV power. With the proliferation of EVs over the last decade, more demonstrations of the technology are being staged for a host of applications—sometimes expressed as V2X, or vehicle-to-everything. Some automakers, notably Nissan and now Ford, already sell bidirectional EVs, and others are experimenting with the technology. Enterprises are emerging to equip and manage demonstrations of V2B, V2G, and V2X for utilities and big institutional users of electricity. Some ambitious pilot projects are underway, notably in the Dutch city of Utrecht.
Back in 2002, at the end of their experiment, the engineers at AC Propulsion concluded that what V2G really needed was a powerful institutional champion. They went on to make further important contributions to EV technology. Brooks and Rippel worked for the nascent Tesla Motors, while Cocconi continued at ACP until a cancer diagnosis led him to reevaluate his life. In the mid-2000s, Cocconi sold his stake in the company and devoted himself to aviation, his first love, developing remote-controlled solar-powered aircraft. The rebirth of the battery electric car in the 2010s and 2020s reaffirmed the efforts of these three visionary pioneers.
A strong V2G patron has yet to emerge. Nevertheless, the idea of an off-the-shelf energy storage unit that also provides transportation and pays for itself is likely to remain attractive enough to sustain ongoing interest. Who knows? The electric car might still one day become a power plant on wheels.
The author thanks Alec Brooks, Alan Cocconi, David Hawkins, David Slutzky, and Wally Rippel for sharing their experiences. Parts of this article are adapted from the author’s new book, Age of Auto Electric (MIT Press, 2022).
Scientists today reported that they've observed room-temperature superconductivity. Superconductivity is a rarefied state of matter in which a material's electrical resistance drops to zero, letting current flow without loss, while magnetic fields are expelled from its interior. Until now, the phenomenon has been observed only at cryogenic temperatures or phenomenally high pressures. Such a discovery, if confirmed, could open pathways to a range of applications including lossless electric transmission, high-efficiency electric motors, maglev trains, and low-cost magnets for MRI and nuclear fusion.
However, the caveats attached to today’s announcement are considerable. While the researchers say their material retains its coveted lossless properties at temperatures up to 20.6 ºC, it still requires substantial pressure (10 kilobars, or 9,900 atmospheres). Today’s publication is also tarnished by the fact that the scientists behind the discovery, publishing their work in today’s issue of the journal Nature, have retracted a previous paper on room-temperature superconductivity because of its unconventional data-reduction methods.
The primary researcher, Ranga Dias—assistant professor in the departments of mechanical engineering and physics and astronomy at the University of Rochester—said the retracted research paper has since been revised to accommodate the criticisms and accusations. Like the original, the revised version is under peer review at Nature, Dias said.
Last fall, when the group’s previous paper (reporting similarly compelling results involving a much higher-pressure material inside a diamond anvil) was retracted, many criticisms and even allegations of misconduct dogged the team across the science press. “I think this is a real problem,” Jorge Hirsch, professor of physics at the University of California, San Diego, told Science at the time. “You cannot leave it as, ‘Oh, it’s a difference of opinion.’ ”
Contacted by Spectrum, Hirsch said his views today—alleging misconduct—have only strengthened since then. According to him, some of Dias’s group’s reported data was allegedly computer-generated—a feat that Hirsch’s team says they can reproduce out to seven-digit accuracy. “When you read the paper, it superficially looks great. ... [And] if this is true it is an incredible breakthrough, worthy of a Nobel Prize. But when you look more carefully several warning signals become apparent,” Hirsch said via email.
Venkat Viswanathan, associate professor of mechanical engineering at Carnegie Mellon University, in Pittsburgh, said the degree of controversy the retraction merited may have been overstated. “It was unfortunate what happened,” he said. “But a lot of people seized on it. If people took a serious look at the work itself and all that’s transpired since, I think the data is still solid. It’s still very attractive for superconductivity.”
Paul C.W. Chu, professor of physics and founding director at the Texas Center for Superconductivity at the University of Houston, said he has seen many claims of high-temperature superconductivity in his more than half century in the field. Many such claims did not pan out. (He has also, in his time, grabbed headlines for high-temperature superconductivity claims—in his case, claims that were true and advanced the field.)
Spectrum spoke with Chu hours after Dias's group had presented their findings to this year's March Meeting of the American Physical Society, the same meeting at which, in 1987, Chu had famously presented some of his own groundbreaking superconducting discoveries. Chu said he is especially cautious about the Dias group's background-subtraction methods. Background subtraction is not uncommon in the field, he said. But in this case, the signal is small compared with the noise. So, he said, "the background subtraction has to take place carefully."
Still, Chu continued, “It is a very nice experiment. This is definitely significant, if it is proven to be real.”
According to James Walsh, assistant professor of chemistry at the University of Massachusetts Amherst, some of the controversy behind the group’s findings may be related to the challenges posed by the medium itself. “High-pressure science imposes experimental difficulties that simply don’t exist with traditional methods,” he told Spectrum via email. “It is hard to overstate the skill and ingenuity of the high-pressure community that has made magnetism and resistivity data accessible at all.”
Because of the increased scrutiny occasioned by the Dias group’s publication history—as well as the outsized significance of the group’s new finding—Dias said that his team has abided by increased levels of transparency and repeatability.
“We’ve made an open-door policy,” Dias said. “We [allowed] everybody to come to our lab and see how we do the measurements. During the review process, we shared all our data with the referees.”
He added that in collecting data for their revised previous paper, the researchers collaborated with officials from Argonne and Brookhaven National Laboratories. “We did the measurements in front of a live audience,” Dias said. “They showed the superconducting transition. We are collaborating with both labs to understand the material properties and understand the exact structure of the material.” (A spokesperson for Argonne, contacted by Spectrum, said that U.S. Department of Energy policy prohibits them from speaking about research appearing in papers that their group did not author.)
The centerpiece material in the present research—the putative 10-kilobar superconductor—is sure to be the subject of a flurry of both controversy and at least short-term interest. The recipe for what the team calls “reddmatter” (a Star Trek reference) involves hydrogen, nitrogen and the 71st element on the periodic table, lutetium (Lu).
Carnegie Mellon’s Viswanathan said today’s discovery may represent the biggest gold rush on lutetium in the rare earth’s entire history. “He has singlehandedly spiked the metals index for this element,” he said of Dias.
Walsh, of the University of Massachusetts, expressed enthusiasm for the material itself—named for its ruby red hue in its high-pressure state. “The history of materials science has shown us that technological leaps can often be traced back to the announcement of a newly discovered material with outstanding properties,” he said via email. “It would be hard to argue that a result like this should not qualify.”
These microphotographs show the lutetium-nitrogen-hydrogen material (a.k.a. "reddmatter") that researchers report superconducts at high pressures. Curiously, at high pressures the previously blue material also turns ruby red. Ranga Dias/University of Rochester
Of course, a result like this also requires highly pressurized cells, which might only swap the cryogenic equipment required for present-day superconductors with a different kind of elaborate, expensive, and unwieldy roomful of hardware. Chu says he will be collaborating with researchers investigating ways to transform rare-earth materials like the lutetium nitrogen hydrogen compound into superconductors that require substantially less pressure.
“These high-pressure cells interfere with measurements, and if you talk about applications, it’s not practical,” he said. “We want to see if we can stabilize it without pressure.”
Such notions have parallels in other fields. In semiconductor engineering, strained-silicon transistors retain effective pressures in their lattice that are three or more times as great as the pressures involved in the present material.
Eva Zurek, professor of chemistry at the University at Buffalo in New York state, said independent confirmations of the Dias group’s work are essential. But if the finding is validated, then she anticipates a challenging but not impossible road to develop a material that can perform at something close to ambient pressures as well as temperatures.
“If [the new finding is] proven to be true,” she said via email, “then I believe it would be relatively straightforward to either find ways to bring Lu-N-H to normal pressure/temperature conditions, or develop technologies where it may be used at very mild pressures.”
Update 13 March: The story was updated to include new remarks from Prof. Jorge Hirsch, whose response to a request for comment arrived after this article was originally published.
A powerful trillion-watt laser fired at the sky can generate lightning rods in the air, guiding lightning strikes so that they do not cause havoc, a new study finds.
To date, the most common and effective form of protection against lightning is the lightning rod invented by Benjamin Franklin in 1752. These pointed electrically conductive metal rods intercept lightning strikes and guide their electric current safely to the ground.
However, a key drawback of a conventional lightning rod is that the radius of its area of protection is roughly equal to its height. Since there are practical limits to how tall one can build a lightning rod, rods may not prove useful for protecting large areas, including sensitive infrastructure such as airports, rocket launchpads, and nuclear power plants, says study senior author Jean-Pierre Wolf, a physicist at the University of Geneva.
“This is the first demonstration that lightning can be controlled by a laser.”
—Jean-Pierre Wolf, University of Geneva
Scientists first suggested using lasers to generate lightning rods in the air nearly 50 years ago. “The idea is to create a very long lightning rod with the laser,” Wolf says.
In the new study, researchers conducted experiments during the summer of 2021 at the top of Mount Säntis, which, at 2,502 meters above sea level, is the highest mountain in the Alpstein massif of northeastern Switzerland. The laser was activated every time storms were forecast between June and September, with air traffic closed over the area during these tests.
Wolf and his colleagues sought to protect a 124-meter transmitter tower equipped with a traditional lightning rod at the summit belonging to telecommunications provider Swisscom. This tower is struck by lightning about 100 times a year, and scientists had previously equipped it with multiple sensors to analyze these strikes.
Near the tower, the researchers installed a near-infrared laser the size of a large car. It fired picosecond (trillionth-of-a-second) pulses, each packing about a half joule of energy, roughly a thousand times a second, with a peak power of a terawatt (trillion watts). (It also shot a visible green beam to help show the laser’s path.)
“Imagine transporting a 10-ton laser to 2,500-meter altitude on a mountain with helicopters, making it run in very harsh conditions, tracking lightning in extreme weather like winds up to 200 kilometers per hour, heavy rain, hail, temperatures varying from -10 degrees to 20 degrees Celsius in the same day, and then, when it works, you get a massive lightning bolt some tens of meters next to you—and you’re so happy,” Wolf says.
The laser pulses can alter the refractive index of the air—the property of a material that determines how quickly light travels within it. This can make the air behave like a series of lenses.
After crossing this lensing air, the intense, short laser pulses can rapidly ionize and heat air molecules, expelling them from the path of the beam at supersonic speeds. This leaves behind a channel of low-density air for roughly a millisecond. These “filaments,” which can reach up to 100 meters in length, possess high electrical conductivity and can thus serve as lightning rods. The researchers could adjust the laser to create filaments up to a kilometer from the machine.
In experiments, the scientists created filaments above, but near, the tip of the tower’s lightning rod. This essentially boosted the rod’s height by at least 30 meters, extending its area of protection so that lightning would not strike parts of the tower otherwise outside the rod’s shelter, says study lead author Aurélien Houard, a research scientist at ENSTA Paris (the École nationale supérieure de techniques avancées).
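A back-of-the-envelope illustration shows why that extra height matters, using the rule of thumb quoted above (protection radius roughly equal to rod height). The rod height below is a made-up example, and real protection zones are more complicated.

import math

def protected_area_m2(height_m):
    # Rule of thumb from the article: protection radius ~ rod height.
    return math.pi * height_m ** 2

rod = 10.0              # hypothetical conventional rod, meters
extended = rod + 30.0   # laser filaments added at least 30 m of effective height

print(f"conventional rod: {protected_area_m2(rod):8.0f} m^2")
print(f"laser-extended:   {protected_area_m2(extended):8.0f} m^2")
print(f"coverage gain:    {protected_area_m2(extended) / protected_area_m2(rod):.0f}x")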
The laser operated for more than six hours during thunderstorms happening within three kilometers of the tower. The tower was hit by at least 16 lightning flashes, all of which streaked upward.
Four of these flashes occurred while the laser was operating. High-speed camera footage and radio and X-ray detectors showed the laser helped guide the course of these discharges. One of these guided strikes was recorded on camera; the footage revealed that the strike followed the laser path for nearly 60 meters.
During tests carried out on the summit of Mt. Säntis by Jean-Pierre Wolf and Aurélien Houard’s team, the scientists noted that lightning discharges followed laser beams for several dozen meters before reaching the Swisscom telecommunications tower (in red and white). Xavier Ravinet/UNIGE
“This is the first demonstration that lightning can be controlled by a laser,” Wolf says.
Although lab experiments had suggested that lasers could help guide lightning strikes, previous attempts over the past 20 or so years had failed to do so in the field. Wolf, Houard, and their colleagues suggest their new work may have succeeded because the pulse rate of their laser was hundreds of times as great as in prior attempts. The more pulses are used, the greater the chance that one might successfully intercept all of the activity leading up to a lightning flash. In addition, higher pulse rates are likely better at keeping filaments electrically conductive, they added.
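The arithmetic behind that argument is simple. The millisecond-scale precursor window below is our assumption for illustration, not a figure from the study.

# Expected number of laser pulses landing inside a short window of lightning
# precursor activity. The 10-ms window is an assumed, illustrative value.
window_s = 0.010
for rate_hz in (10, 1000):
    pulses = rate_hz * window_s
    print(f"{rate_hz:>5} Hz laser -> {pulses:4.1f} pulses per 10-ms window")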
Wolf noted their work is not geoengineering research. “We are not modifying the climate,” he says. “We deflect lightning to protect areas.”
In the long term, the scientists would like to use lasers to extend lightning rods by 500 meters. In addition, they would like to run experiments at sites such as airports and rocket launchpads, Wolf notes.
The researchers detailed their findings 16 January in the journal Nature Photonics.
Each January, the editors of IEEE Spectrum offer up some predictions about technical developments we expect to be in the news over the coming year. You’ll find a couple dozen of those described in the following special report. Of course, the number of things we could have written about is far higher, so we had to be selective in picking which projects to feature. And we’re not ashamed to admit, gee-whiz appeal often shaped our choices.
For example, this year’s survey includes an odd pair of new aircraft that will be taking to the skies. One, whose design was inspired by the giant airships of years past, is longer than a football field; the other, a futuristic single-seat vertical-takeoff craft powered by electricity, is about the length of a small car.
While some of the other stories might not light up your imagination as much, they highlight important technical issues the world faces—like the challenges of shifting from fossil fuels to a hydrogen-based energy economy or the threat that new plutonium breeder reactors in China might accelerate the proliferation of nuclear weapons. So whether you prefer reading about topics that are heavy or light (even lighter than air), you should find something here to get you warmed up for 2023.
This article appears in the January 2023 print issue.
Top Tech 2023: A Special Report
Preview exciting technical developments for the coming year.
Can This Company Dominate Green Hydrogen?
Fortescue will need more electricity-generating capacity than France.
Pathfinder 1 could herald a new era for zeppelins
A New Way to Speed Up Computing
Blue microLEDs bring optical fiber to the processor.
The Personal-Use eVTOL Is (Almost) Here
Opener’s BlackFly is a pulp-fiction fever dream with wings.
Baidu Will Make an Autonomous EV
Its partnership with Geely aims at full self-driving mode.
China Builds New Breeder Reactors
The power plants could also make weapons-grade plutonium.
Economics Drives a Ray-Gun Resurgence
Lasers should be cheap enough to use against drones.
A Cryptocurrency for the Masses or a Universal ID?
What Worldcoin’s killer app will be is not yet clear.
The company’s Condor chip will boast more than 1,000 qubits.
Vagus-nerve stimulation promises to help treat autoimmune disorders.
New satellites can connect directly to your phone.
The E.U.’s first exascale supercomputer will be built in Germany.
A dozen more tech milestones to watch for in 2023.
Planning for the return journey is an integral part of the preparations for a crewed Mars mission. Astronauts will require a total of about 50 tonnes of rocket propellant for the ascent vehicle that will lift them off the planet’s surface, including approximately 31 tonnes of oxygen. The less popular option is for crewed missions to carry the required oxygen themselves. But scientists are optimistic that it could instead be produced from the carbon dioxide–rich Martian atmosphere itself, using a system called MOXIE.
The Mars Oxygen ISRU (In-Situ Resource Utilization) Experiment is an 18-kilogram unit housed within the Perseverance rover on Mars. The unit is “the size of a toaster,” says Jeffrey Hoffman, professor of aerospace engineering at MIT. Its job is to electrochemically break down carbon dioxide collected from the Martian atmosphere into oxygen and carbon monoxide. It also tests the purity of the oxygen.
Between February 2021, when it arrived on Mars aboard the Perseverance, and the end of that year, MOXIE had several successful test runs. According to a review of the system by Hoffman and colleagues, published in Science Advances, it has demonstrated its ability to produce oxygen during both night and day, when temperatures can vary by over 100 ºC. The generation and purity rates of the oxygen also meet the requirements for producing rocket propellant and for breathing. The authors assert that a scaled-up version of MOXIE could produce the oxygen required for lift-off as well as for the astronauts to breathe.
Next question: How to power any oxygen-producing factories that NASA can land on Mars? Perhaps via NASA’s Kilopower fission reactors?
MOXIE is a first step toward a much larger and more complex system to support the human exploration of Mars. The researchers estimate a required generation rate of 2 to 3 kilograms per hour, compared with the current MOXIE rate of 6 to 8 grams per hour, to produce enough oxygen for lift-off for a crew arriving 26 months later. “So we’re talking about a system that’s a couple of hundred times bigger than MOXIE,” Hoffman says.
They calculated this rate by accounting for the eight months needed to get to Mars, followed by some time to set up the system. “We figure you’d probably have maybe 14 months to make all the oxygen.” Further, he says, the produced oxygen would have to be liquefied to be used as rocket propellant, something the current version of MOXIE doesn’t do.
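The article’s figures hang together, as a quick check shows (31 tonnes of oxygen, a 14-month production window, and MOXIE’s current 6-to-8-gram-per-hour output):

oxygen_kg = 31_000.0                 # oxygen needed for ascent, per the article
hours = 14 * 30.4 * 24               # ~14 months of continuous production
required_rate = oxygen_kg / hours    # kg per hour

moxie_rate = 0.007                   # ~7 g/h, midpoint of MOXIE's 6-8 g/h
print(f"required rate: {required_rate:.1f} kg/h")             # ~3 kg/h
print(f"scale-up factor: {required_rate / moxie_rate:.0f}x")  # a few hundred, as quoted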
MOXIE also currently faces several design constraints because, says Hoffman, a former astronaut, “our only ride to Mars was inside the Perseverance rover.” This limited the amount of power available to operate the unit, the amount of heat it could produce, and its volume and mass.
“MOXIE does not work nearly as efficiently as a stand-alone system that was specifically designed would,” says Hoffman. Most of the time, it’s turned off. “Every time we want to make oxygen, we have to heat it up to 800 ºC, so most of the energy goes into heating it up and running the compressor, whereas in a well-designed stand-alone system, most of the energy will go into the actual electrolysis, into actually producing the oxygen.”
However, there are still many kinks to iron out for the scaling-up process. To begin with, any oxygen-producing system will need lots of power. Hoffman thinks nuclear power is the most likely option, perhaps NASA’s Kilopower fission reactors. The setup and the cabling would certainly be challenging, he says. “You’re going to have to launch all of these nuclear reactors, and of course, they’re not going to be in exactly the same place as the [other] units,” he says. “So, robotically, you’re going to have to connect the electrical cables to bring power to the oxygen-producing unit.”
Then there are the solid-oxide electrolysis units, which Hoffman points out are carefully machined systems. Fortunately, the company that makes them, OxEon, has already designed, built, and tested a full-scale unit, a hundred times bigger than the one in MOXIE. “Several of those units would be required to produce oxygen at the quantities that we need,” Hoffman says.
He also adds that at present, there is no redundancy built into MOXIE. If any part fails, the whole system dies. “If you’re counting on a system to produce oxygen for rocket propellant and for breathing, you need very high reliability, which means you’re going to need quite a few redundant units.”
Moreover, the system has to be pretty much autonomous, Hoffman says. “It has to be able to monitor itself, run itself.” For testing purposes, every time MOXIE is powered up, there is plenty of time to plan. A full-scale MOXIE system, though, would have to run continuously, and for that it has to be able to adjust automatically to changes in the Mars atmosphere, whose density can vary by a factor of two over the year and between nighttime and daytime.
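As a loose illustration of that kind of autonomy (a hypothetical sketch, not NASA flight software; the toy flow model and gain are invented), a controller might trim compressor speed to hold CO2 mass flow steady as atmospheric density swings:

def trim_speed(speed, measured_flow, target_flow, gain=0.8):
    # Proportional trim: speed up when flow falls short, slow down when high.
    error = (target_flow - measured_flow) / target_flow
    return max(0.0, speed * (1.0 + gain * error))

speed, target = 1.0, 1.0
for density in (1.0, 0.5, 0.75):     # Mars air density can swing ~2x over a year
    for _ in range(20):              # let the loop settle at each density
        flow = speed * density       # toy model: flow ~ speed x density
        speed = trim_speed(speed, flow, target)
    print(f"relative density {density:.2f} -> compressor speed {speed:.2f}")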
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
Agilicious is a co-designed hardware and software framework tailored to autonomous, agile quadrotor flight, which has been developed and used since 2016 at the Robotics and Perception Group of the University of Zurich. Agilicious is completely open-source and open hardware and supports both model-based and neural-network-based controllers!
[ Agilicious ]
Flexiv’s adaptive robot masseur fuses force control, computer vision, and artificial intelligence to emulate the skill and dexterity of a human massage therapist.
[ Flexiv ]
Freely inspired by Jean de la Fontaine’s “The Wolf and the Lamb,” Coperni’s fashion show in Paris features Spot robots from Boston Dynamics alongside human beings. Unlike the original fable, written in the 17th century, which raises questions about the balance of power between the human groups that make up society, Coperni reinterprets the story and transposes it to the year 2023 with a positive vision of the future. The figure of the wolf is played by Spot robots, and the lambs by human beings. The fashion show tells the story of their meeting, their mutual taming, and their cohabitation. The show presents Coperni’s vision that there is neither a dominant nor a dominated; mankind and machine live in harmony.
[ Coperni ]
Meet the third-generation prototype Honda Autonomous Work Vehicle (AWV), a new category of capable work vehicle that can operate autonomously in a variety of dynamic work environments. The Honda AWV has the potential to bring greater efficiencies, enhanced workforce productivity, and better environmental performance to the construction industry, and to other industries seeking an autonomous off-road solution.
[ Honda ]
Skydio’s partnership with WeRobotics provides microgrant scholarships to Flying Labs across the globe, with Nepal Flying Labs being selected as a recipient for their project to create a digital twin of Changu Narayan Temple, the oldest temple in Kathmandu Valley, dating back to the 5th Century.
[ Skydio ]
This is perhaps the jauntiest gait I have ever seen in a humanoid robot.
[ GitHub ]
Interesting “autoloader” for Wing delivery drones.
[ Wing ]
In this video, we showcase how robots can learn from human experts to master complex task sequencing in various manufacturing processes, such as surface finishing and composite layup. We demonstrate how robots can learn the expert’s task sequencing policies for composite layup task and perform execution on a brand new test part.
[ USC Viterbi ]
We present a sim-to-real learning-based approach for real-world humanoid locomotion. We do not use state estimation, dynamics models, trajectory optimization, reference trajectories, or pre-computed gait libraries. Our controller is trained with large-scale model-free reinforcement learning on an ensemble of randomized environments in simulation and deployed to the real world in a zero-shot fashion. To the best of our knowledge, this is the first demonstration of a fully learning-based method for real-world full-sized humanoid locomotion.
[ Paper ]
Roboball? Roboball.
[ Texas A&M ]
This work addresses the problem of active collaborative localization of robot teams with unknown data association. In particular, it involves positioning a small number of identical unmanned ground vehicles (UGVs) at desired positions so that an unmanned aerial vehicle (UAV) can, through unlabelled measurements of UGVs, uniquely determine its global pose. This work enables robust localization in perceptually challenged GPS-denied environments, thus paving the road for large-scale multi-robot navigation and mapping.
[ UPenn ]
This work presents a multi-segment vine robot that can navigate complex paths without interacting with its environment. This is achieved by a new steering method that selectively actuates each single pouch at the tip, providing high degrees of freedom with few control inputs.
[ Paper ]
TBNET and DEEP Robotics deployed Jueying X20 quadruped robot with two high-precision surveying and mapping equipment. It has completed on-site data collection in the complex terrain of real scenes such as indoor and outdoor construction sites, electric tower groups, railways, underground garages, and ancient buildings.
[ DeepRobotics ]
Here’s a quick look at RoMeLa’s novel robotic limb EEMMMa (Elastic Extending Mechanism for Mobility and Manipulation), a long-reach steel tentacle that can both extend and bend. It can be used to deploy grappling hooks for climbing, and morph its shape to bend around obstacles. This can allow it to place cameras and retrieve samples from hard-to-reach places. The limb’s unique tape spring construction results in a versatile, lightweight, and compact system. This can enable future mobile robots to move easily and safely through highly unstructured terrain such as forests or cave systems.
[ RoMeLa ]
Microrobots are a revolutionary new technology that could change how we interact with the world around us. For the first time, a collaborative research team of electrical and computer engineers, with support from NSF, has installed electronic brains on solar-powered microbots the size of a human hair. One of the biggest challenges has been their small size: until now, it required external control from a computer or smartphone, limiting the bots’ range and making them difficult to manipulate remotely.
[ NSF ]
A team of scientists, engineers, and designers embark on an Arctic expedition to test space technology. The MIT Space Exploration Initiative expedition in Svalbard was not simply a space analog mission, but an experience to learn how to help enable better access to remote regions from the far corners of planet Earth, to the Moon, and Mars.
[ MIT ]
The Perseverance rover, which landed on Mars in February 2021, has the most advanced autonomous driving capability ever flown to Mars. Having such an advanced capability contributes to the rover’s challenging mission to discover signs of life that may have existed on Mars in a distant past. This talk provides an overview on the current research and development efforts on robotics autonomy at JPL, with an emphasis on enhancing the safety, efficiency, and performance of robotic mobility through the applications of risk-aware decision making and machine learning.
[ JPL ]
An interactive webinar discussing the potential and the path toward general-purpose robotic manipulation. This will be the first in a series, hosted by Northwestern’s Center for Robotics and Biosystems, devoted to this grand challenge of robotics.
[ Northwestern ]
Uncrewed combat aerial vehicles, or attack drones, have become a common feature of the modern battlefield. Russia has deployed them to terrorise civilians in Ukraine and disable essential infrastructure, and Ukraine has also relied heavily on drones for attack, reconnaissance and surveillance. But these aren’t the only ‘killer robots’ that armies are utilising. Josh Toussaint-Strauss explores how the weaponisation of general-purpose robots and the development of a wider array of advanced mobile robotics and AI-powered machines are changing the dynamics of modern warfare, in ways that have prompted leading robotics companies and the UN to raise the alarm and call for greater restrictions.
‘Part of the kill chain’: how can we control weaponised robots?
Just a few miles from Ukraine’s southern frontline, Russian missiles have been pummelling a village near Zaporizhzhia and have turned a newly refurbished medical clinic into a ravaged, abandoned shell. The Guardian’s Luke Harding and Christopher Cherry meet civilians who have refused to flee the bombardment and troops who are determined to fight back.
Update 5 Sept.: For now, NASA’s giant Artemis I remains on the ground after two launch attempts scrubbed by a hydrogen leak and a balky engine sensor. Mission managers say Artemis will fly when everything's ready—but haven't yet specified whether that might be in late September or in mid-October.
“When you look at the rocket, it looks almost retro,” said Bill Nelson, the administrator of NASA. “Looks like we’re looking back toward the Saturn V. But it’s a totally different, new, highly sophisticated—more sophisticated—rocket, and spacecraft.”
Artemis, powered by the Space Launch System rocket, is America’s first attempt to send astronauts to the moon since Apollo 17 in 1972, and technology has taken giant leaps since then. On Artemis I, the first test flight, mission managers say they are taking the SLS, with its uncrewed Orion spacecraft up top, and “stressing it beyond what it is designed for”—the better to ensure safe flights when astronauts make their first landings, currently targeted to begin with Artemis III in 2025.
But Nelson is right: The rocket is retro in many ways, borrowing heavily from the space shuttles America flew for 30 years, and from the Apollo-Saturn V.
Much of Artemis’s hardware is refurbished: Its four main engines, and parts of its two strap-on boosters, all flew before on shuttle missions. The rocket’s apricot color comes from spray-on insulation much like the foam on the shuttle’s external tank. And the large maneuvering engine in Orion’s service module is actually 40 years old—used on 19 space shuttle flights between 1984 and 1992.
“I have a name for missions that use too much new technology—failures.”
—John Casani, NASA
Perhaps more important, the project inherits basic engineering from half a century of spaceflight. Just look at Orion’s crew capsule—a truncated cone, somewhat larger than the Apollo Command Module but conceptually very similar.
Old, of course, does not mean bad. NASA says there is no need to reinvent things engineers got right the first time.
“There are certain fundamental aspects of deep-space exploration that are really independent of money,” says Jim Geffre, Orion vehicle-integration manager at the Johnson Space Center in Houston. “The laws of physics haven’t changed since the 1960s. And capsule shapes happen to be really good for coming back into the atmosphere at Mach 32.”
Roger Launius, who served as NASA’s chief historian from 1990 to 2002 and as a curator at the Smithsonian Institution from then until 2017, tells of a conversation he had with John Casani, a veteran NASA engineer who managed the Voyager, Galileo, and Cassini probes to the outer planets.
“I have a name for missions that use too much new technology,” he recalls Casani saying. “Failures.”
The Artemis I flight is slated for about six weeks. (Apollo 11 lasted eight days.) The ship roughly follows Apollo’s path to the moon’s vicinity, but then puts itself in what NASA calls a distant retrograde orbit. It swoops within 110 kilometers of the lunar surface for a gravity assist, then heads 64,000 km out—taking more than a month but using less fuel than it would in closer orbits. Finally, it comes home, reentering the Earth’s atmosphere at 11 km per second, slowing itself with a heatshield and parachutes, and splashing down in the Pacific not far from San Diego.
If all four quadruply redundant flight computer modules fail, there is a fifth, entirely separate computer onboard, running different code to get the spacecraft home.
“That extra time in space,” says Geffre, “allows us to operate the systems, give more time in deep space, and all those things that stress it, like radiation and micrometeoroids, thermal environments.”
There are, of course, newer technologies on board. Orion is controlled by two vehicle-management computers, each composed of two flight computer modules (FCMs) to handle guidance, navigation, propulsion, communications, and other systems. The flight control system, Geffre points out, is quad-redundant; if at any point one of the four FCMs disagrees with the others, it will take itself offline and, in a 22-second process, reset itself to make sure its outputs are consistent with the others’. If all four FCMs fail, there is a fifth, entirely separate computer running different code to get the spacecraft home.
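The voting scheme Geffre describes can be sketched in a few lines (a simplified illustration, not Orion’s actual flight software):

from collections import Counter

def fcm_vote(outputs):
    """Compare redundant flight-computer outputs; any module that
    disagrees with the consensus is flagged to take itself offline
    and reset, as described above."""
    consensus, _ = Counter(outputs).most_common(1)[0]
    dissenters = [i for i, out in enumerate(outputs) if out != consensus]
    return consensus, dissenters

# FCM 2 disagrees; it would go offline and spend ~22 seconds resetting.
consensus, offline = fcm_vote(["burn 2.0 s", "burn 2.0 s", "burn 5.0 s", "burn 2.0 s"])
print(consensus, "| modules resetting:", offline)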
Guidance and navigation, too, have advanced since the sextant used on Apollo. Orion uses a star tracker to determine its attitude, imaging stars and comparing them to an onboard database. And an optical navigation camera shoots Earth and the moon so that guidance software can determine their distance and position and keep the spacecraft on course. NASA says it’s there as backup, able to get Orion to a safe splashdown even if all communication with Earth has been lost.
But even those systems aren’t entirely new. Geffre points out that the guidance system’s architecture is derived from the Boeing 787. Computing power in deep space is limited by cosmic radiation, which can corrupt the output of microprocessors beyond the protection of Earth’s atmosphere and magnetic field.
Beyond that is the inevitable issue of cost. Artemis is a giant project, years behind schedule, started long before NASA began to buy other launches from companies like SpaceX and Rocket Lab. NASA’s inspector general, Paul Martin, testified to Congress in March that the first four Artemis missions would cost US $4.1 billion each—“a price tag that strikes us as unsustainable.”
Launius, for one, rejects the argument that government is inherently wasteful. “Yes, NASA’s had problems in managing programs in the past. Who hasn’t?” he says. He points out that Blue Origin and SpaceX have had plenty of setbacks of their own—they’re just not obliged to be public about them. “I could go on and on. It’s not a government thing per se and it’s not a NASA thing per se.”
So why return to the moon with—please forgive the pun—such a retro rocket? Partly, say those who watch Artemis closely, because it’s become too big to fail, with so much American money and brainpower invested in it. Partly because it turns NASA’s astronauts outward again, exploring instead of maintaining a space station. Partly because new perspectives could come of it. And partly because China and Russia have ambitions in space that threaten America’s.
“Apollo was a demonstration of technological virtuosity—to the whole world,” says Launius. “And the whole world knew then, as they know today, that the future belongs to the civilization that can master science and technology.”
Update 7 Sept.: Artemis I has been on launchpad 39B, not 39A as previously reported, at Kennedy Space Center.
Researchers have developed a range of analog and other unconventional machine learning systems in the expectation that they will prove vastly more energy efficient than today’s computers. But training these AIs to do their tasks has been a big stumbling block. Researchers at NTT Device Technology Labs and the University of Tokyo now say they’ve come up with a training algorithm (announced by NTT last month) that goes a long way toward letting these systems meet their promise.
Their results, established on an optical analog computer, represent progress towards obtaining the potential efficiency gains that researchers have long sought from “unconventional” computer architectures.
Modern AI programs use a biologically inspired architecture called an artificial neural network to execute tasks like image recognition or text generation. The strengths of the connections between artificial neurons, which control the outputs of the computation, must be modified, or trained, using standard algorithms. The most prominent of these algorithms is backpropagation, which updates the connection strengths to reduce the network’s errors while it processes trial data. Because adjustments to some parameters depend on adjustments to others, the computer must actively pass and route information.
As Spectrum has elsewhere explained, “Error backpropagation is like running inference in reverse, moving from the last layer of the network back to the first layer; weight update then combines information from the original forward inference run with these backpropagated errors to adjust the network weights in a way that makes the model more accurate.”
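That forward-then-backward pattern is easy to see in a minimal, self-contained example (a toy two-layer network on synthetic data, not the researchers’ code):

import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=(32, 8)), rng.normal(size=(32, 2))   # toy data
W1, W2 = 0.1 * rng.normal(size=(8, 16)), 0.1 * rng.normal(size=(16, 2))
lr = 0.1

for step in range(200):
    h = np.tanh(x @ W1)                  # forward inference, layer by layer
    err = h @ W2 - y                     # error at the output
    dW2 = h.T @ err                      # weight update, output layer
    dh = (err @ W2.T) * (1 - h ** 2)     # error backpropagated through W2
    W1 -= lr * (x.T @ dh) / len(x)
    W2 -= lr * dW2 / len(x)

print("loss after training:", float((err ** 2).mean()))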
Alternative computing architectures, which trade complexity for efficiency, often cannot perform the information passing required by the algorithm. As a consequence, the trained parameters of the network must be obtained from an independent physics simulation of the entire hardware setup and its information processing. But creating simulations of sufficient quality can itself be challenging.
“We found that it was very hard and tough to apply backpropagation algorithms to our device,” said Katsuma Inoue of NTT Device Technology Labs, one of the researchers involved in the study. “There always existed a gap between the mathematical model and the real device, owing to several factors, such as physical noise and inaccurate modeling.”
The difficulty of implementing backpropagation led the authors to study and implement an alternative training algorithm. It builds on an algorithm called direct feedback alignment (DFA), first introduced in a paper from 2016. That algorithm reduced the need to pass information during training and therefore the extent to which the physical system needs to be simulated. The authors’ new “augmented DFA” algorithm entirely removes the need for any detailed device simulation.
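The core trick in DFA, per the 2016 paper it builds on, is to deliver the output error to each layer through a fixed random matrix instead of backpropagating it through the network’s own weights. Here is a minimal sketch (the same toy setup as above, repeated so the snippet stands alone; it illustrates plain DFA, not the authors’ augmented variant):

import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=(32, 8)), rng.normal(size=(32, 2))
W1, W2 = 0.1 * rng.normal(size=(8, 16)), 0.1 * rng.normal(size=(16, 2))
B = 0.1 * rng.normal(size=(2, 16))       # fixed random feedback matrix, never trained
lr = 0.1

for step in range(200):
    h = np.tanh(x @ W1)
    err = h @ W2 - y
    dW2 = h.T @ err                      # output layer updates as in backprop
    dh = (err @ B) * (1 - h ** 2)        # DFA: error arrives via B, not W2.T
    W1 -= lr * (x.T @ dh) / len(x)
    W2 -= lr * dW2 / len(x)

print("loss after training:", float((err ** 2).mean()))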
To study and test the algorithm, they implemented it on an optical analog computer, in which the connections between the network’s neurons are represented not as digitally stored numbers but as the intensities of light traveling through a ring-shaped optical fiber.
“It’s an absolutely essential demonstration,” said Daniel Brunner of the FEMTO-ST Institute, a French public research organization. Brunner develops photonic unconventional computers of a similar sort used by the researchers in the study. “The beauty of this particular algorithm is that it is not too difficult to implement in hardware—which is why this is so important.”
Despite tech giants including Meta, Microsoft, and Nvidia investing billions of dollars in the development of the metaverse, it is still little more than a fantasy. Making it a reality is likely to require breakthroughs in a range of sectors such as storage, modeling, and communication.
To spur progress in the advancement of those technologies, the IEEE Standards Association has launched the Persistent Computing for Metaverse initiative. As part of the IEEE’s Industry Connections Program, it will bring together experts from both industry and academia to help map out the innovations that will be needed to make the metaverse a reality.
Although disparate virtual-reality experiences exist today, the metaverse represents a vision of an interconnected and always-on virtual world that can host thousands, if not millions, of people simultaneously. The ultimate goal is for the virtual world to become so realistic that it is almost indistinguishable from the real one.
Today’s technology is a long way from making that possible, says Yu Yuan, president of the IEEE Standards Association. The Institute spoke with Yuan to find out more about the initiative and the key challenges that need to be overcome. His answers have been edited for clarity.
The Institute: What is persistent computing?
Yu Yuan: I have been working in virtual reality and multimedia for more than 20 years; I just didn’t call my work metaverse. After metaverse became a buzzword, I asked myself, ‘What’s the difference between metaverse and VR?’ My answer is: persistence, or the ability to leave traces in a virtual world.
Persistent computing refers to the combination of all the technologies needed to support the development and operation of a persistent virtual world. In other words, a metaverse. There are different kinds of VR experiences, but many of them are one-time events. Similar to how video games work, every time a user logs in, the entire virtual world resets. But users in the metaverse can leave traces. For example, they can permanently change the virtual world by destroying a wall or building a new house. Those changes have to be long-lasting so there will be a meaningful virtual society or meaningful economy in that virtual world.
What are the key components that are required to make persistent computing possible?
Yuan: The first is storage. In most of today’s video games, users can destroy a building, only for it to be restored the next time the user logs in to the game. But in a persistent virtual world the current status of the virtual world needs to be stored constantly. Users can create or destroy something in that world and the next time they log in, those changes will still be there. These kinds of things have to be properly stored—which means a very large amount of data needs to be stored.
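One common way to get the kind of persistence Yuan describes (an illustrative sketch only; real metaverse platforms are far more elaborate, and the file name is invented) is an append-only log of world edits that is replayed whenever a user logs in:

import json

LOG = "world_log.jsonl"   # hypothetical storage file

def record(change):
    with open(LOG, "a") as f:             # append-only: edits are never lost
        f.write(json.dumps(change) + "\n")

def load_world():
    world = {}
    try:
        with open(LOG) as f:              # replay every edit at login
            for line in f:
                change = json.loads(line)
                if change["op"] == "build":
                    world[change["id"]] = change["object"]
                elif change["op"] == "destroy":
                    world.pop(change["id"], None)
    except FileNotFoundError:
        pass
    return world

record({"op": "build", "id": "house-1", "object": "stone house"})
record({"op": "destroy", "id": "wall-7"})
print(load_world())   # the changes survive into every future session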
It’s also important to support persistence from a modeling perspective because, as we can imagine, people will demand higher and higher quality experiences. To do this we need a larger scale in the future as well as finer granularity, or more detail, to make those virtual objects and environments more realistic.
“The metaverse is a truly long-term vision. We may need another 15 to 20 years, or even longer, to make it happen.”
That also requires technology to support upgrading the virtual world on the fly. For example, let’s say the building block of your virtual world would be at the brick level, but later, along with the advancement of technologies, we may be able to bring that detail level to grains of sand.
Along with that upgrade the buildings that users created before will have to be maintained. So that raises some challenges, like: How can we support modeling and operation of virtual worlds without interruption, providing continuous experiences for the users?
You say you need a lot more storage to maintain all the information. But does that just mean more powerful memory technologies, or is it more complicated than that?
Yuan: Larger storage capacity and lower power consumption will be necessary. Power consumption may be an especially important factor, because some people are concerned that the metaverse will consume lots of energy, making the entire thing unsustainable. But we also need to address some other issues.
Let’s say the ultimate goal for the metaverse is to be able to create virtual universes that are indistinguishable from a real physical universe. In order to simulate and store, for example, a million virtual atoms, how many physical atoms do we need? That is ultimately one of the questions we need to answer in terms of connecting the universe of atoms to the universe of bits. We will hit a wall in terms of how many physical atoms we need to create an equal or larger number of virtual atoms. That requires not only innovations in storage, computation, and communications technology in general but also some special innovations in modeling, and in engines dedicated to the metaverse. It could be some data-compression technology, but that’s just one of the directions we need to explore.
I think communications are equally important. Most people believe the metaverse means lots of users—which means we will definitely need innovations in communications to support real-time, massive user experiences. If we are talking about supporting a million users in a virtual city, we definitely need some disruptive innovations from the communications perspective. That’s also an integral component of persistent computing.
How will the issues that you’ve identified be solved?
Yuan: That’s part of the mission for the Persistent Computing for Metaverse Initiative. It serves as a platform for information exchange and discussions on the gaps in today’s existing technologies.
Maybe we already have most of the technologies in place, but we just need to find a way to integrate them together. Or maybe there are gaps where we need to do R&D on some particular subarea of technology. With this gap analysis, we will know what other innovations are needed—which could provide some direction for academia and industry.
The initiative plans to host events, publish white papers with its findings, and propose new standards.
A lot of the development in the space is happening internally in companies. Is there an appetite to collaborate, or is there a danger that everyone is racing to set up walled gardens?
Yuan: I wouldn’t say that it’s a danger, but I don’t think it’s efficient. That’s why I think standards will play a leading role to help pave the way for the metaverse. We need to develop standards to identify gaps and set up road maps for the industry. The industry will then have some basis for discussion and how we can work together to make this happen. Working together is also important so that companies aren’t reinventing wheels in different silos.
I think the metaverse will have a profound impact on all industries.
Is a lot of this pie-in-the-sky at the moment? Are we still a long way from persistent virtual worlds?
Yuan: The metaverse is a truly long-term vision. We may need another 15 to 20 years, or even longer, to make it happen. I believe the metaverse should be indistinguishable from our current universe, and to do that we need to address many grand challenges. Some of these include how to create a persistent virtual universe and how to make our perception realistic enough. Currently we are using XR [extended reality] devices, but eventually we may need innovations in brain-machine interface or neural interface technologies to be able to comprehensively take over the interface between our consciousness and the virtual world. But along with this long-term development there are also preliminary embodiments of the metaverse that can be useful and generate value for industry and for consumers.
The controversial decision to approve a new coalmine in Cumbria was met with dismay by UK environmental groups, with many wondering what it meant for a country that has pitched itself as a leader in the green energy revolution. But in the town of Whitehaven where the mine is to be situated, the feeling is very different, with vast support across the political spectrum. The Guardian’s Richard Sprenger travels to the Mirehouse estate, a short distance from the Woodhouse Colliery site, to find out what lies behind this positivity in the face of a profound climate crisis.
More than 2m households fell into fuel poverty last year, and in one community in north-east England many must make a daily choice between heating and eating. Video producers Maeve Shearlaw and Chris Cherry visited a centre in Shiremoor, North Tyneside, that is supporting people through the cost of living crisis, and saw how mouldy properties and prepayment meters are exacerbating problems for the most vulnerable people.
The 19-seater Dornier 228 propeller plane that took off into the cold blue January sky looked ordinary at first glance. Spinning its left propeller, however, was a 2-megawatt electric motor powered by two hydrogen fuel cells—the right side ran on a standard kerosene engine—making it the largest aircraft flown on hydrogen to date. Val Miftakhov, founder and CEO of ZeroAvia, the California startup behind the 10-minute test flight in Gloucestershire, England, called it a “historical day for sustainable aviation.”
Los Angeles–based Universal Hydrogen plans to test a 50-seat hydrogen-powered aircraft by the end of February. Both companies promise commercial flights of retrofitted turboprop aircraft by 2025. French aviation giant Airbus is going bigger with a planned 2026 demonstration flight of its iconic A380 passenger airplane, which will fly using hydrogen fuel cells and by burning hydrogen directly in an engine. And Rolls-Royce is making headway on aircraft engines that burn pure hydrogen.
The aviation industry, responsible for some 2.5 percent of global carbon emissions, has committed to net-zero emissions by 2050. Getting there will require several routes, including sustainable fuels, hybrid-electric engines, and battery-electric aircraft.
Hydrogen is another potential route. Whether used to make electricity in fuel cells or burned in an engine, it combines with oxygen to emit water vapor. If green hydrogen scales up for trucks and ships, it could be a low-cost fuel without the environmental issues of batteries.
Flying on hydrogen brings storage and aircraft-certification challenges, but aviation companies are doing the groundwork now for hydrogen flight by 2035. “Hydrogen is headed off to the sky, and we’re going to take it there,” says Amanda Simpson, vice president for research and technology at Airbus Americas.
The most plentiful element, hydrogen is also the lightest—key for an industry fighting gravity—packing three times the energy of jet fuel by weight. The problem with hydrogen is its volume. For transport, it has to be stored in heavy tanks either as a compressed high-pressure gas or a cryogenic liquid.
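Rough textbook numbers make the trade-off concrete (approximate lower heating values and densities; exact figures vary with conditions):

h2_mj_per_kg, jet_mj_per_kg = 120.0, 43.0    # energy per kilogram
lh2_kg_per_m3, jet_kg_per_m3 = 71.0, 800.0   # liquid hydrogen vs. jet fuel density

by_mass = h2_mj_per_kg / jet_mj_per_kg
by_volume = (h2_mj_per_kg * lh2_kg_per_m3) / (jet_mj_per_kg * jet_kg_per_m3)
print(f"hydrogen vs. jet fuel, by mass:   {by_mass:.1f}x")   # ~2.8x
print(f"hydrogen vs. jet fuel, by volume: {by_volume:.2f}x") # ~0.25x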
ZeroAvia is using compressed hydrogen gas, since it is already approved for road transport. Its test airplane had two hydrogen fuel cells and tanks sitting inside the cabin, but the team is now thinking creatively about a compact system with minimal changes to aircraft design to speed up certification in the United States and Europe. The fuel cells’ added weight could reduce flying range, but “that’s not a problem, because aircraft are designed to fly much further than they’re used,” says vice president of strategy James McMicking.
The company has backing from investors that include Bill Gates and Jeff Bezos; partnerships with British Airways and United Airlines; and 1,500 preorders for its hydrogen-electric power-train system, half of which are for smaller, 400-kilometer-range 9- to 19-seaters.
By 2027, ZeroAvia plans to convert larger, 70-seater turboprop aircraft with twice the range, used widely in Europe. The company is developing 5-MW electric motors for those, and it plans to switch to more energy-dense liquid hydrogen to save space and weight. The fuel is novel for the aviation industry and could require a longer regulatory approval process, McMicking says.
Next will come a 10-MW power train for aircraft with 100 to 150 seats, “the workhorses of the industry,” he says. Those planes—think Boeing 737—are responsible for 60 percent of aviation emissions. Making a dent in those with hydrogen will require much more efficient fuel cells. ZeroAvia is working on proprietary high-temperature fuel cells for that, McMicking says, with the ability to reuse the large amounts of waste heat generated. “We have designs and a technology road map that takes us into jet-engine territory for power,” he says.
Universal Hydrogen
Universal Hydrogen, which counts Airbus, GE Aviation, and American Airlines among its strategic investors, is placing bets on liquid hydrogen. The startup, “a hydrogen supply and logistics company at our core,” wants to ensure a seamless delivery network for hydrogen aviation as it catches speed, says founder and CEO Paul Eremenko. The company sources green hydrogen, turns it into liquid, and puts it in relatively low-tech insulated aluminum tanks that it will deliver via road, rail, or ship. “We want them certified by the Federal Aviation Administration for 2025, which means they can’t be a science project,” he says.
The cost of green hydrogen is expected to be on par with kerosene by 2025, Eremenko says. But “there’s nobody out there with an incredible hydrogen-airplane solution. It’s a chicken-and-egg problem.”
To crack it, Universal Hydrogen partnered with leading fuel-cell-maker Plug Power to develop a few thousand conversion kits for regional turboprop airplanes. The kits swap the engine in its streamlined housing (also known as nacelle) for a fuel-cell stack, power electronics, and a 2-MW electric motor. While the company’s competitors use batteries as buffers during takeoff, Eremenko says Universal uses smart algorithms to manage fuel cells, so they can ramp up and respond quickly. “We are the Nespresso of hydrogen,” he says. “We buy other people’s coffee, put it into capsules, and deliver to customers. But we have to build the first coffee machine. We’re the only company incubating the chicken and egg at the same time.”
This rendering of an Airbus A380 demonstrator flight (presently slated for 2026) reveals current designs on an aircraft that’s expected to fly using fuel cells and by burning hydrogen directly in the engine. Airbus
Fuel cells have a few advantages over a large central engine. They allow manufacturers to spread out smaller propulsion motors over an aircraft, giving them more design freedom. And because there are no high-temperature moving parts, maintenance costs can be lower. For long-haul aircraft, however, the weight and complexity of high-power fuel cells makes hydrogen-combustion engines appealing.
Airbus is considering both fuel-cell and combustion propulsion for its ZEROe hydrogen aircraft system. It has partnered with German automotive fuel-cell-maker Elring Klinger and, for direct combustion engines, with CFM International, a joint venture between GE Aviation and Safran. Burning liquid hydrogen in today’s engines is still expected to require slight modifications, such as a shorter combustion chamber and better seals.
Airbus is also evaluating hybrid propulsion concepts with a hydrogen-engine-powered turbine and a hydrogen-fuel-cell-powered motor on the same shaft, says Simpson, of Airbus Americas. “Then you can optimize it so you use both propulsion systems for takeoff and climb, and then turn one off for cruising.”
The company isn’t limiting itself to simple aircraft redesign. Hydrogen tanks could be stored in a cupola on top of the plane, pods under the wings, or a large tank at the back, Simpson says. Without liquid fuel in the wings, as in traditional airplanes, she says, “you can optimize wings for aerodynamics, make them thinner or longer. Or maybe a blended-wing body, which could be very different. This opens up the opportunity to optimize aircraft for efficiency.” Certification for such new aircraft could take years, and Airbus isn’t expecting commercial flights until 2035.
Conventional aircraft made today will be around in 2050 given their 25- to 30-year life-span, says Robin Riedel, an analyst at McKinsey & Co. Sustainable fuels are the only green option for those. He says hydrogen could play a role there, through “power-to-liquid technology, where you can mix hydrogen and captured carbon dioxide to make aviation fuel.”
Even then, Riedel thinks hydrogen will likely be a small part of aviation’s sustainability solution until 2050. “By 2070, hydrogen is going to play a much bigger role,” he says. “But we have to get started on hydrogen now.” The money that Airbus and Boeing are putting into hydrogen is a small fraction of aerospace, he says, but big airlines investing in hydrogen companies or placing power-train orders “shows there is desire.”
The aviation industry has to clean up if it is to grow, Simpson says. Biofuels are a stepping-stone, because they reduce only carbon emissions, not other harmful ones. “If we’re going to move towards clean aviation, we have to rethink everything from scratch and that’s what ZEROe is doing,” she says. “This is an opportunity to make not an evolutionary change but a truly revolutionary one.”
Did you know that ESA is researching human hibernation for long-distance spaceflight to Mars or beyond?
Hibernating astronauts could be the best way to save mission costs, reduce the size of spacecraft by a third and keep crew healthy on their way to Mars. An ESA-led investigation suggests that human hibernation goes beyond the realm of science-fiction and may become a game-changing technique for space travel.
When packing for a return flight to the Red Planet, space engineers account for around two years’ worth of food and water for the crew.
Torpor during hibernation is an induced state that reduces the metabolic rate of an organism. This ‘suspended animation’ is a common mechanism in animals that need to conserve energy.
Reducing the metabolic rate of a crew en route to Mars down to 25% of the normal state would dramatically cut down the amount of supplies and habitat size, making long-duration exploration more feasible.
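As a rough illustration of the savings (our own toy arithmetic, assuming consumables scale with metabolic rate and that torpor covers, say, 12 months of a 24-month round trip):

normal_months, torpor_months = 12, 12
baseline = normal_months + torpor_months            # 24 months at full metabolism
with_torpor = normal_months + 0.25 * torpor_months  # torpor months at 25 percent
print(f"consumables needed: {with_torpor / baseline:.0%} of baseline")  # ~62%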
Therapeutic torpor, the idea of putting humans into a state of hibernation, has been around in hospitals since the 1980s – doctors can induce hypothermia to reduce metabolism during long and complex surgeries. However, it is not an active reduction of energy and misses most of the advantages of torpor. Studies on hibernation for visiting other planets could offer new potential applications for patient care on Earth.
Animals hibernate to survive periods of cold and food or water scarcity, reducing their heart rate, breathing, and other vital functions to a fraction of their normal levels, while their body temperature drops close to the ambient temperature. Tardigrades, frogs, and reptiles are very good at it.
Lower testosterone levels seem to aid long hibernation in mammals; in humans, estrogens strongly regulate energy metabolism.
With the crew at rest for long periods, artificial intelligence will come into play during anomalies and emergencies.
The possibilities of hibernation for medical use are of particular interest to the European research community and could transform how we approach many severe illnesses.
Inducing torpor is already used in some medical environments, such as surgical theaters, to replace anesthesia in patients who are allergic to anesthetic drugs.
The step to space research is closer than you might think. Get involved with spaceflight research via https://www.esa.int/spaceflightAO. Find out about our commercial partnerships and opportunities in human and robotic exploration via https://www.esa.int/explorationpartners to run your research in microgravity as well.
An experimental, potentially revolutionary all-electric airplane designed by NASA will soon be taking its first test flight, which will mark a major milestone for battery-powered aviation. However, the program already appears destined to fall short of its lofty goal of exploiting the unique features of electric propulsion to rewrite the design rules for modern aircraft. Its time and funding have nearly run out.
Part of the agency’s storied X-plane program, the X-57 Maxwell set out with the ambitious goal of tackling two grand challenges in aerospace engineering simultaneously. Not only did it aim to show that an airplane could be powered entirely by electricity, it also planned to demonstrate the significant gains in efficiency and performance that could be made by switching from two large engines to many smaller ones evenly distributed across the wings—a configuration known as a “blown wing.”
The plan was to demonstrate both of these propositions through a series of increasingly advanced test vehicles. Ultimately though, the complexity of the first challenge, compounded by disruptions caused by the COVID-19 pandemic, saw timelines repeatedly pushed back. As a result, the project’s leaders say it no longer has the funding to progress to the latter stages of the program.
“It turned out to be actually a pretty tall order to work through all of those airworthiness, and qualification, and design challenges.”
—Sean Clarke, NASA
The first iteration of the X-57, a modified Tecnam P2006T light aircraft whose gas-powered engines have been replaced with electric motors, will take flight this coming spring or possibly summer. (As of early January, NASA is still unclear as to precisely when that maiden voyage will be. NASA officials Spectrum contacted could only narrow the timeframe down to “first half of 2023.”) That will be a significant achievement, making the X-57 one of just a handful of electrically powered aircraft to get off the ground. But the team say they plan to wrap up flight testing by the end of the year and will no longer be building more advanced designs featuring novel wing configurations and distributed propulsion, such as the blown wing.
“We tried to do a very ambitious thing. Trying to do a new type of airframe and a new motor project is not very typical, because those are both very, very challenging endeavors,” says Nick Borer, deputy principal investigator for the X-57 project at the NASA Langley Research Center. “The agency funds a lot of different things and they’ve been very generous with what they’ve provided to us. But there are priorities at the top and eventually, you’ve got to finish up.”
The project’s ultimate goal was to take advantage of the benefits of electric propulsion to reimagine the design of aircraft wings. For instance, in the case of that blown wing: the large number of motors and props on the leading edge force air at high speed over the wing, which can generate significant lift even at low airspeeds. This makes it possible to take off from shorter runways and can also allow you to shrink the size of the wing, reducing drag and boosting cruise efficiency.
The design is difficult to achieve with conventional combustion engines, because they are relatively heavy and become increasingly inefficient as they are scaled down in size. The same is not true of electric motors though, which means it’s relatively simple to switch from several large motors to many smaller ones distributed along the wing.
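The aerodynamic logic follows from the textbook lift equation, L = ½ρv²SC_L: blowing propeller wash over the wing raises the local airspeed v, and because lift grows with v², even a modest acceleration buys a lot of lift at takeoff. The numbers below are hypothetical, not X-57 values.

RHO = 1.225            # sea-level air density, kg/m^3

def lift_n(v_mps, area_m2, cl):
    return 0.5 * RHO * v_mps ** 2 * area_m2 * cl

v, S, CL = 30.0, 12.0, 1.8          # hypothetical takeoff speed, wing area, CL
plain = lift_n(v, S, CL)
blown = lift_n(1.4 * v, S, CL)      # suppose props speed flow over the wing by 40%
print(f"lift gain from blowing: {blown / plain:.2f}x")   # 1.4^2 ~ 1.96x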
The current iteration of the X-57, pictured here, is powered by two electric motors and is based at the NASA Armstrong Flight Research Center in California.Carla Thomas/NASA
The final design iteration of the X-57 was to have six small electrically powered propellers across the leading edge of each wing. The wings themselves would be only 40 percent of the size of a conventional P2006T wing. The design also featured two larger motors mounted on the tips of each wing, which would further reduce drag by counteracting the vortices normally produced at the wingtips. Because the high lift generated by the smaller propellers along the leading edge would be needed only at take-off, these were designed to fold up once at cruising altitude to further reduce drag.
“The whole idea of an X-plane is to do something that has never been done before, and so I think it is just normal to expect that there is a learning curve.”
—Sergio Cecutta, SMG Consulting
Altogether these aerodynamic innovations would slash the plane’s power consumption at cruise by as much as a third, according to Borer. Electric motors are also about three times as efficient as gasoline-burning ones in terms of their power-to-weight ratio, he adds, so combined, these design changes were expected to lead to a roughly fivefold reduction in energy requirements while flying at cruise speeds of around 280 kilometers per hour.
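Multiplying the article’s two figures out shows where “roughly fivefold” comes from:

aero = 1 / (1 - 1/3)   # cutting cruise power by a third -> 1.5x less energy
motor = 3.0            # motors ~3x better than combustion, per the article
print(f"combined reduction: {aero * motor:.1f}x")   # ~4.5x, i.e. roughly fivefold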
Switching to electric propulsion turned out to be more complicated than envisioned. The team had to completely redesign their battery packs in 2017 to avoid the risk of catastrophic fires. The high voltages and power levels required for electric aviation also posed significant complications, says Borer, requiring several iterations of the systems designed to protect components from electromagnetic interference.
Early on in the project they also found that state-of-the-art transistors able to withstand high power levels couldn’t tolerate the vibrations and temperatures involved in flight. This was resolved only recently by switching to a newer generation of silicon carbide MOSFET modules, says Sean Clarke, principal investigator for the X-57 project at the NASA Armstrong Flight Research Center in California. “It turned out to be actually a pretty tall order to work through all of those airworthiness, and qualification, and design challenges,” he says.
This led to delays that will mean the more ambitious goals of the project may not come to fruition, but Borer hopes that others will be able to pick up from where they left off. The team has been regularly publishing their findings and data as they’ve progressed, he says. They are also actively contributing to standards for electric aviation and are working with regulators to help develop aircraft certification processes. “We’re pushing out everything that we can,” says Borer.
The X-57’s custom-made battery packs installed in the aircraft’s cabin provide all the aircraft’s power, rather than the JET A/A-1 fuel that powers most aviation today. Lauren Hughes/NASA
This information sharing has already borne fruit. NASA’s main subcontractor for the project, California-based Empirical Systems Aerospace, has been able to commercialize the X-57’s battery pack design, and the agency has a technology-transfer agreement with Virginia-based electric-aircraft designer Electra, under which the team shared information on its aerodynamic innovations. Joby Aviation, the company NASA initially contracted to build the electric motors, has since developed its own electric vertical take-off and landing (eVTOL) vehicle and is today one of the leaders in the industry.
This is the beauty of a publicly funded effort like the X-57, says Sergio Cecutta, founder and partner at SMG Consulting, who covers the electric-aviation industry. Unlike a private development effort, he says, all of the advances and lessons that have come out of the project will be in the public domain and can spread throughout the industry. And while it may not have achieved its most ambitious goals, Cecutta says it has done exactly what was intended, which was to remove some of the roadblocks holding back electric aviation.
“The whole idea of an X-plane is to do something that has never been done before, and so I think it is just normal to expect that there is a learning curve,” he says. “In the end, you want to lay the groundwork for the industry to become successful, and I think on that metric, the X-57 has been a successful project.”
Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.
Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.
The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?
Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.
When you say you want a foundation model for computer vision, what do you mean by that?
Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.
What needs to happen for someone to build a foundation model for video?
Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.
Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.
It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.
Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.
“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI
I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.
I expect they’re both convinced now.
Ng: I think so, yes.
Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”
How do you define data-centric AI, and why do you consider it a movement?
Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
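As an illustration of that discipline in miniature (a sketch, not Ng’s or Landing AI’s actual tooling), the loop below holds a fixed model and spends each iteration on the data: train, find the examples the model gets wrong, and route them to a human for label review.

# Minimal data-centric loop: the model architecture stays fixed;
# each round directs human attention back to the data.
# The dataset and review step are hypothetical.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def data_centric_loop(X, y, rounds=3):
    model = LogisticRegression(max_iter=1000)  # architecture held fixed
    for r in range(rounds):
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=r)
        model.fit(X_tr, y_tr)
        wrong = (model.predict(X_val) != y_val).sum()
        # A human would now re-inspect these examples, fix inconsistent
        # labels, or collect more data like them, then rerun the loop.
        print(f"round {r}: {wrong} validation errors flagged for review")
    return model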
When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.
The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.
You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?
Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.
When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?
Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.
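Landing AI’s RetinaNet pipeline isn’t public, but the general pattern of starting from a pretrained model and adapting it to a small, consistently labeled set can be sketched as follows. This is a generic illustration, not their method; the folder layout and class names are assumptions.

# Generic fine-tuning sketch: adapt a pretrained vision model to ~50
# well-labeled images. Paths and classes are hypothetical.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("inspection_images/", transform=tfm)  # e.g. defect/no_defect
loader = torch.utils.data.DataLoader(data, batch_size=8, shuffle=True)

model = models.resnet18(weights="DEFAULT")  # the pretraining is the small piece
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new task head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # tune only the new head
loss_fn = nn.CrossEntropyLoss()
for epoch in range(10):
    for images, labels in loader:
        opt.zero_grad()
        loss_fn(model(images), labels).backward()
        opt.step()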
“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
—Andrew Ng
For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
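The core of such a tool can be simple. A minimal sketch, with a hypothetical annotation format: group labels by image and surface every image whose annotators disagree.

# Flag images whose labels disagree across annotators so they can be
# relabeled first. The (image_id, annotator, label) records are hypothetical.
from collections import defaultdict

annotations = [
    ("img_001", "annotator_a", "scratch"),
    ("img_001", "annotator_b", "scratch"),
    ("img_002", "annotator_a", "pit_mark"),
    ("img_002", "annotator_b", "discoloration"),  # disagreement
]

labels_by_image = defaultdict(set)
for image_id, _, label in annotations:
    labels_by_image[image_id].add(label)

inconsistent = [img for img, labels in labels_by_image.items() if len(labels) > 1]
print("relabel these first:", inconsistent)  # -> ['img_002']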
Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?
Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.
One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.
When you talk about engineering the data, what do you mean exactly?
Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.
For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
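That kind of error analysis amounts to slicing accuracy by metadata. A minimal sketch, with hypothetical condition tags:

# Slice accuracy by recording condition to decide where to collect more data.
from collections import defaultdict

# Each record: (prediction was correct, background condition)
results = [
    (True, "quiet"), (True, "quiet"), (True, "cafe"),
    (False, "car_noise"), (True, "car_noise"), (False, "car_noise"),
]

correct, total = defaultdict(int), defaultdict(int)
for ok, condition in results:
    total[condition] += 1
    correct[condition] += ok

for condition in total:
    print(condition, correct[condition] / total[condition])
# car_noise scores lowest -> collect more car-noise audio, not more of everything.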
What about using synthetic data, is that often a good solution?
Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
Do you mean that synthetic data would allow you to try the model on more data sets?
Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
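A sketch of that targeted approach, using simple image transforms as a stand-in for true synthetic generation; the folder layout is hypothetical.

# Expand only the weak class (pit marks) instead of the whole dataset.
# Random transforms stand in here for rendering or generative models.
import glob
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
])

for i, path in enumerate(glob.glob("defects/pit_mark/*.png")):
    img = Image.open(path).convert("RGB")
    for k in range(10):  # ten new variants per real pit-mark image
        augment(img).save(f"defects/pit_mark/synthetic_{i}_{k}.png")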
“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
—Andrew Ng
Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?
Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.
One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.
How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?
Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
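One common way to build such a drift flag (a sketch under assumed thresholds, not LandingLens internals) is a two-sample test comparing a feature’s recent distribution against its training-time reference.

# Flag data drift by comparing recent feature values against a
# training-time reference with a Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=5000)  # e.g., training-set image brightness
recent = rng.normal(0.4, 1.0, size=500)      # e.g., last shift's images

stat, p_value = ks_2samp(reference, recent)
if p_value < 0.01:  # illustrative threshold
    print("drift detected: review data, relabel if needed, retrain")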
In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?
So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.
Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.
Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?
Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.
This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.
Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.
But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.
How is AI currently being used to design the next generation of chips?
Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.
Heather Gorr. MathWorks
Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.
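As a concrete, if simplified, illustration of mining that historical data: an off-the-shelf anomaly detector can surface the manufacturing runs that look unlike the rest so someone can dig into the cause. The features here are hypothetical.

# Surface anomalous manufacturing runs for root-cause inspection.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: cycle time (s), chamber temperature (deg C), defect count
runs = rng.normal([60.0, 350.0, 2.0], [2.0, 5.0, 1.0], size=(1000, 3))
runs[::97] += [15.0, 20.0, 6.0]  # inject a few slow, hot, defect-heavy runs

flags = IsolationForest(contamination=0.02, random_state=0).fit_predict(runs)
suspects = np.where(flags == -1)[0]
print("inspect these runs for a proximate cause:", suspects[:10])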
What are the benefits of using AI for chip design?
Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
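The surrogate idea can be sketched in a few lines. Gaussian-process regression is one common choice, and the “simulation” below is a stand-in for a slow physics-based solve.

# Fit a cheap surrogate to a handful of expensive simulation runs,
# then do the Monte Carlo sweep on the surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(x):
    # Stand-in for a computationally expensive physics-based model.
    return np.sin(3 * x) + 0.5 * x

X_train = np.linspace(0, 2, 15).reshape(-1, 1)  # only 15 costly solves
y_train = expensive_simulation(X_train).ravel()

surrogate = GaussianProcessRegressor().fit(X_train, y_train)

samples = np.random.default_rng(0).uniform(0, 2, size=(100_000, 1))
predictions = surrogate.predict(samples)  # 100,000 cheap evaluations
print("mean response:", predictions.mean(), "worst case:", predictions.max())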
So it’s like having a digital twin in a sense?
Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune, trying different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.
So, it’s going to be more efficient and, as you said, cheaper?
Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.
We’ve talked about the benefits. How about the drawbacks?
Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it's not going to be as accurate as that precise model that we’ve developed over the years.
Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It's a case where you might have models to predict something and different parts of it, but you still need to bring it all together.
One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.
How can engineers use AI to better prepare and extract insights from hardware or sensor data?
Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
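For instance, synchronizing two sensors sampled at different rates and then looking at the frequency domain might start like this (a minimal sketch with synthetic signals):

# Resample two sensor streams onto a common timebase, then inspect
# the frequency domain of the combined signal.
import numpy as np
from scipy import signal

fs_a, fs_b = 1000, 800                                   # sensor rates, Hz
t_a = np.arange(0, 1, 1 / fs_a)
sensor_a = np.sin(2 * np.pi * 50 * t_a)                  # 50 Hz component
sensor_b = np.sin(2 * np.pi * 120 * np.arange(0, 1, 1 / fs_b))

sensor_b_sync = signal.resample(sensor_b, len(sensor_a))  # align sample counts

spectrum = np.abs(np.fft.rfft(sensor_a + sensor_b_sync))
freqs = np.fft.rfftfreq(len(sensor_a), d=1 / fs_a)
print("dominant frequencies (Hz):", freqs[spectrum.argsort()[-2:]])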
One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
What should engineers and designers consider when using AI for chip design?
Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.
How do you think AI will affect chip designers’ jobs?
Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.
How do you envision the future of AI and chip design?
Gorr: It's very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon.
“Armageddon is big and noisy and stupid and shameless, and it’s going to be huge at the box office,” wrote Jay Carr of the Boston Globe.
Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live.
DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kilograms, hitting at 22,000 km/h, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs.
“Maybe once a century or so, there’ll be an asteroid sizeable enough that we’d like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of planetary defense officer at NASA.
“If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.”
So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone.
The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—near-Earth objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium.
The DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIACube cubesat will fly in formation to take images of the impact. Johns Hopkins APL/NASA
But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test.
NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’s speed by perhaps a few millimeters per second.
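The expected nudge follows from conservation of momentum. A back-of-envelope sketch: the article does not give Dimorphos’s mass, so the figure below assumes a round 5 billion kilograms, and the ejecta-enhancement factor beta (debris blasted off the surface carries away extra momentum) is likewise assumed; treat the result as order-of-magnitude only.

# Order-of-magnitude estimate of the speed change from the impact.
m_dart = 600              # spacecraft mass, kg (from the article)
v_impact = 22_000 / 3.6   # 22,000 km/h converted to m/s
m_dimorphos = 5e9         # kg; assumed, not from the article
beta = 3                  # assumed ejecta-enhancement factor (~1-5 plausible)

delta_v = beta * m_dart * v_impact / m_dimorphos
print(f"speed change ~= {delta_v * 1000:.1f} mm/s")  # ~2.2 mm/s under these assumptions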
The DART spacecraft, a 1-meter cube with two long solar panels, is elegantly simple, equipped with a telescope called DRACO, hydrazine maneuvering thrusters, a xenon-fueled ion engine and a navigation system called SMART Nav. It was launched by a SpaceX rocket in November. About 4 hours and 90,000 km before the hoped-for impact, SMART Nav will take over control of the spacecraft, using optical images from the telescope. Didymos, the larger object, should be a point of light by then; Dimorphos, the intended target, will probably not appear as more than one pixel until about 50 minutes before impact. DART will send one image per second back to Earth, but the spacecraft is autonomous; signals from the ground, 38 light-seconds away, would be useless for steering as the ship races in.
The DART spacecraft separated from its SpaceX Falcon 9 launch vehicle 55 minutes after liftoff from Vandenberg Space Force Base, in California, on 24 November 2021. In this image from the rocket, the spacecraft had not yet unfurled its solar panels. NASA
What’s more, nobody knows the shape or consistency of little Dimorphos. Is it a solid boulder or a loose cluster of rubble? Is it smooth or craggy, round or elongated? “We’re trying to hit the center,” says Evan Smith, the deputy mission systems engineer at the Johns Hopkins Applied Physics Laboratory, which is running DART. “We don’t want to overcorrect for some mountain or crater on one side that’s throwing an odd shadow or something.”
So on final approach, DART will cover 800 km without any steering. Thruster firings could blur the last images of Dimorphos’s surface, which scientists want to study. Impact should be imaged from about 50 km away by an Italian-made minisatellite, called LICIACube, which DART released two weeks ago.
“In the minutes following impact, I know everybody is going to be high-fiving on the engineering side,” said Tom Statler, DART’s program scientist at NASA, “but I’m going to be imagining all the cool stuff that is actually going on on the asteroid, with a crater being dug and ejecta being blasted off.”
There is, of course, a possibility that DART will miss, in which case there should be enough fuel on board to allow engineers to go after a backup target. But an advantage of the Didymos-Dimorphos pair is that it should help in calculating how much effect the impact had. Telescopes on Earth (plus the Hubble and Webb space telescopes) may struggle to measure infinitesimal changes in the orbit of Dimorphos around the sun; it should be easier to see how much its orbit around Didymos is affected. The simplest measurement may be of the changing brightness of the double asteroid, as Dimorphos moves in front of or behind its partner, perhaps more quickly or slowly than it did before impact.
“We are moving an asteroid,” said Statler. “We are changing the motion of a natural celestial body in space. Humanity’s never done that before.”