RSS Rabbit
News that matters, fast.
Good luck, have news.
Happy scrolling!

Date/Time of Last Update: Mon Nov 28 09:00:31 2022 UTC




********** WORLD AFFAIRS **********



Western sanctions catch up with Russia’s wartime economy
Sat, 26 Nov 2022 02:00:02 EST
The establishment of a new council to coordinate military supplies reflects the Kremlin's grave concerns over the economy.
Match ID: 0 Score: 55.00 source: www.washingtonpost.com age: 2 days
qualifiers: 35.00 sanctions, 20.00 russia

Russia-Ukraine war live: Zelenskiy warns of new missile attacks; Russia may be preparing to leave Zaporizhzhia plant, nuclear chief says
Mon, 28 Nov 2022 08:34:45 GMT

Volodymyr Zelenskiy urges Ukraine to prepare for a fresh series of strikes; Ukraine’s state-run nuclear energy firm suggests Russian forces are ‘packing their bags’ to leave power plant

In case you missed it, hundreds of Ukrainians streamed out of Kherson city on Sunday to flee Russian shelling, two weeks after its recapture from Russian occupying forces prompted jubilant celebrations.

The liberation of Kherson marked a major battlefield gain for Kyiv – reconquered after the Russians retreated to the east bank of the Dnipro River. However, since then inhabitants have struggled with no water, heating and electricity, because Moscow’s troops destroyed thermal and power plants before they left.

Continue reading...
Match ID: 1 Score: 45.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia, 15.00 energy, 10.00 nuclear

Russia-Ukraine war at a glance: what we know on day 278 of the invasion
Mon, 28 Nov 2022 00:39:11 GMT

Zelenskiy says Russia planning new missile attacks; Ukraine’s nuclear energy chief suggests Russian forces may be preparing to leave Zaporizhzhia nuclear power plant

Continue reading...
Match ID: 2 Score: 45.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia, 15.00 energy, 10.00 nuclear

‘Extinction is on the table’: Jaron Lanier warns of tech’s existential threat to humanity
Sun, 27 Nov 2022 09:00:04 GMT

The American computer scientist, who coined the term ‘virtual reality,’ cautions against online ‘psychological operatives’

Jaron Lanier, the eminent American computer scientist, composer and artist, is no stranger to skepticism around social media, but his current interpretations of its effects are becoming darker and his warnings more trenchant.

Lanier, a dreadlocked free-thinker credited with coining the term “virtual reality”, has long sounded dire sirens about the dangers of a world over-reliant on the internet and at the increasing mercy of tech lords, their social media platforms and those who work for them.

Continue reading...
Match ID: 3 Score: 45.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 trump, 20.00 russia

Russian attacks on energy grid amount to genocide, says Ukraine
Sun, 27 Nov 2022 16:24:09 GMT
The prosecutor-general tells the BBC that Russian attacks are aimed at "the full Ukrainian nation".
Match ID: 4 Score: 35.00 source: www.bbc.co.uk age: 0 days
qualifiers: 20.00 russia, 15.00 energy

I Experienced Jack Smith's Zeal Firsthand. Will Trump Get the Same Treatment?
Wed, 23 Nov 2022 16:15:01 +0000

His handling of the ex-president will show whether Smith really is an aggressive prosecutor — or just aggressive against the powerless.



Match ID: 5 Score: 32.14 source: theintercept.com age: 4 days
qualifiers: 17.86 trump, 14.29 russia

The U.S.-China Chip Ban, Explained
Mon, 21 Nov 2022 17:28:29 +0000


It has now been over a month since the U.S. Commerce Department issued new rules that clamped down on the export of certain advanced chips—which have military or AI applications—to Chinese customers.

China has yet to respond—but Beijing has multiple options in its arsenal. It’s unlikely, experts say, that the U.S. actions will be the last fighting word in an industry that is becoming more geopolitically sensitive by the day.

This is not the first time that the U.S. government has constrained the flow of chips to its perceived adversaries. Previously, the United States has blocked chip sales to individual Chinese customers. In response to the Russian invasion of Ukraine earlier this year, the United States (along with several other countries, including South Korea and Taiwan) placed Russia under a chip embargo.


But none of these prior U.S. chip bans were as broad as the new rules, issued on 7 October. “This announcement is perhaps the most expansive export control in decades,” says Sujai Shivakumar, an analyst at the Center for Strategic and International Studies, in Washington.

The rules prohibit the sale, to Chinese customers, of advanced chips with both high performance (at least 300 trillion operations per second, or 300 teraops) and fast interconnect speed (generally, at least 600 gigabytes per second). Nvidia’s A100, for comparison, is capable of over 600 teraops and matches the 600 GB/s interconnect speed. Nvidia’s more-impressive H100 can reach nearly 4,000 trillion operations per second and 900 GB/s. Both chips, intended for data centers and AI training, cannot be sold to Chinese customers under the new rules.
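
To make the two-part test concrete, here is a minimal Python sketch of the threshold logic described above. It is a simplification, not the full legal test in the Commerce Department rules; the A100 and H100 figures are those quoted in this article, and the interconnect value for the China-specific A800 (discussed further down) is an illustrative assumption.

# Simplified sketch of the two thresholds named above: a chip is captured
# only if it clears BOTH the performance floor and the interconnect floor.
PERF_FLOOR_TERAOPS = 300        # trillion operations per second
LINK_FLOOR_GB_PER_S = 600       # gigabytes per second

def is_restricted(teraops: float, interconnect_gb_per_s: float) -> bool:
    """True if a chip meets both thresholds quoted in the article."""
    return (teraops >= PERF_FLOOR_TERAOPS
            and interconnect_gb_per_s >= LINK_FLOOR_GB_PER_S)

# Figures quoted in the article for Nvidia's data-center GPUs; the A800
# interconnect number is an assumption (the article only says it was cut down).
chips = {
    "A100": (600, 600),
    "H100": (4000, 900),
    "A800": (600, 400),   # assumed reduced interconnect
}

for name, (perf, link) in chips.items():
    status = "restricted" if is_restricted(perf, link) else "not restricted"
    print(f"{name}: {status}")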

Additionally, the rules restrict the sale of fabrication equipment if it will knowingly be used to make certain classes of advanced logic or memory chips. This includes logic chips produced at nodes of 16 nanometers or less (which the likes of Intel, Samsung, and TSMC have done since the early 2010s); NAND long-term memory integrated circuits with at least 128 layers (the state of the art today); or DRAM short-term memory integrated circuits produced at 18 nanometers or less (which Samsung began making in 2016).

Chinese chipmakers have barely scratched the surface of those numbers. SMIC switched on 14-nm mass production this year, despite facing existing U.S. sanctions. YMTC started shipping 128-layer NAND chips last year.

The rules restrict not just U.S. companies, but citizens and permanent residents as well. U.S. employees at Chinese semiconductor firms have had to pack up. ASML, a Dutch maker of fabrication equipment, has told U.S. employees to stop servicing Chinese customers.

Speaking of Chinese customers, most—including offices, gamers, designers of smaller chips—probably won’t feel the controls. “Most chip trade and chip production in China is unimpacted,” says Christopher Miller, a historian who studies the semiconductor trade at Tufts University.

The controlled sorts of chips instead go into supercomputers and large data centers, and they’re desirable for training and running large machine-learning models. Most of all, the United States hopes to stop Beijing from using chips to enhance its military—and potentially preempt an invasion of Taiwan, where the vast majority of the world’s semiconductors and microprocessors are produced.

In order to seal off one potential bypass, the controls also apply to non-U.S. firms that rely on U.S.-made equipment or software. For instance, Taiwanese or South Korean chipmakers can’t sell Chinese customers advanced chips that are fabricated with U.S.-made technology.

It’s possible to apply to the U.S. government for an exemption from at least some of the restrictions. Taiwanese fab juggernaut TSMC and South Korean chipmaker SK Hynix, for instance, have already acquired temporary exemptions—for a year. “What happens after that is difficult to say,” says Patrick Schröder, a researcher at Chatham House in London. And the Commerce Department has already stated that such licenses will be the exception, not the rule (although Commerce Department undersecretary Alan Estevez suggested that around two-thirds of licenses get approved).

More export controls may be en route. Estevez indicated that the government is considering placing restrictions on technologies in other sensitive fields—specifically mentioning quantum information science and biotechnology, both of which have seen China-based researchers forge major progress in the past decade.

The Chinese government has so far retorted with harsh words and little action. “We don’t know whether their response will be an immediate reaction or whether they have a longer-term approach to dealing with this,” says Shivakumar. “It’s speculation at this point.”

Beijing could work with foreign companies whose revenue in the lucrative Chinese market is now under threat. “I’m really not aware of a particular company that thinks it’s coming out a winner in this,” says Shivakumar. This week, in the eastern city of Hefei, the Chinese government hosted a chipmakers’ conference whose attendees included U.S. firms AMD, Intel, and Qualcomm.

Nvidia has already responded by introducing a China-specific chip, the A800, which appears to be a modified A100 cut down to meet the requirements. Analysts say that Nvidia’s approach could be a model for other companies to keep up Chinese sales.

There may be other tools the Chinese government can exploit. While China may be dependent on foreign semiconductors, foreign electronics manufacturers are in turn dependent on China for rare-earth metals—and China supplies the supermajority of the world’s rare earths.

There is precedent for China curtailing its rare-earth supply for geopolitical leverage. In 2010, a Chinese fishing boat collided with two Japanese Coast Guard vessels, triggering an international incident when Japanese authorities arrested the boat’s captain. In response, the Chinese government cut off rare-earth exports to Japan for several months.

Certainly, much of the conversation has focused on the U.S. action and the Chinese reaction. But for third parties, the entire dispute delivers constant reminders of just how tense and volatile the chip supply can be. In the European Union, home to less than 10 percent of the world’s microchips market, the debate has bolstered interest in the prospective European Chips Act, a plan to heavily invest in fabrication in Europe. “For Europe in particular, it’s important not to get caught up in this U.S.-China trade issue,” Schröder says.

“The way in which the semiconductor industry has evolved over the past few decades has been predicated on a relatively stable geopolitical order,” says Shivakumar. “Obviously, the ground realities have shifted.”


Match ID: 6 Score: 32.14 source: spectrum.ieee.org age: 6 days
qualifiers: 15.00 sanctions, 8.57 russia, 8.57 korea

Russia-Ukraine war live: UK confirms supply of missiles to Kyiv as Russian forces might be preparing to leave Zaporizhzhia nuclear plant
Sun, 27 Nov 2022 17:59:07 GMT

UK MoD says it has provided Brimstone 2 missiles to Ukraine; reports suggest there are signs troops could be getting ready to leave

The Belgian prime minister, Alexander De Croo, is on the second day of an unannounced visit to Ukraine.

Accompanied by the Belgian foreign minister, Hadja Lahbib, De Croo used the visit to announce additional Belgian support of around 37.4 million euros.

Continue reading...
Match ID: 7 Score: 30.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia, 10.00 nuclear

South Korea clings to North’s denuclearization, despite dwindling chances
Sat, 26 Nov 2022 01:00:16 EST
There's little optimism that negotiations over the issue will resume — and much concern that North Korea may soon carry out its first nuclear test since 2017.
Match ID: 8 Score: 30.00 source: www.washingtonpost.com age: 2 days
qualifiers: 20.00 korea, 10.00 nuclear

Twitter Allows Russian Officials to Share Antisemitic Cartoon of Zelenskyy
Wed, 23 Nov 2022 03:34:28 +0000

Elon Musk’s Twitter permitted a Russian Embassy account to post a cartoon that echoed Nazi propaganda in its depiction of Ukraine’s Jewish president, Volodymyr Zelenskyy.



Match ID: 9 Score: 25.71 source: theintercept.com age: 5 days
qualifiers: 14.29 trump, 11.43 russia

Elon Musk’s “Free Speech” Twitter Is Still Censoring DDoSecrets
Tue, 22 Nov 2022 17:00:14 +0000

Twitter has censored the website of nonprofit transparency collective Distributed Denial of Secrets for more than two years.



Match ID: 10 Score: 25.71 source: theintercept.com age: 5 days
qualifiers: 14.29 trump, 11.43 russia

Donald Trump ‘shied away from criticising Nick Fuentes’
Mon, 28 Nov 2022 07:00:20 GMT

Advisers wanted ex-president to distance himself from white supremacist with whom he dined but Trump feared alienating supporters – insiders

Donald Trump repeatedly refused to disavow the outspoken antisemite and white supremacist Nick Fuentes after they spoke over dinner at his Mar-a-Lago resort, rejecting the advice from advisers over fears he might alienate a section of his base, two people familiar with the situation said.

The former US president was urged publicly and privately to denounce Fuentes in the aftermath of the dinner, which included the performer Ye, previously known as Kanye West, who has also recently been propagating antisemitic remarks.

Continue reading...
Match ID: 11 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 trump

Elon Musk's Twitter is fast proving that free speech at all costs is a dangerous fantasy | Nesrine Malik
Mon, 28 Nov 2022 06:00:18 GMT

Reinstating the likes of Donald Trump and Kanye West looks likely to turn the social media site into an extremist ghetto

Free speech absolutists are like the cocky audience of a spectator sport – they think they could do better than the players, if they were just allowed a crack at it. To them, speech should be as free as possible, period. Nowhere is their oversimplification of the issue more evident than on social media, where abuse and disinformation have created a new frontier of regulation – and with it a cohort of disingenuous free speech warriors.

These absolutists are so unaccustomed to facing consequences for their actions that they have pushed the idea that a censoring “woke” orthodoxy now prevails, and is a threat to freedom of expression. Elon Musk is among them, but since his takeover of Twitter he is having to learn quickly that free speech is not simply about saying whatever you want, unchecked, but about negotiating complicated compromises.

Nesrine Malik is a Guardian columnist

Do you have an opinion on the issues raised in this article? If you would like to submit a response of up to 300 words by email to be considered for publication in our letters section, please click here.

Continue reading...
Match ID: 12 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 trump

World Cup 2022: Niclas Fullkrug comes up trumps to keep German hopes alive
Sun, 27 Nov 2022 22:57:31 GMT
Hansi Flick's wildcard Niclas Fullkrug comes up trumps as Germany cling on to their World Cup hopes with a draw against Spain
Match ID: 13 Score: 25.00 source: www.bbc.co.uk age: 0 days
qualifiers: 25.00 trump

Republican says Trump ‘empowering’ extremists by having dinner with white supremacist
Sun, 27 Nov 2022 18:32:36 GMT

‘It’s very troubling,’ says Arkansas governor who condemned Trump’s meeting with anti-semite Nick Fuentes

The Republican governor of Arkansas on Sunday joined a chorus of criticism of Donald Trump for having dinner with American white supremacist and anti-semite Nick Fuentes, accusing the former US president of effectively “empowering” such extremists.

“It’s very troubling and it should not happen,” Asa Hutchinson told CNN’s State of the Union show on Sunday morning, becoming the most senior Republican politician to condemn the meeting.

Continue reading...
Match ID: 14 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 trump

A Special Prosecutor Found Kevin Johnson’s Case Was Tainted by Racism. Missouri Is About to Kill Him Anyway.
Sun, 27 Nov 2022 15:15:12 +0000

Kevin Johnson is facing execution for killing a cop when he was 19. A special prosecutor says his sentence should be vacated.



Match ID: 15 Score: 25.00 source: theintercept.com age: 0 days
qualifiers: 25.00 trump

Covering a cylinder with a magnetic coil triples its energy output in nuclear fusion test
2022-11-27T10:27:42+00:00
Submitted by /u/Sorin61

Match ID: 16 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 15.00 energy, 10.00 nuclear

Trump criticized for dining with far-right activist Nick Fuentes and rapper Ye
Fri, 25 Nov 2022 18:26:07 EST
The former president dined with Nick Fuentes and hip-hop artist Ye earlier this week
Match ID: 17 Score: 25.00 source: www.washingtonpost.com age: 2 days
qualifiers: 25.00 trump

Writer E. Jean Carroll sues Trump under new N.Y. sexual assault law
Fri, 25 Nov 2022 10:49:05 EST
The Adult Survivors Act gives adult sexual assault survivors up to one year to file a lawsuit, regardless of when the alleged violation happened.
Match ID: 18 Score: 25.00 source: www.washingtonpost.com age: 2 days
qualifiers: 25.00 trump

Former surgeon general faces his wife’s cancer — and the ‘Trump Effect’
Fri, 25 Nov 2022 06:00:00 EST
Jerome Adams and his wife want to tell their own story about melanoma and prevention.
Match ID: 19 Score: 25.00 source: www.washingtonpost.com age: 2 days
qualifiers: 25.00 trump

Can America’s Aging Leadership Deliver the Future?
Fri, 25 Nov 2022 11:00:00 +0000
The Political Scene’s Washington roundtable discusses whether the United States is a gerontocracy, and what that means for the country’s politics.
Match ID: 20 Score: 25.00 source: www.newyorker.com age: 2 days
qualifiers: 25.00 trump

Once a Year, We Gather and Blame Those Most Deserving
Thu, 24 Nov 2022 11:00:00 +0000
Wait, what am I talking about? Not once a year—every day!
Match ID: 21 Score: 21.43 source: www.newyorker.com age: 3 days
qualifiers: 21.43 trump

Ukraine live briefing: Zelensky warns citizens to brace for more Russian airstrikes
Mon, 28 Nov 2022 03:32:35 EST
Recent attacks have left civilian infrastructure in bad shape at the onset of winter. Despite repairs, scheduled stabilization blackouts are still in effect.
Match ID: 22 Score: 20.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 russia

Beaten, jailed, exiled and still taunting Putin: inside Pussy Riot’s filthy, furious show
Mon, 28 Nov 2022 08:00:20 GMT

They fled Russia disguised as food couriers. Now a major exhibition is celebrating the collective’s punky protest art, from a urine-splattered portrait of Putin to the cathedral gig that landed them in prison

The first thing you see is a framed portrait of Vladimir Putin propped against a table. The Russian leader looks like a secular icon, like Lenin in his mausoleum, seemingly incapable of human expression. But this being a video installation, there is more. Standing on the table is a figure in a long gown and orange balaclava, like Rasputin in women’s clothes, or a very unorthodox priest. The figure raises their skirts and a jet of urine spurts over the portrait.

Welcome to Reykjavík and to Velvet Terrorism, an exhibition tracing the decade-long history of Russian art collective Pussy Riot. “Is that you?” I ask Maria Alyokhina, AKA Masha, pointing at the masked urinator. The Pussy Riot co-founder has been showing me, over a video conferencing app, around the exhibition she and members of Icelandic art collective Kling & Bang (Dorothee Kirch, Ingibjörg Sigurjónsdóttir and Ragnar Kjartansson) are installing. Kjartansson, who earlier this year helped Alyokhina flee Russia, holds the phone and gives me a view of Alyokhina at work.

Continue reading...
Match ID: 23 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia

Weather tracker: temperatures plunge to -45C in parts of Russia
Mon, 28 Nov 2022 07:28:43 GMT

Siberian cold front forecast to push into China, Mongolia and then UK by Wednesday

A large area of high pressure currently covering the whole of eastern Europe is bringing severely low temperatures across Siberia. In a part of the world where temperatures are often below freezing at this time of year, the mercury has been 20C to 25C below average in areas over the weekend, with central and eastern Russia experiencing temperatures widely of -25C to -45C.

Over the next couple of days, westerly winds will result in temperatures rising above average in northern Russia but temperatures will remain 10C to 20C below normal in southern Russia. The orientation of the high pressure system will allow northerly winds to send the cold air southwards across central and eastern Asia over the next few days. This will see temperatures widely fall below average here too, with northern and eastern parts of China and Mongolia particularly affected. Temperatures here will fall 10C to 20C below normal by midweek as the same area of high pressure builds in and traps the cold air with temperatures struggling to rise above freezing by day.

Continue reading...
Match ID: 24 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia

Surgeons work by flashlight as Ukraine power grid battered
Mon, 28 Nov 2022 02:22:45 EST
Devastating Russian strikes in Ukraine have cut off power to many hospitals
Match ID: 26 Score: 20.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 russia

‘Russia wins by losing’: Timothy Snyder on raising funds for Ukrainian drone defence
Mon, 28 Nov 2022 05:00:15 GMT

Historian says Russia needs to lose its war in order to become a ‘normal’ European country – and stopping destruction of Ukraine’s power grid is essential

When the Yale historian Timothy Snyder was asked by Ukraine’s government to fundraise for the war effort, he considered a project to restore Chernihiv library. It would have been an obvious choice for the bestselling author, who has visited the ruined library – a gracious gothic terracotta structure that survived two world wars but was smashed to rubble in March by Russia’s 500kg bombs.

Yet he soon decided that a fundraiser for a library would be “kind of morally self indulgent”. When he asked his friends in Kyiv what was most urgently needed, nobody hesitated: anti-drone defence. “I thought I should do the thing which is most urgent now,” Snyder told the Guardian in a phone interview from the Yale campus. “The ruins of the library are going to be there. I can raise money for that later. But right now, what’s happening is that the Russians are trying to freeze millions of people out by destroying the power grid. And so what I should be trying to do is try to stop that.”

Continue reading...
Match ID: 27 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia

Ukraine live briefing: Electricity coming back online, Zelensky says; Russians may be fleeing power plant
Sun, 27 Nov 2022 21:05:17 EST
House Republicans, who will soon hold a slim majority, warned the Biden administration to expect tougher oversight of the $20 billion in military aid to Ukraine
Match ID: 28 Score: 20.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 russia

Rishi Sunak to set out pragmatic approach to Russia and China
Sun, 27 Nov 2022 22:35:24 GMT

Prime minister to make first major foreign policy speech, favouring a long-term, pragmatic attitude to Moscow and Beijing

Rishi Sunak will pledge an “evolutionary approach” to British foreign policy, arguing that states like Russia and China plan for the long term and the UK needs to follow suit as he attempts to set out his vision for the country’s place on the global stage.

In his first major foreign policy speech since becoming prime minister, he will draw on his years running the Treasury to say that the UK’s strength abroad must be underpinned by a strong economy at home as it stands up to competitors with “robust pragmatism”.

Continue reading...
Match ID: 29 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia

Football game in Turkey abandoned after fan assaults player with corner flag
Sun, 27 Nov 2022 21:39:28 GMT
  • Derby between Göztepe and Altay called off after 25 minutes
  • Six red cards for Zenit St Petersburg and Spartak Moscow

A derby game in Turkey was abandoned after a pitch invader violently attacked the visiting goalkeeper with a corner flag.

The second-tier match between the Anatolian rivals Göztepe and Altay was called off 25 minutes into the first half. Play had been stopped because fans were receiving medical treatment after disturbances in the stands, at which point a Göztepe supporter ran on to the pitch, picked up the corner flag and used it to attack the Altay goalkeeper, Ozan Evrim Ozenc, before being restrained by nearby players and security staff.

Continue reading...
Match ID: 30 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia

Anger grows as illegal construction partly blamed for landslide deaths on Italian island
Sun, 27 Nov 2022 21:10:15 GMT

Seven confirmed dead and search continues for people missing after Saturday’s disaster on Ischia

As rescuers continued to search for five people still missing after a catastrophic landslide in Ischia, anger was growing on the southern Italian island on Sunday over the years of rampant illegal construction that contributed to the disaster.

Seven people, including a three-week-old baby and a pair of young siblings, are confirmed to have died in Saturday’s landslide, which was triggered by a violent storm that sent mud and debris from Monte Epomeo, a 789-metre (2,590ft) peak, crashing into the hamlet of Casamicciola Terme. One victim – 32-year-old Eleonora Sirabella – has been named. The others, who include the infant boy’s parents, a five-year-old girl and her 11-year-old brother, a 31-year-old island resident and a Bulgarian tourist, have not yet been officially identified.

Continue reading...
Match ID: 31 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 italy

Newborn among 7 dead in landslide on Italy's Ischia island
Sun, 27 Nov 2022 15:01:05 EST
Search teams have recovered seven dead, including a three-week-old infant, buried in mud and debris that hurtled down a mountainside and through a densely populated port city on the resort island of Ischia, officials said Sunday
Match ID: 32 Score: 20.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 italy

Italy landslide: Five bodies found as rescue work continues
Sun, 27 Nov 2022 17:29:42 GMT
Heartbreaking accounts emerge of the last moments of victims of a landslide on the Italian island of Ischia.
Match ID: 33 Score: 20.00 source: www.bbc.co.uk age: 0 days
qualifiers: 20.00 italy

Hundreds of Ukrainians flee Kherson as Russian shelling intensifies
Sun, 27 Nov 2022 16:53:50 GMT

People continue to struggle with no water, heating and electricity two weeks after city was recaptured

Hundreds of Ukrainians streamed out of Kherson city on Sunday to flee Russian shelling, two weeks after its recapture from Russian occupying forces prompted jubilant celebrations.

The liberation of Kherson marked a major battlefield gain for Kyiv – reconquered after the Russians retreated to the east bank of the Dnipro River. However, since then inhabitants have struggled with no water, heating and electricity, because Moscow’s troops destroyed thermal and power plants before they left.

Continue reading...
Match ID: 34 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia

Alexei Navalny aide says his survival may depend on value to Vladimir Putin
Sun, 27 Nov 2022 15:01:11 GMT

Leonid Volkov warns Russian opposition leader’s health is at risk from indefinite solitary confinement

Alexei Navalny’s survival may depend on his value to Vladimir Putin as a future bargaining chip, his chief aide said, warning that the opposition leader’s health was at risk after being forced into indefinite solitary confinement.

Leonid Volkov, speaking on a visit to London, added that Navalny had lost access to his family and was being permanently detained in an “8 by 12ft” cell after the isolation decision by Russian authorities last week.

Continue reading...
Match ID: 35 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia

Belarusian Foreign Minister Vladimir Makei dies ‘suddenly,’ state says
Sun, 27 Nov 2022 08:15:50 EST
The Foreign Ministry provided no details about Makei's death. He was set to meet his Russian counterpart, Sergei Lavrov, this week.
Match ID: 36 Score: 20.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 russia

Kim Jong Un’s daughter emerges from secrecy for second time in days
Sun, 27 Nov 2022 06:49:05 EST
New photos released by North Korean state media show the leader with his young daughter at a missile launch event, fueling speculation over his motivation or possible succession plans.
Match ID: 37 Score: 20.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 korea

‘Soldiers without the uniforms’: Kherson couple recall anti-Russia resistance
Sun, 27 Nov 2022 09:22:27 GMT

A young couple who helped citizens and Kyiv’s army tell of torture and endurance during the Russian onslaught

Yulia and Roma* were a young couple in love when Russia invaded Ukraine on 24 February. They saw panic unfold in their southern city of Kherson, as food and medicine quickly disappeared from shelves and cancer patients – including Yulia’s grandfather - died from a lack of essential drugs.

The couple organised deliveries from friends and family, taking food and medicine around the city to those in need – people they found via word of mouth and social media. On their route they saw terrible things. A young teenage boy and an old man, who had died from the wave of an explosion after a strike on a shopping centre, lying face down, with no one who could collect them; a soldier who had been reduced to flesh strewn across the windshield of an army truck.

Continue reading...
Match ID: 38 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia

The big picture: Bruno Barbey captures life on the road in 1960s Palermo
Sun, 27 Nov 2022 07:00:14 GMT

The Magnum photographer’s image of a family in Sicily recalls Fellini and Visconti in its romantic depiction of everyday Italian life

Bruno Barbey chanced upon this family defying gravity on their dad’s scooter in Palermo in 1963. The French-Moroccan photographer had been travelling in Italy for a couple of years by then, restless for exactly this kind of image, with its seductive mix of humour and authenticity. Has there ever been a better articulation of contrasting roles in the patriarchal family? Father sitting comfortably in his jacket and cap and smiling for the camera, while behind him his possibly pregnant wife sees trouble ahead, as she and their three kids and their big checked bag compete for precarious discomfort.

Barbey, then 22, had gone to Italy to try to find pictures that captured “a national spirit” as the country sought to rediscover the dolce vita in cities still recovering from war. He travelled in an old VW van and in Palermo in particular he located scenes that might have been choreographed for the working-class heroes of the Italian neorealist films, the self-absorbed dreamers of Fellini and Visconti (The Leopard, the latter’s Hollywood epic set in Sicily, was released in the same year). Barbey’s camera with its wide-angle lens picked up the detail of vigorous crowd scenes among street children and barflies and religious processions. His book, The Italians, now republished, is a time capsule of that already disappearing black-and-white world of priests and mafiosi and nightclub girls and nuns.

Les Italiens (French edition) by Bruno Barbey is republished by delpire & co

Continue reading...
Match ID: 39 Score: 20.00 source: www.theguardian.com age: 1 day
qualifiers: 20.00 italy

Ukraine live briefing: European leaders in Kyiv for show of support during famine commemoration
Sat, 26 Nov 2022 20:12:16 EST
Electricity was restored to much of Ukraine, but 3 million customers remained without electricity, President Volodymyr Zelensky said Saturday
Match ID: 40 Score: 20.00 source: www.washingtonpost.com age: 1 day
qualifiers: 20.00 russia

Ukraine conflict: How much damage has Russia done to power supplies?
Sun, 27 Nov 2022 01:09:26 GMT
Russian air strikes have hit power supplies and health services, putting millions of lives at risk.
Match ID: 41 Score: 20.00 source: www.bbc.co.uk age: 1 day
qualifiers: 20.00 russia

The State of the Transistor in 3 Charts
Sat, 26 Nov 2022 16:00:01 +0000


The most obvious change in transistor technology in the last 75 years has been just how many we can make. Reducing the size of the device has been a titanic effort and a fantastically successful one, as these charts show. But size isn’t the only feature engineers have been improving.


In 1947, there was only one transistor. According to TechInsights’ forecast, the semiconductor industry is on track to produce almost 2 billion trillion (10^21) devices this year. That’s more transistors than were cumulatively made in all the years prior to 2017. Behind that barely conceivable number is the continued reduction in the price of a transistor, as engineers have learned to integrate more and more of them into the same area of silicon.


Scaling down transistors in the 2D space of the plane of the silicon has been a smashing success: Transistor density in logic circuits has increased more than 600,000-fold since 1971. Reducing transistor size requires using shorter wavelengths of light, such as extreme ultraviolet, and other lithography tricks to shrink the space between transistor gates and between metal interconnects. Going forward, it’s the third dimension, where transistors will be built atop one another, that counts. This trend is more than a decade old in flash memory, but it’s still in the future for logic (see “Taking Moore’s Law to New Heights.”)
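
As a quick back-of-the-envelope check on that 600,000-fold figure, the implied doubling cadence can be computed directly. The short sketch below assumes the comparison runs from 1971 to this article's 2022 publication; the variable names are illustrative.

import math

density_increase = 600_000   # factor quoted above for logic-circuit density
years = 2022 - 1971          # assumes the comparison ends at publication

doublings = math.log2(density_increase)   # about 19.2 doublings
years_per_doubling = years / doublings    # about 2.7 years per doubling

print(f"{doublings:.1f} doublings, roughly one every "
      f"{years_per_doubling:.1f} years")

That works out to a density doubling roughly every two to three years, in line with the classic Moore's Law cadence.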


Perhaps the crowning achievement of all this effort is the ability to integrate millions, even billions, of transistors into some of the most complex systems on the planet: CPUs. Here’s a look at some of the high points along the way.

What Transistors Have Become


Besides making them tiny and numerous, engineers have devoted their efforts to enhancing the device’s other qualities. Here is a small sampling of what transistors have become in the last 75 years:



Ephemeral:

Researchers in Illinois developed circuits that dissolve in the body using a combination of ultrathin silicon membranes, magnesium conductors, and magnesium oxide insulators. Five minutes in water was enough to turn the first generation to mush. But recently researchers used a more durable version to make temporary cardiac pacemakers that release an anti-inflammatory drug as they disappear.



Fast:

The first transistor was made for radio frequencies, but there are now devices that operate at about a billion times those frequencies. Engineers in South Korea and Japan reported the invention of an indium gallium arsenide high-electron mobility transistor, or HEMT, that reached a maximum frequency of 738 gigahertz. Seeking raw speed, engineers at Northrop Grumman made a HEMT that passed 1 terahertz.




Flat:

Today’s (and yesterday’s) transistors depend on the semiconducting properties of bulk (3D) materials. Tomorrow’s devices might rely on 2D semiconductors, such as molybdenum disulfide and tungsten disulfide. These transistors might be built in the interconnect layers above a processor’s silicon, researchers say. So 2D semiconductors could help lead to 3D processors.



Flexible:

The world is not flat, and neither are the places transistors need to operate. Using indium gallium arsenide, engineers in South Korea recently made high-performance logic transistors on plastic that hardly suffered when bent around a radius of just 4 millimeters. And engineers in Illinois and England have made microcontrollers that are both affordable and bendable.




Invisible:

When you need to hide your computing in plain sight, turn to transparent transistors. Researchers in Fuzhou, China, recently made a see-through analogue of flash memory using organic semiconductor thin-film transistors. And researchers in Japan and Malaysia produced transparent diamond devices capable of handling more than 1,000 volts.



Mnemonic:

NAND flash memory cells can store multiple bits in a single device. Those on the market today store either 3 or 4 bits each. Researchers at Kioxia Corp. built a modified NAND flash cell and dunked it in 77-kelvin liquid nitrogen. A single superchilled transistor could store up to 7 bits of data, or 128 different values.
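
The arithmetic behind that 128 is simply 2^n: a cell storing n bits must hold and reliably distinguish 2^n charge levels, as this small sketch illustrates (the 3- and 4-bit entries are the cells on the market today, per the paragraph above).

def levels_needed(bits_per_cell: int) -> int:
    # A cell storing n bits must distinguish 2**n distinct charge levels.
    return 2 ** bits_per_cell

for bits in (3, 4, 7):   # today's 3- and 4-bit cells, plus Kioxia's chilled 7-bit cell
    print(f"{bits} bits/cell -> {levels_needed(bits)} levels")
# prints: 3 -> 8, 4 -> 16, 7 -> 128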




Talented:

In 2018, engineers in Canada used an algorithm to generate all the possible unique and functional elementary circuits that can be made using just two metal-oxide field-effect transistors. The number of circuits totaled an astounding 582. Increasing the scope to three transistors netted 56,280 circuits, including several amplifiers previously unknown to engineering.



Tough:

Some transistors can take otherworldly punishment. NASA Glenn Research Center built 200-transistor silicon carbide ICs and operated them for 60 days in a chamber that simulates the environment on the surface of Venus—460 °C heat, a planetary-probe-crushing 9.3 megapascals of pressure, and the hellish planet’s corrosive atmosphere.

This article appears in the December 2022 print issue as “The State of the Transistor.”


Match ID: 42 Score: 20.00 source: spectrum.ieee.org age: 1 day
qualifiers: 20.00 korea

Fitting the World Cup into tiny Qatar
Sat, 26 Nov 2022 03:56:56 EST
Qatar is the smallest country ever to hold the tournament, dwarfed by prior hosts Russia and Brazil, and a future host: the entire continent of North America.
Match ID: 43 Score: 20.00 source: www.washingtonpost.com age: 2 days
qualifiers: 20.00 russia

Putin’s grip on regional allies loosens again after Armenia snub
Fri, 25 Nov 2022 14:08:46 GMT

Damaging optics of ‘family’ photo at CSTO summit highlight fragility of Russia’s influence in wake of war in Ukraine

Armenia has asked the French president, Emmanuel Macron, to chair peace talks with Azerbaijan in a fresh challenge to Vladimir Putin’s increasingly loose grip on Russia’s regional allies in the wake of the war in Ukraine.

The snub from a traditional ally to Putin, who had hosted an inconsequential meeting of the warring countries’ leaders last month, comes immediately on the back of his disastrous summit with six former Soviet states.

Continue reading...
Match ID: 44 Score: 20.00 source: www.theguardian.com age: 2 days
qualifiers: 20.00 russia

The Data Guy Who Got the Midterms Right
Thu, 24 Nov 2022 03:43:32 +0000

Tom Bonier of TargetSmart on how Republican polls were able to skew media predictions.



Match ID: 45 Score: 17.86 source: theintercept.com age: 4 days
qualifiers: 17.86 trump

What happened to the 10 Republicans who voted to impeach Trump?
Wed, 23 Nov 2022 21:01:18 EST
A look at the 10 House Republicans who voted to impeach Donald Trump
Match ID: 46 Score: 17.86 source: www.washingtonpost.com age: 4 days
qualifiers: 17.86 trump

Lisa Murkowski and Mary Peltola win Alaska races, defeating Trump-backed opponents
Wed, 23 Nov 2022 20:45:16 EST
In Alaska, Sen. Lisa Murkowski (R) and Rep. Mary Peltola (D) are projected to win their races, defeating Trump-backed opponents.
Match ID: 47 Score: 17.86 source: www.washingtonpost.com age: 4 days
qualifiers: 17.86 trump

Jan. 6 panel staffers angry at Cheney for focusing so much of report on Trump
Wed, 23 Nov 2022 16:40:49 EST
Fifteen former and current staffers expressed concerns that important findings unrelated to Trump will not become available to the American public.
Match ID: 48 Score: 17.86 source: www.washingtonpost.com age: 4 days
qualifiers: 17.86 trump

Musk breaks promise to form Twitter moderation panel, blames activists
Wed, 23 Nov 2022 16:50:09 +0000
Activists say they would never make such a deal with Musk.
Match ID: 49 Score: 17.86 source: arstechnica.com age: 4 days
qualifiers: 17.86 trump

Supreme Court clears way for Trump tax returns to go to Congress
Wed, 23 Nov 2022 11:18:52 EST
Lawmakers say they need Donald Trump’s tax returns from his time in office to help evaluate the effectiveness of annual presidential audits -- a premise Trump rejects.
Match ID: 50 Score: 17.86 source: www.washingtonpost.com age: 4 days
qualifiers: 17.86 trump

10 Things for Americans to Be Grateful for at Thanksgiving 2022
Wed, 23 Nov 2022 11:00:34 +0000

Elon Musk, functioning elections, “Andor,” and death — Americans can genuinely give thanks for all these things and more this holiday.



Match ID: 51 Score: 17.86 source: theintercept.com age: 4 days
qualifiers: 17.86 trump

No 10 set to allow new onshore wind projects in England in U-turn
Mon, 28 Nov 2022 08:38:35 GMT

Grant Shapps says there will be more onshore wind projects ‘where communities are in favour of it’

Downing Street appears set to allow new onshore wind projects in England following years of an effective ban, Grant Shapps has indicated, with ministers giving way in the face of a growing backbench Conservative rebellion.

Shapps, the business and energy secretary, said there would be more onshore wind projects “where communities are in favour of it”, which would mean the end of a de facto block on such projects since 2014 under planning rules.

Continue reading...
Match ID: 52 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Dodson ‘taken aback’ by Nationals’ call on Indigenous voice – as it happened
Mon, 28 Nov 2022 08:01:08 GMT

This blog is now closed

Coalition is alienating young voters, PM says

Over on ABC radio Melbourne, Anthony Albanese has been asked what he thinks about the Victorian election and the lessons for the Liberal party.

One of the things that we’re seeing, I believe is an alienation from younger voters from the Coalition.

When you have a position where you have senior members of the Coalition [who] can’t say that climate change is real in spite of the floods and bushfires and all of the evidence of the heating of the planet that we’re seeing, let alone any time something is put up to take action on climate change. They dismiss it.

[It] depends where you work. There will be some businesses, for example, which refuse to bargain with their staff where they used to, and the better-off-overall test became too complex. Getting rid of the red tape we got there will bring some of the businesses back to the table straight away.

Also, any businesses that are concerned, like ... that actually don’t want to be involved in multi-employer bargaining, the simple fix for them is for them to negotiate with their staff now.

Continue reading...
Match ID: 53 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Brazil look unified and have strength in depth to cope without Neymar | Juninho Pernambucano
Mon, 28 Nov 2022 07:30:18 GMT

The World Cup favourites showed enough in their win against Serbia to suggest they can perform well against Switzerland even without their star player

Brazil had a decent start to their 2022 World Cup. The first game of the tournament is always a little bit more difficult. An early wrong turn, a miscontrolled ball or a badly placed pass can dictate the story of the game. A team that wants to go on to win the tournament needs to realise that and manage the raw emotions and the overall energy of those first minutes. Brazil did that in their 2-0 win against Serbia. But let’s be careful. The three points do not guarantee anything, not even progress to the knockout stage. The players need to know that humility and respect are key here and you can never have too much of that.

Now comes a game against another European competitor, Switzerland. Football is exciting because we will never be able to fully understand it, but a win on Monday and early qualification for the second round is what the coach, Tite, is after. Then he could use the third game to rotate the squad somewhat and try a few different tactical things for the battles ahead.

Continue reading...
Match ID: 54 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Crypto will survive the FTX collapse – but more scandals will follow | Kenneth Rogoff
Mon, 28 Nov 2022 07:00:18 GMT

Bitcoin and ethereum prices have plummeted, but it is unlikely the US will ban cryptocurrencies soon

The epic collapse of wunderkind Sam Bankman-Fried’s $32bn (£27bn) crypto empire, FTX, looks set to go down as one of the great financial debacles of all time. With a storyline full of celebrities, politicians, sex and drugs, the future looks bright for producers of feature films and documentaries. But, to paraphrase Mark Twain, rumours of the death of crypto itself have been much exaggerated.

True, the loss of confidence in “exchanges” such as FTX – essentially crypto financial intermediaries – almost surely means a sustained steep drop in prices for the underlying assets. The vast majority of bitcoin transactions are done “off-chain” in exchanges, not in the bitcoin blockchain itself. These financial intermediaries are vastly more convenient, require much less sophistication to use and do not waste nearly as much energy.

Continue reading...
Match ID: 55 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Albanese government may need states to help with energy price cut amid ‘complex’ situation
Mon, 28 Nov 2022 06:43:29 GMT

Exclusive: It is expected Labor will adopt a suite of reforms to address an anticipated 56% increase in electricity prices

The Albanese government could struggle to provide comprehensive energy price relief, unless the governments of New South Wales and Queensland cooperate with a plan to temporarily cap the wholesale price of coal.

The energy minister, Chris Bowen, updated cabinet on Monday on the components of Labor’s long-telegraphed regulatory intervention in the energy sector.

Sign up for Guardian Australia’s free morning and afternoon email newsletters for your daily news roundup

Continue reading...
Match ID: 56 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

No strikers bad, two strikers good, as Spain and Germany share the spoils | Barney Ronay
Sun, 27 Nov 2022 22:17:29 GMT

There were no goals until Álvaro Morata and Niclas Füllkrug came on, changing the scoreline if not the outcome

Well, there’s a thing. Maybe there is something to be said for these so‑called experts after all. For 53 minutes of this 1-1 draw Germany and Spain played out a carefully hedged, engrossingly mannered game of football. This was a game of midfield squared, of Big Midfield Energy, a quiet debauchery of midfield.

Al Bayt Stadium is essentially a vast illuminated fibreglass tent dumped down in the desert scrub. It was packed here, or almost packed. But at times in the second half it was so quiet in the stands you could hear the air conditioning hiss. This is, it seems likely, not a sentence that has been typed before in a football report.

Continue reading...
Match ID: 57 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Britishvolt scraps plan for second factory in Canada
Sun, 27 Nov 2022 16:39:02 GMT

Troubled startup battery maker formally abandons long-shot plan to build 60GWh plant in Quebec

Battery startup Britishvolt has formally abandoned a long-shot plan to build a second factory in Canada, as it focuses on securing new funding for its struggling UK project.

Britishvolt’s main project is an attempt to build a factory near Blyth in north-east England capable of producing batteries with a capacity of 30 gigawatt hours (30GWh) every year.

Continue reading...
Match ID: 58 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

UK households have cut energy consumption by 10%, say suppliers
Sun, 27 Nov 2022 15:30:08 GMT

E.ON reports up to 15% drop as Grant Shapps writes to firms saying customers cutting back on energy use should not face direct debit rise

Britons have cut their gas and electricity use by more than 10% since October in the first evidence of the impact of the energy crisis on household habits, according to two of Britain’s biggest suppliers.

E.ON, Britain’s second-largest supplier, and the owner of Utility Warehouse have reported “double-digit” declines in recent weeks.

Continue reading...
Match ID: 59 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Criticism mounts of ‘climate killer’ modern art museum in Berlin
Sun, 27 Nov 2022 13:50:26 GMT

Calls for construction on Museum of the 20th Century to be halted until energy efficiency issues addressed

A vast modern art museum under construction in Berlin has been castigated by conservation experts and architecture critics for its poor environmental credentials, as the energy crisis intensifies scrutiny of the efficiency of new buildings.

The Museum of the 20th Century, designed by the Swiss star architects Herzog and de Meuron, is intended to propel the German capital into the top tier of world cities for modern art, competing with New York’s Moma and London’s Tate Modern.

Continue reading...
Match ID: 60 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Millions of households will be spending nearly third of income on fuel by spring
Sun, 27 Nov 2022 06:00:13 GMT

Findings come amid mounting evidence that the poorest people in the UK are paying a ‘poverty premium’ for basic services

Millions of households will be paying almost a third of their income in fuel costs this spring, amid warnings that a “black hole in provision” remains for Britain’s poorest families.

The vast majority of households in some vulnerable groups – including some 70% of pensioners – will be spending a tenth or more of their income on fuel from April, when support for energy costs will be reduced.

Continue reading...
Match ID: 61 Score: 15.00 source: www.theguardian.com age: 1 day
qualifiers: 15.00 energy

Video Friday: Turkey Sandwich
Fri, 25 Nov 2022 17:13:24 +0000


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today's videos!

Happy Thanksgiving, for those who celebrate it. Now spend 10 minutes watching a telepresence robot assemble a turkey sandwich.

[ Sanctuary ]

Ayato Kanada, an assistant professor at Kyushu University in Japan, wrote in to share "the world's simplest omnidirectional mobile robot."

We propose a palm-sized omnidirectional mobile robot with two torus wheels. A single torus wheel is made of an elastic elongated coil spring in which the two ends of the coil are connected to each other, and is driven by a piezoelectric actuator (stator) that can generate 2-degree-of-freedom (axial and angular) motions. The stator converts its thrust force and torque into longitudinal and meridian motions of the torus wheel, respectively, making the torus work as an omnidirectional wheel on a plane.

[ Paper ]

Thanks, Ayato!

This work, entitled "Virtually turning robotic manipulators into worn devices: opening new horizons for wearable assistive robotics", proposes a novel hybrid system using a virtually worn robotic arm in augmented reality and a real robotic manipulator servoed on that virtual representation. We basically aim at bringing an illusion of wearing a robotic system while its weight is fully deported. We believe that this approach could offer a solution to the critical challenge of weight and discomfort caused by robotic sensorimotor extensions (such as supernumerary robotic limbs (SRL), prostheses or handheld tools), and open new horizons for the development of wearable robotics.

[ Paper ]

Thanks, Nathanaël!

Engineers at Georgia Tech are the first to study the mechanics of springtails, which leap in the water to avoid predators. The researchers learned how the tiny hexapods control their jump, self-right in midair, and land on their feet in the blink of an eye. The team used the findings to build penny-sized jumping robots.

[ Georgia Tech ]

Thanks, Jason!

The European Space Agency (ESA) and the European Space Resources Innovation Centre (ESRIC) have asked European space industries and research institutions to develop innovative technologies for the exploration of resources on the Moon in the framework of the ESA-ESRIC Space Resources Challenge. As part of the challenge, teams of engineers have developed vehicles capable of prospecting for resources in a test-bed simulating the Moon's shaded polar regions. From 5 to 9 September 2022, the final of the ESA-ESRIC Space Resource Challenge took place at the Rockhal in Esch-sur-Alzette. On this occasion, lunar rover prototypes competed on a 1,800 m² 'lunar' terrain. The winning team will have the opportunity to have their technology implemented on the Moon.

[ ESA ]

Thanks, Arne!

If only cobots were as easy to use as this video from Kuka makes it seem.

The Kuka website doesn't say how much this thing costs, which means it's almost certainly not something that you impulse buy.

[ Kuka ]

We present the tensegrity aerial vehicle, a design of collision-resilient rotor robots with icosahedron tensegrity structures. With collision resilience and re-orientation ability, the tensegrity aerial vehicles can operate in cluttered environments without complex collision-avoidance strategies. These capabilities are validated by a test of an experimental tensegrity aerial vehicle operating with only onboard inertial sensors in a previously-unknown forest.

[ HiPeR Lab ]

The robotics research group Brubotics and the polymer science and physical chemistry group FYSC of the University of Brussels have together developed self-healing materials that can be scratched, punctured or completely cut through and then heal themselves back together, with the required heat or even at room temperature.

[ Brubotics ]

Apparently, the World Cup needs more drone footage, because this is kinda neat.

[ DJI ]

Researchers at MIT's Center for Bits and Atoms have made significant progress toward creating robots that could build nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots.

[ MIT ]

Researchers from North Carolina State University have recently developed a fast and efficient soft robotic swimmer that swims in a style resembling the human butterfly stroke. It can achieve a high average swimming speed of 3.74 body lengths per second, close to five times faster than the fastest similar soft swimmers, and also high power efficiency with a low cost of energy.

[ NC State ]

To facilitate sensing and physical interaction in remote and/or constrained environments, high-extension, lightweight robot manipulators are easier to transport and reach substantially further than traditional serial chain manipulators. We propose a novel planar 3-degree-of-freedom manipulator that achieves low weight and high extension through the use of a pair of spooling bistable tapes, commonly used in self-retracting tape measures, which are pinched together to form a reconfigurable revolute joint.

[ Charm Lab ]

SLURP!

[ River Lab ]

This video may encourage you to buy a drone. Or a snowmobile.

[ Skydio ]

Moxie is getting an update for the holidays!

[ Embodied ]

Robotics professor Henny Admoni answers the internet's burning questions about robots! How do you program a personality? Can robots pick up a single M&M? Why do we keep making humanoid robots? What is Elon Musk's goal for the Tesla Optimus robot? Will robots take over my job writing video descriptions...I mean, um, all our jobs? Henny answers all these questions and much more.

[ CMU ]

This GRASP on Robotics talk is from Julie Adams at Oregon State University, on “Towards Adaptive Human-Robot Teams: Workload Estimation.”

The ability for robots, be it a single robot, multiple robots or a robot swarm, to adapt to the humans with which they are teamed requires algorithms that allow robots to detect human performance in real time. The multi-dimensional workload algorithm incorporates physiological metrics to estimate overall workload and its components (i.e., cognitive, speech, auditory, visual and physical). The algorithm is sensitive to changes in a human’s individual workload components and overall workload across domains, human-robot teaming relationships (i.e., supervisory, peer-based), and individual differences. The algorithm has also been demonstrated to detect shifts in workload in real-time in order to adapt the robot’s interaction with the human and autonomously change task responsibilities when the human’s workload is over- or underloaded. Recently, the algorithm was used to post-hoc analyze the resulting workload for a single human deploying a heterogeneous robot swarm in an urban environment. Current efforts are focusing on predicting the human’s future workload, recognizing the human’s current tasks, and estimating workload for previously unseen tasks.
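
As a rough illustration of the kind of computation such an algorithm performs (a minimal sketch, not Adams's actual method; the component names are from the abstract, but the weights, normalization, and sample values below are invented for illustration), an overall workload score can be built from per-component estimates:

# Hypothetical sketch: combine normalized per-component workload estimates
# (cognitive, speech, auditory, visual, physical) into an overall score.
# Weights and sample values are illustrative only.
def estimate_workload(metrics, weights):
    components = {k: min(1.0, max(0.0, metrics[k])) * weights[k] for k in weights}
    overall = sum(components.values()) / sum(weights.values())
    return components, overall

sample = {"cognitive": 0.8, "speech": 0.3, "auditory": 0.5, "visual": 0.6, "physical": 0.2}
weights = {"cognitive": 2.0, "speech": 1.0, "auditory": 1.0, "visual": 1.5, "physical": 1.0}
print(estimate_workload(sample, weights))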

[ UPenn ]


Match ID: 62 Score: 15.00 source: spectrum.ieee.org age: 2 days
qualifiers: 15.00 energy

European Parliament declares Russia a terrorism sponsor, then its site goes down
Wed, 23 Nov 2022 20:59:00 +0000
Pro-Kremlin group called Killnet takes credit.
Match ID: 63 Score: 14.29 source: arstechnica.com age: 4 days
qualifiers: 14.29 russia

The long, tangled journey of a European rover to Mars takes another twist
Wed, 23 Nov 2022 16:59:09 +0000
"I am very glad to say that we have found a positive way forward."
Match ID: 64 Score: 14.29 source: arstechnica.com age: 4 days
qualifiers: 14.29 italy

Are We Doomed to See a Biden-Trump Rematch in 2024?
Wed, 23 Nov 2022 01:25:47 +0000
One thing’s sure: the early betting is often wrong—ask President Rand Paul.
Match ID: 65 Score: 14.29 source: www.newyorker.com age: 5 days
qualifiers: 14.29 trump

Georgia Voting Numbers Do Indeed Show Youth Surge
Tue, 22 Nov 2022 19:17:36 +0000

The number of young voters in Georgia more than doubled since 2014, counter to the narrative provided by David Shor.

The post Georgia Voting Numbers Do Indeed Show Youth Surge appeared first on The Intercept.


Match ID: 66 Score: 14.29 source: theintercept.com age: 5 days
qualifiers: 14.29 trump

Delving for Joules in the Fusion Mines
Tue, 22 Nov 2022 16:00:00 +0000


The Big Picture features technology through the lens of photographers.

Every month, IEEE Spectrum selects the most stunning technology images recently captured by photographers around the world. We choose images that reflect an important advance, or a trend, or that are just mesmerizing to look at. We feature all images on our site, and one also appears on our monthly print edition.

Enjoy the latest images, and if you have suggestions, leave a comment below.

Shot of Nuclear Fusion


A woman using robotic equipment in front of multiple screens.

An old saw regarding the multitude of dashed hopes about fusion energy’s promise goes “Fusion is 30 years away—and it always will be.” After decades of researchers predicting that fusion was just around the corner, a team at the UK Atomic Energy Authority (which hosts the Joint European Torus [JET] plasma physics experiment) did something that suggests scientists are homing in on exactly which corner that is. In February 2022, the JET experimenters induced the single greatest sustained energy pulse ever created by humans. It had twice the energy of the previous record-setting blast, triggered a quarter century earlier. A doubling every 25 years is far behind the pace of the microchip improvements described by Moore’s Law. But that hasn’t dampened enthusiasm over an alternative energy source that could make fossil fuels and their effect on the environment relics of a bygone era. In the foreground of the picture is a trainee learning how to use the systems involved in accomplishing the feat.

Leon Neal/Getty Images


A laser-cut, rice-cake-based drone.

Turning Drones into Scones

What has two wings, can reach a person stranded in a disaster zone, and doubles as a source of precious calories when no other food is available? This drone, designed and built by a team of researchers at the Swiss Federal Institute of Technology Lausanne (EPFL), has wings made entirely of laser-cut rice cakes held together with “glue” made from gelatin. The EPFL group says it plans to keep refining the edible aircraft to improve its aeronautics and enhance its nutritional profile.

EPFL


Green laser light illuminates a metasurface that is a hundred times as thin as paper.

Metasurface Weaves Entangled Photons

Creating the quantum mechanical state of entanglement (in which paired atoms influence each other from across vast distances) has heretofore been reminiscent of the story of Noah’s ark. The tried-and-true method for entangling photons (by shining light through a nonlinear crystal) puts them in this state two by two, the way the animals are said to have boarded the ark. The ambition of quantum researchers has been to expand these connections from pairs to parties. And it seems they’ve figured out how to reliably entangle multiple photons in a complicated web, using half-millimeter-thick metasurfaces covered with forests of microscopic pillars. This, say experts, will not only greatly simplify the setup needed for quantum technology but also help support more-complex quantum applications.

Craig Fritz


A large camera within a lab in Chile.

Colossal Camera Coming to Chile

In a world obsessed with miniaturization, it’s almost shocking when, every now and then, a big deal is made of something, er, big. That is certainly the case with the new camera being built for the Vera C. Rubin Observatory in Chile. When the camera is delivered and set up in May 2023, its 1.57-meter-wide lens will make it the world’s largest device for taking snapshots. The gargantuan point-and-shoot instrument will capture images of a swath of the sky seven times the width of the moon.

Jacqueline Ramseyer Orrell/SLAC National Accelerator Laboratory


A young woman with brown eyes and neck length hair dyed rose gold sits at a white table. In one hand she holds a carbon fiber robotic arm and hand. Her other arm ends near her elbow. Her short sleeve shirt has a pattern on it of illustrated hands.

Bionic Hands Haven’t Fully Grasped Users’ Needs

When we’re carrying out our quotidian activities, most of us rarely stop to think about what marvels of engineering our arms and hands are. But for those who have lost the use of a limb—or, like Britt Young, the woman pictured here, were born without one—there’s hardly ever a day when the challenges of navigating a two-handed world are not in the forefront of their thoughts. In Young’s October 2022 IEEE Spectrum cover story, she discusses these challenges, as well as how the bionic-hand technology intended to come to the rescue falls short of designers’ and users’ expectations.

Gabriela Hasbun. Makeup: Maria Nguyen for Mac Cosmetics; Hair: Joan Laqui for Living Proof


Match ID: 67 Score: 14.29 source: spectrum.ieee.org age: 5 days
qualifiers: 8.57 energy, 5.71 nuclear

‘Gold Hydrogen’ Is an Untapped Resource in Depleted Oil Wells
Thu, 24 Nov 2022 12:00:00 +0000
The fuel can be produced by adding bacteria to spent drill holes—meaning there are thousands of potential hydrogen sources worldwide.
Match ID: 68 Score: 12.86 source: www.wired.com age: 3 days
qualifiers: 12.86 energy

What is the energy price cap and what will happen to bills?
Thu, 24 Nov 2022 11:22:17 GMT
The energy bill support package will continue after April, but will be less generous.
Match ID: 69 Score: 12.86 source: www.bbc.co.uk age: 3 days
qualifiers: 12.86 energy

The Meaning of the Colorado Springs Attack
Tue, 22 Nov 2022 20:06:40 +0000
The essential precondition for mass violence is not guns or hate but a culture of terror, a common imaginary that includes the possibility of a mass shooting.
Match ID: 70 Score: 11.43 source: www.newyorker.com age: 5 days
qualifiers: 11.43 russia

Climate-tech startup Redaptive withdraws IPO
Wed, 23 Nov 2022 21:18:26 GMT

San Francisco-based Redaptive Inc. has scrapped plans to become public, according to a filing late Wednesday. The startup, which funds and installs energy-saving and energy-generating products in commercial buildings, filed for an initial public offering a year ago. As customary, the filing withdrawing the application did not detail reasons for the withdrawal. The company filed to sell $100 million worth of shares.

Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.


Match ID: 71 Score: 10.71 source: www.marketwatch.com age: 4 days
qualifiers: 10.71 energy

We now know why black hole jets make high-energy radiation
Wed, 23 Nov 2022 18:12:16 +0000
New data decisively favors shockwave-generated radiation in the jets.
Match ID: 72 Score: 10.71 source: arstechnica.com age: 4 days
qualifiers: 10.71 energy

Americans remain resistant to the lure of EVs, which are still unaffordable
Wed, 23 Nov 2022 16:37:36 +0000
The lack of cheaper electric vehicles is going to be a problem.
Match ID: 73 Score: 10.71 source: arstechnica.com age: 4 days
qualifiers: 10.71 energy

Can Any Republican Rival Take Down Donald Trump?
Tue, 22 Nov 2022 00:48:53 +0000
The only way for non-MAGA Republicans to consign the former President to history is for them to unite against him.
Match ID: 74 Score: 10.71 source: www.newyorker.com age: 6 days
qualifiers: 10.71 trump

Biden has appointed many judges but hasn’t recast the bench like Trump
Mon, 21 Nov 2022 10:46:39 EST
By keeping their Senate majority, Democrats can keep confirming judges. But thanks to the GOP’s 2015-2016 blockade, the makeup of the courts hasn’t shifted as substantially.
Match ID: 75 Score: 10.71 source: www.washingtonpost.com age: 6 days
qualifiers: 10.71 trump

Puzzling Out the Drone War Over Ukraine
Fri, 25 Mar 2022 12:26:23 +0000


In 2014, Ukrainian soldiers fighting in Crimea knew that the sight of Russian drones would soon be followed by a heavy barrage of Russian artillery. During that war, the Russian military integrated drones into tactical missions, using them to hunt for Ukrainian forces, whom they then pounded with artillery and cannon fire. Russian drones weren’t as advanced as those of their Western counterparts, but the Russian military’s integration of drones into its battlefield tactics was second to none.

Eight years later, the Russians are again invading Ukraine. And since the earlier incursion, the Russian military has spent approximately US $9 billion to domestically produce an armada of some 500 drones (a.k.a. unmanned aerial vehicles, or UAVs). But, astonishingly, three weeks into this invasion, the Russians have not had anywhere near their previous level of success with their drones. There are even signs that in the drone war, the Ukrainians have an edge over the Russians.

How could the drone capabilities of these two militaries have experienced such differing fortunes over the same period? The answer lies in a combination of trade embargoes, tech development, and the rising importance of countermeasures.

Since 2014’s invasion of Crimea, Russia’s drone-development efforts have lagged—during a time of dynamic evolution and development across the UAV industry.

First, some background. Military drones come in a wide variety of sizes, purposes, and capabilities, but they can be grouped into a few categories. On one end of the spectrum are relatively tiny flying bombs, small enough to be carried in a rucksack. On the other end are high-altitude drones, with wingspans up to 25 meters and capable of staying aloft for 30 or 40 hours, of being operated from consoles thousands of kilometers from the battlefield, and of firing air-to-surface missiles with deadly precision. In between are a range of intermediate-size drones used primarily for surveillance and reconnaissance.

Russia’s fleet of drones includes models in each of these categories. However, sanctions imposed after the 2014 invasion of Crimea blocked the Russian military from procuring some key technologies necessary to stay on the cutting edge of drone development, particularly in optics, lightweight composites, and electronics. With relatively limited capabilities of its own in these areas, Russia’s drone development efforts became somewhat sluggish during a time of dynamic evolution and development elsewhere.

Current stalwarts in the Russian arsenal include the Zala Kyb, which is a “loitering munition” that can dive into a target and explode. The most common Russian drones are midsize ones used for surveillance and reconnaissance. These include the Eleron-3SV and the Orlan-10 drones, both of which have been used extensively in Syria and Ukraine. In fact, just last week, an Orlan-10 operator was awarded a military medal for locating a site from which Ukrainian soldiers were ambushing Russian tanks, and also a Ukrainian basing area outside Kyiv containing ten artillery pieces, which were subsequently destroyed. Russia’s only large, missile-firing drone is the Kronshtadt Orion, which is similar to the American MQ-1 Predator and can be used for precision strikes as well as reconnaissance. An Orion was credited with an air strike on a command center in Ukraine in early March 2022.

Meanwhile, since the 2014 Crimea war, when they had no drones at all, the Ukrainians have methodically assembled a modest but highly capable set of drones. The backbone of the fleet, with some 300 units fielded, are the A1-SM Fury and the Leleka-100 reconnaissance drones, both designed and manufactured in Ukraine. The A1-SM Fury entered service in April 2020, and the Leleka-100, in May 2021.

On offense, the Ukrainian and Russian militaries are closely matched in the drone war. The difference is on defense.

The heavy hitter for Ukraine in this war, though, is the Bayraktar TB2 drone, a combat aerial flyer with a wingspan of 12 meters and an armament of four laser-guided bombs. As of the beginning of March, and after losing two TB2s to Russian-backed separatist forces in Lugansk, Ukraine had a complement of 30 of the drones, which were designed and developed in Turkey. These drones are specifically aimed at destroying tanks and as of 24 March had been credited with destroying 26 vehicles, 10 surface-to-air missile systems, and 3 command posts. Various reports have put the cost of a TB2 at anywhere from $1 million to $10 million. It’s much cheaper than the tens of millions fetched for better-known combat drones, such as the MQ-9 Reaper, the backbone of the U.S. Air Force’s fleet of combat drones.

The Ukrainian arsenal also includes the Tu-141 reconnaissance drones, which are large, high-altitude Soviet-era drones that have had little success in the war. At the small end of the Ukraine drone complement are 100 Switchblade drones, which were donated by the United States as part of the $800 million weapons package announced on 16 March. The Switchblades are loitering munitions similar in size and functionality to the Russian Zala Kyb.

The upshot is that on offense, the Ukrainian and Russian militaries are closely matched in the drone war. The difference is on defense: Ukraine has the advantage when it comes to counter-drone technology. A decade ago, counter-drone technology mostly meant using radar to detect drones and surface-to-air missiles to shoot them down. It quickly proved far too costly and ineffective. Drone technology advanced at a brisk pace over the past decade, so counter-drone technology had to move rapidly to keep up. In Russia, it didn’t. Here, again, the Russian military was hampered by technology embargoes and a domestic industrial base that has been somewhat stagnant and lacking in critical capabilities. For contrast, the combined industrial base of the countries supporting Ukraine in this war is massive and has invested heavily in counter-drone technology.

Russia has deployed electronic warfare systems to counter enemy drones and has likely been using the Borisoglebsk 2 MT-LB and R-330Zh Zhitel systems, which use a combination of jamming and spoofing. These systems fill the air with radio-frequency energy, raising the noise floor to a level at which the drone cannot distinguish control signals from the remote pilot. Another standard counterdrone technique is sending false signals to the drone, the most common being fake (“spoofed”) GPS signals, which disorient the flyer. Jamming and spoofing systems are easy to target because they emit radio-frequency waves at fairly high intensities. In fact, open-source images show that Ukrainian forces have already destroyed three of these Russian counterdrone systems.

The exact systems that have been provided to the Ukrainians are not publicly known, but it’s possible to make an educated guess from among the many systems available.

Additionally, some of the newer drones being used by the Ukrainians include features to withstand such electronic attacks. For example, when one of these drones detects a jamming signal, it switches to frequencies that are not being jammed; if it is still unable to reestablish a connection, the drone operates autonomously with a series of preset maneuvers until a connection can be reestablished.
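
A minimal sketch of that fallback logic (hypothetical channel list, thresholds, and function names; not the firmware of any particular drone) might look like this:

# Hypothetical sketch of a jam-resistant control link: hop to a clear channel,
# and fall back to preset autonomous maneuvers if no channel can be found.
CHANNELS = [2.410e9, 2.435e9, 2.460e9, 5.800e9]  # candidate frequencies in hertz

def link_ok(channel, jammed):
    return channel not in jammed

def maintain_link(current, jammed):
    if link_ok(current, jammed):
        return current, "piloted"
    for channel in CHANNELS:
        if link_ok(channel, jammed):
            return channel, "piloted"      # link re-established on a clear channel
    return None, "autonomous_preset"       # fly preset maneuvers and keep retrying

print(maintain_link(2.410e9, jammed={2.410e9, 2.435e9}))  # hops to the 2.46 GHz channel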

Meanwhile, Ukraine has access to the wide array of NATO counterdrone technologies. The exact systems that have been provided to the Ukrainians are not publicly known, but it’s possible to make an educated guess from among the many systems available. One of the more powerful ones, from Lockheed Martin, repurposes a solid-state, phased-array radar system developed to spot incoming munitions, to detect and identify a drone. The system then tracks the drone and uses high-energy lasers to shoot it down. Raytheon’s counterdrone portfolio includes similar capabilities along with drone-killing drones and systems capable of beaming high-power microwaves that disrupt the drone’s electronics.

While most major Western defense contractors have some sort of counterdrone system, there has also been significant innovation in the commercial sector, given the mass proliferation of commercial drones. While many of these technologies are aimed at smaller drones, some of the technologies, including acoustic sensing and radio-frequency localization, are effective against larger drones as well. Also, a dozen small companies have developed jamming and spoofing systems specifically aimed at countering modern drones.

Although we don’t know specifically which counterdrone systems are being deployed by the Ukrainians, the images of the destroyed drones tell a compelling story. In the drone war, many of the flyers on both sides have been captured or destroyed on the ground, but more than half were disabled while in flight. The destroyed Ukrainian drones often show tremendous damage, including burn marks and other signs that they were shot down by a Russian surface-to-air missile. A logical conclusion is that the Russians’ electronic counterdrone systems were not effective. Meanwhile, the downed Russian drones are typically much more intact, showing relatively minor damage consistent with a precision strike from a laser or electromagnetic pulse. This is exactly what you would expect if the drones had been dispatched by one of the newer Western counterdrone systems.

In the first three weeks of this conflict, Russian drones have failed to achieve the level of success that they did in 2014. The Ukrainians, on the other hand, have logged multiple victories with drone and counterdrone forces assembled in just 8 years. The Russian drones, primarily domestically sourced, have been foiled repeatedly by NATO counterdrone technology. Meanwhile, the Ukrainian drones, such as the TB2s procured from NATO-member Turkey, have had multiple successes against the Russian counterdrone systems.
Match ID: 76 Score: 10.00 source: spectrum.ieee.org age: 247 days
qualifiers: 5.00 sanctions, 2.86 russia, 2.14 energy

Dimming the Sun to Cool the Planet Is a Desperate Idea, Yet We’re Inching Toward It
Tue, 22 Nov 2022 11:00:00 +0000
The scientists who study solar geoengineering don’t want anyone to try it. But climate inaction is making it more likely.
Match ID: 77 Score: 8.57 source: www.newyorker.com age: 5 days
qualifiers: 8.57 energy

House Members Push for U.N. Oversight of Yemen Atrocities
Mon, 21 Nov 2022 20:00:12 +0000

Reps. Ilhan Omar and Joaquin Castro are leading the call to renew international oversight of atrocities committed in the Saudi-led war in Yemen.

The post House Members Push for U.N. Oversight of Yemen Atrocities appeared first on The Intercept.


Match ID: 78 Score: 8.57 source: theintercept.com age: 6 days
qualifiers: 8.57 russia

Empresa com ex-presidente da Petrobras lucra quase meio bilhão com os campos que ele mesmo ajudou a vender
Mon, 21 Nov 2022 09:13:22 +0000

Após deixar estatal, Castello Branco assumiu conselho da 3R, companhia que faturou R$ 469 milhões apenas no terceiro trimestre deste ano.

The post Empresa com ex-presidente da Petrobras lucra quase meio bilhão com os campos que ele mesmo ajudou a vender appeared first on The Intercept.


Match ID: 79 Score: 6.43 source: theintercept.com age: 6 days
qualifiers: 6.43 energy

How the Graphical User Interface Was Invented
Sun, 20 Nov 2022 20:00:00 +0000


Mice, windows, icons, and menus: these are the ingredients of computer interfaces designed to be easy to grasp, simplicity itself to use, and straightforward to describe. The mouse is a pointer. Windows divide up the screen. Icons symbolize application programs and data. Menus list choices of action.

But the development of today’s graphical user interface was anything but simple. It took some 30 years of effort by engineers and computer scientists in universities, government laboratories, and corporate research groups, piggybacking on each other’s work, trying new ideas, repeating each other’s mistakes.


This article was first published as “Of Mice and menus: designing the user-friendly interface.” It appeared in the September 1989 issue of IEEE Spectrum. A PDF version is available on IEEE Xplore. The photographs and diagrams appeared in the original print version.


Throughout the 1970s and early 1980s, many of the early concepts for windows, menus, icons, and mice were arduously researched at Xerox Corp.’s Palo Alto Research Center (PARC), Palo Alto, Calif. In 1973, PARC developed the prototype Alto, the first of two computers that would prove seminal in this area. More than 1200 Altos were built and tested. From the Alto’s concepts, starting in 1975, Xerox’s System Development Department then developed the Star and introduced it in 1981—the first such user-friendly machine sold to the public.

In 1984, the low-cost Macintosh from Apple Computer Inc., Cupertino, Calif., brought the friendly interface to thousands of personal computer users. During the next five years, the price of RAM chips fell enough to accommodate the huge memory demands of bit-mapped graphics, and the Mac was followed by dozens of similar interfaces for PCs and workstations of all kinds. By now, application programmers are becoming familiar with the idea of manipulating graphic objects.

The Mac’s success during the 1980s spurred Apple Computer to pursue legal action over ownership of many features of the graphical user interface. Suits now being litigated could assign those innovations not to the designers and their companies, but to those who first filed for legal protection on them.

The GUI started with Sketchpad


The grandfather of the graphical user interface was Sketchpad [see photograph]. Massachusetts Institute of Technology student Ivan E. Sutherland built it in 1962 as a Ph.D. thesis at MIT’s Lincoln Laboratory in Lexington, Mass. Sketchpad users could not only draw points, line segments, and circular arcs on a cathode ray tube (CRT) with a light pen—they could also assign constraints to, and relationships among, whatever they drew.

Arcs could have a specified diameter, lines could be horizontal or vertical, and figures could be built up from combinations of elements and shapes. Figures could be moved, copied, shrunk, expanded, and rotated, with their constraints (shown as onscreen icons) dynamically preserved. At a time when a CRT monitor was a novelty in itself, the idea that users could interactively create objects by drawing on a computer was revolutionary.


Man sitting in front of a round cathode ray display with a white square and triangle on a black background

Moreover, to zoom in on objects, Sutherland wrote the first window-drawing program, which required him to come up with the first clipping algorithm. Clipping is a software routine that calculates which part of a graphic object is to be displayed and displays only that part on the screen. The program must calculate where a line is to be drawn, compare that position to the coordinates of the window in use, and prevent the display of any line segment whose coordinates fall outside the window.
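
As a concrete illustration of what such a routine computes (a minimal sketch using the later, standard parametric approach, not Sutherland's original code; the coordinates and window bounds are invented), here is one line segment clipped against a rectangular window:

# Liang-Barsky-style parametric clip of a segment to a rectangular window.
def clip_segment(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Return the visible portion as ((x0, y0), (x1, y1)), or None if fully outside."""
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return None          # parallel to this window edge and outside it
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)      # segment enters the window here
            else:
                t1 = min(t1, t)      # segment leaves the window here
            if t0 > t1:
                return None          # no visible portion remains
    return (x0 + t0 * dx, y0 + t0 * dy), (x0 + t1 * dx, y0 + t1 * dy)

print(clip_segment(-5, 5, 15, 5, 0, 0, 10, 10))  # -> ((0.0, 5.0), (10.0, 5.0))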

Though films of Sketchpad in operation were widely shown in the computer research community, Sutherland says today that there was little immediate fallout from the project. Running on MIT’s TX-2 mainframe, it demanded too much computing power to be practical for individual use. Many other engineers, however, see Sketchpad’s design and algorithms as a primary influence on an entire generation of research into user interfaces.

The origin of the computer mouse


The light pens used to select areas of the screen by interactive computer systems of the 1950s and 1960s—including Sketchpad—had drawbacks. To do the pointing, the user’s arm had to be lifted up from the table, and after a while that got tiring. Picking up the pen required fumbling around on the table or, if it had a holder, taking the time after making a selection to put it back.

Sensing an object with a light pen was straightforward: the computer displayed spots of light on the screen and interrogated the pen as to whether it sensed a spot, so the program always knew just what was being displayed. Locating the position of the pen on the screen required more sophisticated techniques—like displaying a cross pattern of nine points on the screen, then moving the cross until it centered on the light pen.
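
A minimal sketch of that tracking idea (hypothetical and greatly simplified; the probe spacing, pen field of view, and iteration count are invented): display a small cross of probe points, ask the pen which ones it sees, and re-center the cross on their average.

# Hypothetical light-pen tracking loop: keep re-centering a probe cross on
# whatever probe points the pen reports seeing.
def pen_sees(point, pen_pos, radius=3):
    return abs(point[0] - pen_pos[0]) <= radius and abs(point[1] - pen_pos[1]) <= radius

def track(pen_pos, cross, steps=20):
    for _ in range(steps):
        probes = [(cross[0] + dx, cross[1] + dy)
                  for dx in (-2, 0, 2) for dy in (-2, 0, 2)]    # 9-point cross
        seen = [p for p in probes if pen_sees(p, pen_pos)]
        if not seen:
            break                       # lost the pen; a real system would rescan
        cross = (sum(p[0] for p in seen) / len(seen),
                 sum(p[1] for p in seen) / len(seen))
    return cross

print(track(pen_pos=(40, 25), cross=(38, 23)))  # settles within a probe spacing of the pen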

In 1964, Douglas Engelbart, a research project leader at SRI International in Menlo Park, Calif., tested all the commercially available pointing devices, from the still-popular light pen to a joystick and a Graphicon (a curve-tracing device that used a pen mounted on the arm of a potentiometer). But he felt the selection failed to cover the full spectrum of possible pointing devices, and somehow he should fill in the blanks.

Then he remembered a 1940s college class he had taken that covered the use of a planimeter to calculate area. (A planimeter has two arms, with a wheel on each. The wheels can roll only along their axes; when one of them rolls, the other must slide.)

If a potentiometer were attached to each wheel to monitor its rotation, he thought, a planimeter could be used as a pointing device. Engelbart explained his roughly sketched idea to engineer William English, who with the help of the SRI machine shop built what they quickly dubbed “the mouse.”



This first mouse was big because it used single-turn potentiometers: one rotation of the wheels had to be scaled to move a cursor from one side of the screen to the other. But it was simple to interface with the computer: the processor just read frequent samples of the potentiometer positioning signals through analog-to-digital converters.

The cursor moved by the mouse was easy to locate, since readings from the potentiometer determined the position of the cursor on the screen, unlike the light pen. But programmers for later windowing systems found that the software necessary to determine which object the mouse had selected was more complex than that for the light pen: they had to compare the mouse’s position with that of all the objects displayed onscreen.
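
A minimal sketch of that extra bookkeeping (hypothetical object records; a real windowing system also has to respect stacking order and non-rectangular shapes):

# Hypothetical hit test: the mouse only yields an (x, y) position, so software
# must search the display list for the object under it.
objects = [
    {"name": "window A", "rect": (10, 10, 300, 200)},   # rect is (x, y, width, height)
    {"name": "icon",     "rect": (50, 60, 64, 64)},
]

def hit_test(x, y, display_list):
    # Search topmost first; here, later entries are assumed to be on top.
    for obj in reversed(display_list):
        ox, oy, w, h = obj["rect"]
        if ox <= x < ox + w and oy <= y < oy + h:
            return obj["name"]
    return None

print(hit_test(70, 80, objects))   # -> 'icon'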

The computer mouse gets redesigned—and redesigned again

Engelbart’s group at SRI ran controlled experiments with mice and other pointing devices, and the mouse won hands down. People adapted to it quickly, it was easy to grab, and it stayed where they put it. Still, Engelbart wanted to tinker with it. After experimenting, his group had concluded that the proper ratio of cursor movement to mouse movement was about 2:1, but he wanted to try varying that ratio—decreasing it at slow speeds and raising it at fast speeds—to improve user control of fine movements and speed up larger movements. Some modern mouse-control software incorporates this idea, including that of the Macintosh.
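
A minimal sketch of such a velocity-dependent gain (the thresholds and gain values below are invented, not Engelbart's or Apple's actual curves):

# Hypothetical pointer-acceleration curve: slow motion gets a low cursor/mouse
# ratio for precision; fast motion gets a higher one to cross the screen quickly.
def cursor_delta(mouse_delta_mm, speed_mm_per_s):
    if speed_mm_per_s < 20:
        gain = 1.0      # fine positioning
    elif speed_mm_per_s < 100:
        gain = 2.0      # the roughly 2:1 ratio the SRI tests favored on average
    else:
        gain = 4.0      # rapid traversal
    return mouse_delta_mm * gain

print(cursor_delta(5, 10), cursor_delta(5, 50), cursor_delta(5, 200))  # 5.0 10.0 20.0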

The mouse, still experimental at this stage, did not change until 1971. Several members of Engelbart’s group had moved to the newly established PARC, where many other researchers had seen the SRI mouse and the test report. They decided there was no need to repeat the tests; any experimental systems they designed would use mice.

Said English, “This was my second chance to build a mouse; it was obvious that it should be a lot smaller, and that it should be digital.” Chuck Thacker, then a member of the research staff, advised PARC to hire inventor Jack Hawley to build it.

Hawley decided the mouse should use shaft encoders, which measure position by a series of pulses, instead of potentiometers (both were covered in Engelbart’s 1970 patent), to eliminate the expensive analog-to-digital converters. The basic principle, of one wheel rolling while the other slid, was licensed from SRI.

The ball mouse was the “easiest patent I ever got. It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”
—Ron Rider

In 1972, the mouse changed again. Ron Rider, now vice president of systems architecture at PARC but then a new arrival, said he was using the wheel mouse while an engineer made excuses for its asymmetric operation (one wheel dragging while one turned). “I suggested that they turn a trackball upside down, make it small, and use it as a mouse instead,” Rider told IEEE Spectrum. This device came to be known as the ball mouse. “Easiest patent I ever got,” Rider said. “It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”

Defining terms


Bit map

The pixel pattern that makes up the graphic display on a computer screen.

Clicking

The motion of pressing a mouse button to initiate an action by software; some actions require double-clicking.

Graphical user interface (GUI)

The combination of windowing displays, menus, icons, and a mouse that is increasingly used on personal computers and workstations.

Icon

An onscreen drawing that represents programs or data.

Menu

A list of command options currently available to the computer user; some stay onscreen, while pop-up or pull-down menus are requested by the user.

Mouse

A device whose motion across a desktop or other surface causes an on-screen cursor to move commensurately; today’s mice move on a ball and have one, two, or three buttons.

Raster display

A cathode ray tube on which images are displayed as patterns of dots, scanned onto the screen sequentially in a predetermined pattern of lines.

Vector display

A cathode ray tube whose gun scans lines, or vectors, onto the screen phosphor.

Window

An area of a computer display, usually one of several, in which a particular program is executing.


In the PARC ball mouse design, the weight of the mouse is transferred to the ball by a swivel device and on one or two casters at the end of the mouse farthest from the wire “tail.” A prototype was built by Xerox’s Electronics Division in El Segundo, Calif., then redesigned by Hawley. The rolling ball turned two perpendicular shafts, with a drum on the end of each that was coated with alternating stripes of conductive and nonconductive material. As the drum turned, the stripes transmitted electrical impulses through metal wipers.
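
A minimal sketch of how pulse streams like these become a position count (one assumption on my part: the sketch uses two offset pulse tracks per axis, the standard quadrature arrangement, which the article does not spell out):

# Hypothetical quadrature decoder: two offset tracks (A, B) per axis; the order
# of their transitions gives direction, and each valid step updates the count.
STEP = {  # (previous AB, current AB) -> +1, -1, or no change
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    pos, prev = 0, samples[0]
    for cur in samples[1:]:
        pos += STEP.get((prev, cur), 0)   # 0 for no change or an invalid jump
        prev = cur
    return pos

print(decode([0b00, 0b01, 0b11, 0b10, 0b00]))   # one full cycle forward -> 4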

When Apple Computer decided in 1979 to design a mouse for its Lisa computer, the design mutated yet again. Instead of a metal ball held against the substrate by a swivel, Apple used a rubber ball whose traction depended on the friction of the rubber and the weight of the ball itself. Simple pads on the bottom of the case carried the weight, and optical scanners detected the motion of the internal wheels. The device had loose tolerances and few moving parts, so that it cost perhaps a quarter as much to build as previous ball mice.

How the computer mouse gained and lost buttons

The first, wooden, SRI mouse had only one button, to test the concept. The plastic batch of SRI mice had three side-by-side buttons—all there was room for, Engelbart said. The first PARC mouse had a column of three buttons, again because that best fit the mechanical design. Today, the Apple mouse has one button, while the rest have two or three. The issue is no longer room—a standard 6-by-10-cm mouse could now have dozens of buttons—but human factors, and the experts have strong opinions.

Said English, now director of internationalization at Sun Microsystems Inc., Mountain View, Calif.: “Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.” He sees two buttons as the minimum because two functions are basic to selecting an object: pointing to its start, then extending the motion to the end of the object.

William Verplank, a human factors specialist in the group that tested the graphical interface at Xerox from 1978 into the early 1980s, concurred. He told Spectrum that with three buttons, Alto users forgot which button did what. The group’s tests showed that one button was also confusing, because it required actions such as double-clicking to select and then open a file.

“We have agonizing videos of naive users struggling” with these problems, Verplank said. They concluded that for most users, two buttons (as used on the Star) are optimal, if a button means the same thing in every application. English experimented with one-button mice at PARC before concluding they were a bad idea.


“Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.”
—William English


A computer monitor with a chunky white keyboard sitting on a desk

But many interface designers dislike multiple buttons, saying that double-clicking a single button to select an item is easier than remembering which button points and which extends. Larry Tesler, formerly a computer scientist at PARC, brought the one-button mouse to Apple, where he is now vice president of advanced technology. The company’s rationale is that to attract novices to its computers one button was as simple as it could get.

More than two million one-button Apple mice are now in use. The Xerox and Microsoft two-button mice are less common than either Apple’s ubiquitous one-button model or the three-button mice found on technical workstations. Dozens of companies manufacture mice today; most are slightly smaller than a pack of cigarettes, with minor variations in shape.

How windows first came to the computer screen


In 1962, Sketchpad could split its screen horizontally into two independent sections. One section could, for example, give a close-up view of the object in the other section. Researchers call Sketchpad the first example of tiled windows, which are laid out side by side. They differ from overlapping windows, which can be stacked on top of each other, or overlaid, obscuring all or part of the lower layers.

Windows were an obvious means of adding functionality to a small screen. In 1969, Engelbart equipped NLS (as the On-Line System he invented at SRI during the 1960s was known, to distinguish it from the Off-Line System known as FLS) with windows. They split the screen into multiple parts horizontally or vertically, and introduced cross-window editing with a mouse.

By 1972, led by researcher Alan Kay, the Smalltalk programming language group at Xerox PARC had implemented their version of windows. They were working with far different technology from Sutherland or Engelbart: by deciding that their images had to be displayed as dots on the screen, they led a move from vector to raster displays, to make it simple to map the assigned memory location of each of those spots. This was the bit map invented at PARC, and made viable during the 1980s by continual performance improvements in processor logic and memory speed.

Experimenting with bit-map manipulation, Smalltalk researcher Dan Ingalls developed the bit-block transfer procedure, known as BitBlt. The BitBlt software enabled application programs to mix and manipulate rectangular arrays of pixel values in on-screen or off-screen memory, or between the two, combining the pixel values and storing the result in the appropriate bit-map location.
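
A minimal sketch of the operation (nested Python lists standing in for real bit-mapped memory; an illustration of the idea, not PARC's Smalltalk implementation):

# Hypothetical BitBlt: combine a source rectangle with a destination rectangle
# using a per-pixel rule and store the result back in the destination.
def bitblt(dst, src, dst_x, dst_y, src_x, src_y, w, h, op=lambda d, s: s):
    """dst, src: 2-D lists of pixel values; op: combining rule (copy, OR, XOR, ...)."""
    for row in range(h):
        for col in range(w):
            d = dst[dst_y + row][dst_x + col]
            s = src[src_y + row][src_x + col]
            dst[dst_y + row][dst_x + col] = op(d, s)
    return dst

screen = [[0] * 8 for _ in range(4)]
glyph = [[1, 1], [1, 0]]
bitblt(screen, glyph, dst_x=3, dst_y=1, src_x=0, src_y=0, w=2, h=2,
       op=lambda d, s: d | s)            # OR the glyph into the screen bit map
print(screen[1], screen[2])              # [0,0,0,1,1,0,0,0] [0,0,0,1,0,0,0,0]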

BitBlt made it much easier to write programs to scroll a window (move an image through it), resize (enlarge or contract) it, and drag windows (move them from one location to another on screen). It led Kay to create overlapping windows. They were soon implemented by the Smalltalk group, but made clipping harder.

Some researchers question whether overlapping windows offer more benefits than tiled on the grounds that screens with overlapping windows become so messy the user gets lost.

In a tiling system, explained researcher Peter Deutsch, who worked with the Smalltalk group, the clipping borders are simply horizontal or vertical lines from one screen border to another, and software just tracks the location of those lines. But overlapping windows may appear anywhere on the screen, randomly obscuring bits and pieces of other windows, so that quite irregular regions must be clipped. Thus application software must constantly track which portions of their windows remain visible.
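
A minimal sketch of the bookkeeping Deutsch describes (hypothetical rectangle representation; real display managers use more compact region structures): subtracting one window's rectangle from another leaves up to four visible strips that must still be drawn.

# Hypothetical visible-region bookkeeping for overlapping windows.
def subtract(base, top):
    """Rects are (x, y, w, h). Return the parts of `base` not covered by `top`."""
    bx, by, bw, bh = base
    tx, ty, tw, th = top
    ix0, iy0 = max(bx, tx), max(by, ty)
    ix1, iy1 = min(bx + bw, tx + tw), min(by + bh, ty + th)
    if ix0 >= ix1 or iy0 >= iy1:
        return [base]                              # no overlap: base fully visible
    pieces = [
        (bx, by, bw, iy0 - by),                    # strip above the overlap
        (bx, iy1, bw, by + bh - iy1),              # strip below
        (bx, iy0, ix0 - bx, iy1 - iy0),            # strip to the left
        (ix1, iy0, bx + bw - ix1, iy1 - iy0),      # strip to the right
    ]
    return [p for p in pieces if p[2] > 0 and p[3] > 0]

print(subtract((0, 0, 100, 100), (50, 50, 100, 100)))  # two visible strips remain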

Some researchers still question whether overlapping windows offer more benefits than tiled, at least above a certain screen size, on the grounds that screens with overlapping windows become so messy the user gets lost. Others argue that overlapping windows more closely match users’ work patterns, since no one arranges the papers on their physical desktop in neat horizontal and vertical rows. Among software engineers, however, overlapping windows seem to have won for the user interface world.

So has the cut-and-paste editing model that Larry Tesler developed, first for the Gypsy text editor he wrote at PARC and later for Apple. Charles Irby—who worked on Xerox’s windows and is now vice president of development at Metaphor Computer Systems Inc., Mountain View, Calif.—noted, however, that cut-and-paste worked better for pure text-editing than for moving graphic objects from one application to another.

The origin of the computer menu bar


Menus—functions continuously listed onscreen that could be called into action with key combinations—were commonly used in defense computing by the 1960s. But it was only with the advent of BitBlt and windows that menus could be made to appear as needed and to disappear after use. Combined with a pointing device to indicate a user’s selection, they are now an integral part of the user-friendly interface: users no longer need to refer to manuals or memorize available options.

Instead, the choices can be called up at a moment’s notice whenever needed. And menu design has evolved. Some new systems use nested hierarchies of menus; others offer different menu versions—one with the most commonly used commands for novices, another with all available commands for the experienced user.

Among the first to test menus on demand was PARC researcher William Newman, in a program called Markup. Hard on his heels, the Smalltalk group built in pop-up menus that appeared on screen at the cursor site when the user pressed one of the mouse buttons.

Implementation was on the whole straightforward, recalled Deutsch. The one exception was determining whether the menu or the application should keep track of the information temporarily obscured by the menu. In the Smalltalk 76 version, the popup menu saved and restored the screen bits it overwrote. But in today’s multitasking systems, that would not work, because an application may change those bits without the menu’s knowledge. Such systems add another layer to the operating system: a display manager that tracks what is written where.
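
A minimal sketch of the Smalltalk-76 approach (a toy frame buffer of Python lists; the names and fill value are invented):

# Hypothetical save/restore of the pixels a pop-up menu covers.
def show_menu(screen, x, y, w, h, fill=9):
    saved = [row[x:x + w] for row in screen[y:y + h]]   # copy the bits under the menu
    for row in screen[y:y + h]:
        row[x:x + w] = [fill] * w                       # draw the menu over them
    return saved

def hide_menu(screen, x, y, w, h, saved):
    for row, old in zip(screen[y:y + h], saved):
        row[x:x + w] = old                              # put the old bits back

screen = [[0] * 6 for _ in range(4)]
saved = show_menu(screen, 2, 1, 3, 2)
hide_menu(screen, 2, 1, 3, 2, saved)
print(screen == [[0] * 6 for _ in range(4)])            # True: screen fully restored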

The production Xerox Star, in 1981, featured a further advance: a menu bar, essentially a row of words indicating available menus that could be popped up for each window. Human factors engineer Verplank recalled that the bar was at first located at the bottom of its window. But the Star team found users were more likely to associate a bar with the window below it, so it was moved to the top of its window.

Apple simplified things in its Lisa and Macintosh with a single bar placed at the top of the screen. This menu bar relates only to the window in use: the menus could be “pulled down” from the bar, to appear below it. Designer William D. Atkinson received a patent (assigned to Apple Computer) in August 1984 for this innovation.

One new addition that most user interface pioneers consider an advantage is the tear-off menu, which the user can move to a convenient spot on the screen and “pin” there, always visible for ready access.

Many windowing interfaces now offer command-key or keyboard alternatives for many commands as well. This return to the earliest of user interfaces—key combinations—neatly supplements menus, providing both ease of use for novices and for the less experienced, and speed for those who can type faster than they can point to a menu and click on a selection.

How the computer “icon” got its name


Sketchpad had on-screen graphic objects that represented constraints (for example, a rule that lines be the same length), and the Flex machine built in 1967 at the University of Utah by students Alan Kay and Ed Cheadle had squares that represented programs and data (like today’s computer “folders”). Early work on icons was also done by Bell Northern Research, Ottawa, Canada, stemming from efforts to replace the recently legislated bilingual signs with graphic symbols.

But the concept of the computer “icon” was not formalized until 1975. David Canfield Smith, a computer science graduate student at Stanford University in California, began work on his Ph.D. thesis in 1973. His advisor was PARC’s Kay, who suggested that he look at using the graphics power of the experimental Alto not just to display text, but rather to help people program.

David Canfield Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents.

Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents: a Russian icon of a saint is holy and is to be venerated. Smith’s computer icons contained all the properties of the programs and data represented, and therefore could be linked or acted on as if they were the real thing.

After receiving his Ph.D. in 1975, Smith joined Xerox in 1976 to work on Star development. The first thing he did, he said, was to recast his concept of icons in office terms. “I looked around my office and saw papers, folders, file cabinets, a telephone, and bookshelves, and it was an easy translation to icons,” he said.

Xerox researchers developed, tested, and revised icons for the Star interface for three years before the first version was complete. At first they attempted to make the icons look like a detailed photographic rendering of the object, recalled Irby, who worked on testing and refining the Xerox windows. Trading off label space, legibility, and the number of icons that fit on the screen, they decided to constrain icons to a 1-inch (2.5-centimeter) square of 64 by 64 pixels, or 512 eight-bit bytes.
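
The 512-byte figure follows directly from one bit per pixel, which I am assuming here since the article does not state the bit depth:

# Icon storage, assuming a 1-bit-deep (monochrome) bitmap:
pixels = 64 * 64        # 4,096 pixels in a 64-by-64 icon
bits = pixels * 1       # 1 bit per pixel
print(bits // 8)        # 512 eight-bit bytes, matching the figure in the text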

Then, Verplank recalls, they discovered that because of a background pattern based on two-pixel dots, the right-hand side of the icons appeared jagged. So they increased the width of the icons to 65 pixels, despite an outcry from programmers who liked the neat 16-bit breakdown. But the increase stuck, Verplank said, because they had already decided to store 72 bits per side to allow for white space around each icon.

After settling on a size for the icons, the Star developers tested four sets developed by two graphic designers and two software engineers. They discovered that, for example, resizing may cause problems. They shrunk the icon for a person—a head and shoulders—in order to use several of them to represent a group, only to hear one test subject say the screen resolution made the reduced icon look like a cross above a tombstone. Computer graphics artist Norm Cox, now of Cox & Hall, Dallas, Texas, was finally hired to redesign the icons.

Icon designers today still wrestle with the need to make icons adaptable to the many different system configurations offered by computer makers. Artist Karen Elliott, who has designed icons for Microsoft, Apple, Hewlett-Packard Co., and others, noted that on different systems an icon may be displayed in different colors, several resolutions, and a variety of gray shades, and it may also be inverted (light and dark areas reversed).

In the past few years, another concern has been added to icon designers’ tasks: internationalization. Icons designed in the United States often lack space for translations into languages other than English. Elliott therefore tries to leave space for both the longer words and the vertical orientation of some languages.


A square white macintosh computer with a white keyboard, in a separate image below, computer icons and the text address book, address, addresses

The main rule is to make icons simple, clean, and easily recognizable. Discarded objects are placed in a trash can on the Macintosh. On the NeXT Computer System, from NeXT Inc., Palo Alto, Calif.—the company formed by Apple cofounder Steven Jobs after he left Apple—they are dumped into a Black Hole. Elliott sees NeXT’s black hole as one of the best icons ever designed: “It is distinct; its roundness stands out from the other, square icons, and this is important on a crowded display. It fits my image of information being sucked away, and it makes it clear that dumping something is serious.”

English disagrees vehemently. The black hole “is fundamentally wrong,” he said. “You can dig paper out of a wastebasket, but you can’t dig it out of a black hole.” Another critic called the black hole familiar only to “computer nerds who read mostly science fiction and comics,” not to general users.

With the introduction of the Xerox Star in June 1981, the graphical user interface, as it is known today, arrived on the market. Though not a commercial triumph, the Star generated great interest among computer users, as the Alto before it had within the universe of computer designers.

Even before the Star was introduced, Jobs, then still at Apple, had visited Xerox PARC in November 1979 and asked the Smalltalk researchers dozens of questions about the Alto’s internal design. He later recruited Larry Tesler from Xerox to design the user interface of the Apple Lisa.

With the Lisa and then the Macintosh, introduced in January 1983 and January 1984 respectively, the graphical user interface reached the low-cost, high-volume computer market.

At almost $10,000, the Lisa was deemed too expensive for the office market. But aided by prizewinning advertising and its lower price, the Macintosh took the world by storm. Early Macs had only 128K bytes of RAM, which made them slow to respond because it was too little memory for heavy graphic manipulation. Also, the time needed for programmers to learn its Toolbox of graphics routines delayed application packages until well into 1985. But the Mac’s ease of use was indisputable, and it generated interest that spilled over into the MS-DOS world of IBM PCs and clones, as well as Unix-based workstations.

Who owns the graphical user interface?


The widespread acceptance of such interfaces, however, has led to bitter lawsuits to establish exactly who owns what. So far, none of several litigious companies has definitively established that it owns the software that implements windows, icons, or early versions of menus. But the suits continue.

Virtually all the companies that make and sell either wheel or ball mice paid license fees to SRI or to Xerox for their patents. Engelbart recalled that SRI patent attorneys inspected all the early work on the interface, but understood only hardware. After looking at developments like the implementation of windows, they told him that none of it was patentable.

At Xerox, the Star development team proposed 12 patents having to do with the user interface. The company’s patent committee rejected all but two on hardware—one on BitBlt, the other on the Star architecture. At the time, Charles Irby said, it was a good decision. Patenting required full disclosure, and no precedents then existed for winning software patent suits.


A computer screen in blue and white with multiple open windows


Three computer windows with greyscale images on a dark grey background


Computer windows tinted blue on a black background partially obscuring a planet and starfield


The most recent and most publicized suit was filed in March 1988, by Apple, against both Microsoft and Hewlett-Packard Co., Palo Alto, Calif. Apple alleges that HP’s New Wave interface, requiring version 2.03 of Microsoft’s Windows program, embodies the copyrighted “audio visual computer display” of the Macintosh without permission; that the displays of Windows 2.03 are illegal copies of the Mac’s audiovisual works; and that Windows 2.03 also exceeds the rights granted in a November 1985 agreement in which Microsoft acknowledged that the displays in Windows 1.0 were derivatives of those in Apple’s Lisa and Mac.

In March 1989, U.S. District Judge William W. Schwarzer ruled Microsoft had exceeded the bounds of its license in creating Windows 2.03. Then in July 1989 Schwarzer ruled that all but 11 of the 260 items that Apple cited in its suit were, in fact, acceptable under the 1985 agreement. The larger issue—whether Apple’s copyrights are valid, and whether Microsoft and HP infringed on them—will not now be examined until 1990.

Among those 11 are overlapping windows and movable icons. According to Pamela Samuelson, a noted software intellectual property expert and visiting professor at Emory University Law School, Atlanta, Ga., many experts would regard both as functional features of an interface that cannot be copyrighted, rather than “expressions” of an idea protectable by copyright.

But lawyers for Apple—and for other companies that have filed lawsuits to protect the “look and feel” of their screen displays—maintain that if such protection is not granted, companies will lose the economic incentive to market technological innovations. How is Apple to protect its investment in developing the Lisa and Macintosh, they argue, if it cannot license its innovations to companies that want to take advantage of them?

If the Apple-Microsoft case does go to trial on the copyright issues, Samuelson said, the court may have to consider whether Apple can assert copyright protection for overlapping windows-an interface feature on which patents have also been granted. In April 1989, for example, Quarterdeck Office Systems Inc., Santa Monica, Calif., received a patent for a multiple windowing system in its Desq system software, introduced in 1984.

Adding fuel to the legal fire, Xerox said in May 1989 it would ask for license fees from companies that use the graphical user interface. But it is unclear whether Xerox has an adequate claim to either copyright or patent protection for the early graphical interface work done at PARC. Xerox did obtain design patents on later icons, noted human factors engineer Verplank. Meanwhile, both Metaphor and Sun Microsystems have negotiated licenses with Xerox for their own interfaces.

To Probe Further

The September 1989 IEEE Computer contains an article, “The Xerox ‘Star’: A Retrospective,” by Jeff Johnson et al., covering development of the Star. “Designing the Star User Interface,” [PDF] by David C. Smith et al., appeared in the April 1982 issue of Byte.

The Sept. 12, 1989, PC Magazine contains six articles on graphical user interfaces for personal computers and workstations. The July 1989 Byte includes “A Guide to [Graphical User Interfaces],” by Frank Hayes and Nick Baran, which describes 12 current interfaces for workstations and personal computers. “The Interface of Tomorrow, Today,” by Howard Rheingold, in the July 10, 1989, InfoWorld does the same. “The interface that launched a thousand imitations,” by Richard Rawles, in the March 21, 1989, MacWeek covers the Macintosh interface.

The human factors of user interface design are discussed in The Psychology of Everyday Things, by Donald A. Norman (Basic Books Inc., New York, 1988). The January 1989 IEEE Software contains several articles on methods, techniques, and tools for designing and implementing graphical interfaces. The Way Things Work, by David Macaulay (Houghton Mifflin Co., Boston, 1988), contains a detailed drawing of a ball mouse.

The October 1985 IEEE Spectrum covered Xerox PARC’s history in “Research at Xerox PARC: a founder’s assessment,” by George Pake (pp. 54-61) and “Inside the PARC: the ‘information architects,’” by Tekla Perry and Paul Wallich (pp. 62-75).

William Atkinson received patent no. 4,464,652 for the pulldown menu system on Aug. 8, 1984, and assigned it to Apple. Gary Pope received patent no. 4,823,108, for an improved system for displaying images in “windows” on a computer screen, on April 18, 1989, and assigned it to Quarterdeck Office Systems.

The wheel mouse patent, no. 3,541,541, “X-Y position indicator for a display system,” was issued to Douglas Engelbart on Nov. 17, 1970, and assigned to SRI International. The ball mouse patent, no. 3,835,464, was issued to Ronald Rider on Sept. 10, 1974, and assigned to Xerox.

The first selection device tests to include a mouse are covered in “Display-Selection Techniques for Text Manipulation,” by William English, Douglas Engelbart, and Melvyn Berman, in IEEE Transactions on Human Factors in Electronics, March 1967.

Sketchpad: A Man-Machine Graphical Communication System, by Ivan E. Sutherland (Garland Publishing Inc., New York City and London, 1980), reprints his 1963 Ph.D. thesis.










Match ID: 80 Score: 5.71 source: spectrum.ieee.org age: 7 days
qualifiers: 5.71 russia

The EV Transition Explained: Battery Challenges
Sat, 19 Nov 2022 19:30:00 +0000


“Energy and information are two basic currencies of organic and social systems,” the economics Nobelist Herb Simon once observed. “A new technology that alters the terms on which one or the other of these is available to a system can work on it the most profound changes.”

Electric vehicles at scale alter the terms of both basic currencies concurrently. Reliable, secure supplies of minerals and software are core elements for EVs, which represent a “shift from a fuel-intensive to a material-intensive energy system,” according to a report by the International Energy Agency (IEA). For example, the mineral requirements for an EV’s batteries and electric motors are six times that of an internal-combustion-engine (ICE) vehicle, which can increase the average weight of an EV by 340 kilograms (750 pounds). For something like the Ford Lightning, the weight can be more than twice that amount.

EVs also create a shift from an electromechanical-intensive to an information-intensive vehicle. EVs offer a virtual clean slate from which to accelerate the design of safe, software-defined vehicles, with computing and supporting electronics being the prime enabler of a vehicle’s features, functions, and value. Software also allows for the decoupling of the internal mechanical connections needed in an ICE vehicle, permitting an EV to be controlled remotely or autonomously. An added benefit is that the loss of the ICE power train not only reduces the components a vehicle requires but also frees up space for increased passenger comfort and storage.

The effects of Simon’s profound changes are readily apparent, forcing a 120-year-old industry to fundamentally reinvent itself. EVs require automakers to design new manufacturing processes and build plants to make both EVs and their batteries. Ramping up the battery supply chain is the automakers’ current “most challenging topic,” according to VW chief financial officer Arno Antlitz.


These plants are also very expensive. Ford and its Korean battery supplier SK Innovation are spending US $5.6 billion to produce F-Series EVs and batteries in Stanton, Tenn., for example, while GM is spending $2 billion to produce its new Cadillac Lyriq EVs in Spring Hill, Tenn. As automakers expand their lines of EVs, tens of billions more will need to be invested in both manufacturing and battery plants. It is little wonder that Tesla CEO Elon Musk calls EV factories “gigantic money furnaces.”

Furthermore, Kristin Dziczek, a policy analyst with the Federal Reserve Bank of Chicago, adds that there are scores of new global EV competitors actively seeking to replace the legacy automakers. The “simplicity” of EVs in comparison with ICE vehicles allows these disruptors to compete virtually from scratch with legacy automakers, not only in the car market itself but for the material and labor inputs as well.

Batteries and the supply-chain challenge

Another critical question is whether all the planned battery-plant output will support expected EV production demands. For instance, the United States will require 8 million EV batteries annually by 2030 if its target to make EVs half of all new-vehicle sales is met, with that number rising each year after. As IEA executive director Fatih Birol observes, “Today, the data shows a looming mismatch between the world’s strengthened climate ambitions and the availability of critical minerals that are essential to realizing those ambitions.”
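That 8 million figure follows directly from the sales target; here is a minimal back-of-the-envelope sketch (the roughly 16 million annual U.S. new-vehicle sales assumed below is not a figure from this article):

    # Rough check of the 8-million-battery figure. The ~16 million annual U.S.
    # new-vehicle sales number is an assumption, not from the article.
    annual_us_vehicle_sales = 16_000_000
    ev_share_target_2030 = 0.5          # EVs as half of all new-vehicle sales

    battery_packs_needed = annual_us_vehicle_sales * ev_share_target_2030
    print(f"{battery_packs_needed:,.0f} EV battery packs per year")   # 8,000,000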

This mismatch worries automakers. GM, Ford, Tesla, and others have moved to secure batteries through 2025, but it could be tricky after that. Rivian Automotive chief executive RJ Scaringe was recently quoted in the Wall Street Journal as saying that “90 to 95 percent of the (battery) supply chain does not exist,” and that the current semiconductor chip shortage is “a small appetizer to what we are about to feel on battery cells over the next two decades.”

The competition to secure raw materials, along with increased consumer demand, has caused EV prices to spike. Ford has raised the price of the Lightning by $6,000 to $8,500, and CEO Jim Farley bluntly states that, given material shortages in the foreseeable future, “I don’t think we should be confident in any other outcomes than an increase in prices.”

Stiff Competition for Engineering Talent


One critical area of resource competition is over the limited supply of software and systems engineers with the mechatronics and robotics expertise needed for EVs. Major automakers have moved aggressively to bring more software and systems-engineering expertise on board, rather than have it reside at their suppliers, as they have traditionally done. Automakers feel that if they're not in control of the software, they're not in control of their product.

Volvo CEO Jim Rowan stated earlier this year that increasing the computing power in EVs will be harder, and more transformative for the automotive industry, than the switch from ICE vehicles to EVs. This means that EV winners and losers will in great part be separated by their “relative strength in their cyberphysical systems engineering,” states Clemson’s Paredis.

Even for the large auto suppliers, the transition to EVs will not be an easy road. For instance, automakers are demanding these suppliers absorb more cost cuts because automakers are finding EVs so expensive to build. Not only do automakers want to bring cutting-edge software expertise in-house, they want greater inside expertise in critical EV supply-chain components, especially batteries.

Automakers, including Tesla, are all scrambling for battery talent, with bidding wars reportedly breaking out to acquire top candidates. With automakers planning to spend more than $13 billion to build at least 13 new EV battery plants in North America within the next five to seven years, experienced management and production-line talent will likely be in extremely short supply. Tesla’s Texas Gigafactory needs some 10,000 workers alone, for example. With at least 60 new battery plants planned to be in operation globally by 2030, and scores needed soon afterward, major battery makers are already highlighting their expected skill shortages.


The underlying reason for the worry: Supplying sufficient raw materials to existing and planned battery plants as well as to the manufacturers of other renewable energy sources and military systems—who are competing for the same materials—has several complications to overcome. Among them is the need for more mines to provide the metals required, which have spiked in price as demand has increased. For example, while demand for lithium is growing rapidly, investment in mines has significantly lagged the investment that has been aimed toward EVs and battery plants. It can take five or more years to get a lithium mine up and going, but operations can start only after it has secured the required permits, a process that itself can take years.

Mining the raw materials, of course, assumes that there is sufficient refining capability to process them, which, outside of China, is limited. This is especially true in the United States, which, according to a Biden Administration special supply-chain investigative report, has “limited raw material production capacity and virtually no processing capacity.” Consequently, the report states, the United States “exports the limited raw materials produced today to foreign markets.” For example, output from the only nickel mine in the United States, the Eagle Mine in Michigan, is sent to Canada for smelting.


One possible solution is to move away from today’s nickel- and cobalt-based lithium-ion batteries and nickel-metal hydride batteries to other battery chemistries such as lithium-iron phosphate, lithium-sulfur, lithium-metal, and sodium-ion, among many others, not to mention solid-state batteries, as a way to alleviate some of the material supply and cost problems. Tesla is moving toward the use of lithium-iron phosphate batteries, as is Ford for some of its vehicles. These batteries are cobalt free, which alleviates several sourcing issues.

Another solution may be recycling both EV batteries and the waste and rejects from battery manufacturing, which can run between 5 and 10 percent of production. Effective recycling of EV batteries “has the potential to reduce primary demand compared to total demand in 2040, by approximately 25 percent for lithium, 35 percent for cobalt and nickel, and 55 percent for copper,” according to a report by the University of Sydney’s Institute for Sustainable Futures.



While investment in EV-battery-recycling facilities has started, a looming question is whether there will be enough battery-factory scrap and other lithium-ion battery waste to keep those facilities operating until enough end-of-life batteries arrive to make them profitable. Lithium-ion battery-pack recycling is so time-consuming and expensive that mining lithium is often cheaper than recycling it, for example. And the low- and no-cobalt lithium batteries that many automakers are moving toward may be unprofitable to recycle at all.

An additional concern is that EV batteries, once no longer useful for propelling the EV, have years of life left in them. They can be refurbished, rebuilt, and reused in EVs, or repurposed into storage devices for homes, businesses, or the grid. Whether it will make economic sense to do either at scale versus recycling them remains to be seen.

As Howard Nusbaum, the administrator of the National Salvage Vehicle Reporting Program (NSVRP), succinctly puts it, “There is no recycling, and no EV-recycling industry, if there is no economic basis for one.”

In the next article in the series, we will look at whether the grid can handle tens of millions of EVs.


Match ID: 81 Score: 5.00 source: spectrum.ieee.org age: 8 days
qualifiers: 2.86 korea, 2.14 energy

The Dangers of Democrats Boosting MAGA Republicans
Sat, 19 Nov 2022 12:00:35 +0000

The midterm strategy to fund Republican extremists in the primaries worked. Some fear it will strengthen the far right in the long term.

The post The Dangers of Democrats Boosting MAGA Republicans appeared first on The Intercept.


Match ID: 82 Score: 3.57 source: theintercept.com age: 8 days
qualifiers: 3.57 trump

Was the Killing of a Migrant by a Former ICE Warden a Hate Crime or a Terrible Accident?
Sat, 19 Nov 2022 11:00:49 +0000

At Fivemile Tank, a watering hole in the bleak desert of West Texas, two men pulled up in a truck. One aimed a gun into the brush.

The post Was the Killing of a Migrant by a Former ICE Warden a Hate Crime or a Terrible Accident? appeared first on The Intercept.


Match ID: 83 Score: 3.57 source: theintercept.com age: 8 days
qualifiers: 3.57 trump

How Trump could sabotage the GOP
Fri, 18 Nov 2022 12:20:29 EST
The obvious smart play for the GOP is to move on. Actually doing that is another matter, given Trump's demonstrated threats and capacity for vengefulness.
Match ID: 84 Score: 3.57 source: www.washingtonpost.com age: 9 days
qualifiers: 3.57 trump

Could Trump's legal issues derail his 2024 presidential bid? – video explainer
Thu, 17 Nov 2022 16:09:08 GMT

Donald Trump has announced his candidacy for the Republican presidential nomination in 2024, probably sparking another period of tumult in US politics and especially his own political party. His third candidacy comes as he faces intensifying legal troubles, including investigations by the justice department into the removal of hundreds of classified documents from the White House to his Florida estate and into his role in the January 6 attack. But could they derail his bid? The Guardian US politics correspondent Hugo Lowell explains what Trump is facing and whether he still stands a chance

Continue reading...
Match ID: 85 Score: 3.57 source: www.theguardian.com age: 10 days
qualifiers: 3.57 trump

The Twitter Bubble Let Democrats Defy Political Gravity
Thu, 17 Nov 2022 15:15:50 +0000
The midterm elections showed that the far-right's manufactured narrative about trans kids doomed the GOP when they made it policy.
Match ID: 86 Score: 3.57 source: www.wired.com age: 10 days
qualifiers: 3.57 trump

MOXIE Shows How to Make Oxygen on Mars
Thu, 08 Sep 2022 15:27:59 +0000


Planning for the return journey is an integral part of the preparations for a crewed Mars mission. Astronauts will require about 50 tonnes of rocket propellant for the ascent vehicle that will lift them off the planet’s surface, including approximately 31 tonnes of oxygen. The less appealing option is for crewed missions to carry the required oxygen with them; scientists are optimistic that it could instead be produced from the carbon dioxide–rich Martian atmosphere itself, using a system called MOXIE.

The Mars Oxygen ISRU (In-Situ Resource Utilization) Experiment is an 18-kilogram unit housed within the Perseverance rover on Mars. The unit is “the size of a toaster,” adds Jeffrey Hoffman, professor of aerospace engineering at MIT. Its job is to electrochemically break down carbon dioxide collected from the Martian atmosphere into oxygen and carbon monoxide. It also tests the purity of the oxygen.

Between February 2021, when it arrived on Mars aboard Perseverance, and the end of that year, MOXIE had several successful test runs. According to a review of the system by Hoffman and colleagues, published in Science Advances, it has demonstrated its ability to produce oxygen during both night and day, when temperatures can vary by over 100 ºC. The oxygen generation rate and purity also meet the requirements for producing rocket propellant and for breathing. The authors assert that a scaled-up version of MOXIE could produce the required oxygen for lift-off as well as for the astronauts to breathe.

Next question: How to power any oxygen-producing factories that NASA can land on Mars? Perhaps via NASA’s Kilopower fission reactors?

MOXIE is a first step toward a much larger and more complex system to support the human exploration of Mars. The researchers estimate a required generation rate of 2 to 3 kilograms per hour, compared with the current MOXIE rate of 6 to 8 grams per hour, to produce enough oxygen for lift-off for a crew arriving 26 months later. “So we’re talking about a system that’s a couple of hundred times bigger than MOXIE,” Hoffman says.

They calculate this rate accounting for eight months to get to Mars, followed by some time to set up the system. “We figure you'd probably have maybe 14 months to make all the oxygen.” Further, he says, the produced oxygen would have to be liquefied to be used as a rocket propellant, something the current version of MOXIE doesn’t do.
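The scale-up factor Hoffman cites is straightforward arithmetic on the figures above; a minimal sketch (30-day months are assumed for the 14-month window):

    # Back-of-the-envelope check of the MOXIE scale-up numbers quoted above.
    oxygen_needed_kg = 31_000               # ~31 tonnes of oxygen for the ascent vehicle
    window_hours = 14 * 30 * 24             # Hoffman's ~14-month production window
    print(oxygen_needed_kg / window_hours)  # ~3.1 kg/h, in line with the 2-3 kg/h requirement

    # Scale factor implied by the stated ranges: 2-3 kg/h needed vs. 6-8 g/h today.
    print(2.0 / 0.008, 3.0 / 0.006)         # ~250x to ~500x current MOXIE output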

MOXIE also currently faces several design constraints because, says Hoffman, a former astronaut, “our only ride to Mars was inside the Perseverance rover.” This limited the amount of power available to operate the unit, the amount of heat they could produce, the volume and the mass.

“MOXIE does not work nearly as efficiently as a stand-alone system that was specifically designed would,” says Hoffman. Most of the time, it’s turned off. “Every time we want to make oxygen, we have to heat it up to 800 ºC, so most of the energy goes into heating it up and running the compressor, whereas in a well-designed stand-alone system, most of the energy will go into the actual electrolysis, into actually producing the oxygen.”

However, there are still many kinks to iron out in the scaling-up process. To begin with, any oxygen-producing system will need lots of power. Hoffman thinks nuclear power is the most likely option, maybe NASA’s Kilopower fission reactors. The setup and the cabling would certainly be challenging, he says. “You’re going to have to launch to all of these nuclear reactors, and of course, they’re not going to be in exactly the same place as the [other] units,” he says. “So, robotically, you’re going to have to connect to the electrical cables to bring power to the oxygen-producing unit.”

Then there are the solid oxide electrolysis units, which Hoffman points out are carefully machined systems. Fortunately, the company that makes them, OxEon, has already designed, built, and tested a full-scale unit, a hundred times bigger than the one on MOXIE. “Several of those units would be required to produce oxygen at the quantities that we need,” Hoffman says.

He also adds that at present, there is no redundancy built into MOXIE. If any part fails, the whole system dies. “If you’re counting on a system to produce oxygen for rocket propellant and for breathing, you need very high reliability, which means you’re going to need quite a few redundant units.”

Moreover, the system has to be pretty much autonomous, Hoffman says. “It has to be able to monitor itself, run itself.” For testing purposes, every time MOXIE is powered up, there is plenty of time to plan. A full-scale MOXIE system, though, would have to run continuously, and for that it has to be able to adjust automatically to changes in the Martian atmosphere, whose density can vary by a factor of two over the year, as well as to the temperature swings between night and day.


Match ID: 87 Score: 3.57 source: spectrum.ieee.org age: 80 days
qualifiers: 2.14 energy, 1.43 nuclear

Pentagon Aims to Demo a Nuclear Spacecraft Within 5 Years
Thu, 09 Jun 2022 16:44:41 +0000


In the latest push for nuclear power in space, the Pentagon’s Defense Innovation Unit (DIU) awarded a contract in May to Seattle-based Ultra Safe Nuclear to advance its nuclear power and propulsion concepts. The company is making a soccer ball–size radioisotope battery it calls EmberCore. The DIU’s goal is to launch the technology into space for demonstration in 2027.

Ultra Safe Nuclear’s system is intended to be lightweight, scalable, and usable as both a propulsion source and a power source. It will be specifically designed to give small-to-medium-size military spacecraft the ability to maneuver nimbly in the space between Earth orbit and the moon. The DIU effort is part of the U.S. military’s recently announced plans to develop a surveillance network in cislunar space.

Besides speedy space maneuvers, the DIU wants to power sensors and communication systems without having to worry about solar panels pointing in the right direction or batteries having enough charge to work at night, says Adam Schilffarth, director of strategy at Ultra Safe Nuclear. “Right now, if you are trying to take radar imagery in Ukraine through cloudy skies,” he says, “current platforms can only take a very short image because they draw so much power.”

Radioisotope power sources are well suited for small, uncrewed spacecraft, adds Christopher Morrison, who is leading EmberCore’s development. Such sources rely on the radioactive decay of an element that produces energy, as opposed to nuclear fission, which involves splitting atomic nuclei in a controlled chain reaction to release energy. Heat produced by radioactive decay is converted into electricity using thermoelectric devices.

Radioisotopes have provided heat and electricity for spacecraft since 1961. The Curiosity and Perseverance rovers on Mars, and deep-space missions including Cassini, New Horizons, and Voyager all use radioisotope batteries that rely on the decay of plutonium-238, which is nonfissile—unlike plutonium-239, which is used in weapons and power reactors.

For EmberCore, Ultra Safe Nuclear has instead turned to medical isotopes such as cobalt-60 that are easier and cheaper to produce. The materials start out inert, and have to be charged with neutrons to become radioactive. The company encapsulates the material in a proprietary ceramic for safety.

Cobalt-60 has a half-life of five years (compared to plutonium-238’s 90 years), which is enough for the cislunar missions that the DOD and NASA are looking at, Morrison says. He says that EmberCore should be able to provide 10 times as much power as a plutonium-238 system, providing over 1 million kilowatt-hours of energy using just a few pounds of fuel. “This is a technology that is in many ways commercially viable and potentially more scalable than plutonium-238,” he says.
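Radioisotope output falls off exponentially with half-life, which is the heart of the cobalt-60 versus plutonium-238 trade-off. A short illustrative sketch, using the half-lives quoted above:

    def fraction_remaining(t_years, half_life_years):
        """Fraction of a radioisotope's initial thermal output left after t years."""
        return 0.5 ** (t_years / half_life_years)

    for years in (1, 3, 5, 10):
        co60 = fraction_remaining(years, 5.0)     # cobalt-60, ~5-year half-life
        pu238 = fraction_remaining(years, 90.0)   # plutonium-238, ~90 years (figure quoted above)
        print(f"{years:>2} yr: Co-60 at {co60:.0%} of initial output, Pu-238 at {pu238:.0%}")

The same rapid decay that limits cobalt-60 to missions of a few years is what lets a small amount of it deliver far more power than plutonium-238 over that window.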

One downside of the medical isotopes is that they can produce high-energy X-rays in addition to heat. So Ultra Safe Nuclear wraps the fuel with a radiation-absorbing metal shield. But in the future, the EmberCore system could be designed for scientists to use the X-rays for experiments. “They buy this heater and get an X-ray source for free,” says Schilffarth. “We’ve talked with scientists who right now have to haul pieces of lunar or Martian regolith up to their sensor because the X-ray source is so weak. Now we’re talking about a spotlight that could shine down to do science from a distance.”

Ultra Safe Nuclear’s contract is one of two awarded by the DIU—which aims to speed up the deployment of commercial technology through military use—to develop nuclear power and propulsion for spacecraft. The other contract was awarded to Avalanche Energy, which is making a lunchbox-size fusion device it calls an Orbitron. The device will use electrostatic fields to trap high-speed ions in slowly changing orbits around a negatively charged cathode. Collisions between the ions can result in fusion reactions that produce energetic particles.

Both companies will use nuclear energy to power high-efficiency electric propulsion systems. Electric propulsion technologies such as ion thrusters, which use electromagnetic fields to accelerate ions and generate thrust, are more efficient than chemical rockets, which burn fuel. Solar panels typically power the ion thrusters that satellites use today to change their position and orientation. Schilffarth says that the higher power from EmberCore should allow a velocity change of 10 kilometers per second in orbit—more than today’s electric propulsion systems provide.
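To put that 10 km/s in perspective, the Tsiolkovsky rocket equation, delta-v = Isp x g0 x ln(m0/m1), shows what such a velocity change requires. The specific impulse and propellant fraction below are illustrative assumptions, not EmberCore or Orbitron specifications:

    import math

    g0 = 9.81               # standard gravity, m/s^2
    isp_s = 2_000           # assumed specific impulse of an electric thruster, seconds
    wet_mass_kg = 1_000.0   # arbitrary spacecraft mass
    propellant_kg = 400.0   # assumed propellant load, 40% of wet mass

    delta_v = isp_s * g0 * math.log(wet_mass_kg / (wet_mass_kg - propellant_kg))
    print(f"delta-v: {delta_v / 1000:.1f} km/s")   # ~10 km/s with these assumptions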

Ultra Safe Nuclear is also one of three companies developing nuclear fission thermal propulsion systems for NASA and the Department of Energy. Meanwhile, the Defense Advanced Research Projects Agency (DARPA) is seeking companies to develop a fission-based nuclear thermal rocket engine, with demonstrations expected in 2026.

This article appears in the August 2022 print issue as “Spacecraft to Run on Radioactive Decay.”


Match ID: 88 Score: 3.57 source: spectrum.ieee.org age: 171 days
qualifiers: 2.14 energy, 1.43 nuclear

Outside the Box: The Trump administration wants to discourage your 401(k) from including ESG investment options
Tue, 08 Sep 2020 22:11:24 GMT
Two proposed rules imply that investment managers promote social goals over sound investment analysis — but that’s not what actually happens.
Match ID: 89 Score: 3.57 source: www.marketwatch.com age: 810 days
qualifiers: 3.57 trump

A Destabilizing Hack-and-Leak Operation Hits Moldova
Sat, 19 Nov 2022 14:00:00 +0000
Plus: Google’s location snooping ends in a $391 million settlement, Russian code sneaks into US government apps, and the World Cup apps set off alarms.
Match ID: 90 Score: 2.86 source: www.wired.com age: 8 days
qualifiers: 2.86 russia

Russian Software Company Pretending to Be American
2022-11-16T11:03:27Z

Computer code developed by a company called Pushwoosh is in about 8,000 Apple and Google smartphone apps. The company pretends to be American when it is actually Russian.

According to company documents publicly filed in Russia and reviewed by Reuters, Pushwoosh is headquartered in the Siberian town of Novosibirsk, where it is registered as a software company that also carries out data processing. It employs around 40 people and reported revenue of 143,270,000 rubles ($2.4 mln) last year. Pushwoosh is registered with the Russian government to pay taxes in Russia...


Match ID: 91 Score: 2.86 source: www.schneier.com age: 11 days
qualifiers: 2.86 russia

‘Dark Ships’ Emerge From the Shadows of the Nord Stream Mystery
Fri, 11 Nov 2022 12:00:00 +0000
Satellite monitors discovered two vessels with their trackers turned off in the area of the pipeline prior to the suspected sabotage in September.
Match ID: 92 Score: 2.86 source: www.wired.com age: 16 days
qualifiers: 2.86 russia

Russia’s Sway Over Criminal Ransomware Gangs Is Coming Into Focus
Thu, 10 Nov 2022 19:40:12 +0000
Questions about the Kremlin’s relationships with these groups remain. But researchers are finally getting some answers.
Match ID: 93 Score: 2.86 source: www.wired.com age: 17 days
qualifiers: 2.86 russia

Russia’s New Cyberwarfare in Ukraine Is Fast, Dirty, and Relentless
Thu, 10 Nov 2022 15:15:56 +0000
Security researchers see updated tactics and tools—and a tempo change—in the cyberattacks Russia’s GRU military intelligence agency is inflicting on Ukraine.
Match ID: 94 Score: 2.86 source: www.wired.com age: 17 days
qualifiers: 2.86 russia

NASA’s Artemis I Revives the Moonshot
Sun, 28 Aug 2022 13:00:00 +0000



Update 5 Sept.: For now, NASA’s giant Artemis I remains on the ground after two launch attempts scrubbed by a hydrogen leak and a balky engine sensor. Mission managers say Artemis will fly when everything's ready—but haven't yet specified whether that might be in late September or in mid-October.

“When you look at the rocket, it looks almost retro,” said Bill Nelson, the administrator of NASA. “Looks like we’re looking back toward the Saturn V. But it’s a totally different, new, highly sophisticated—more sophisticated—rocket, and spacecraft.”

Artemis, powered by the Space Launch System rocket, is America’s first attempt to send astronauts to the moon since Apollo 17 in 1972, and technology has taken giant leaps since then. On Artemis I, the first test flight, mission managers say they are taking the SLS, with its uncrewed Orion spacecraft up top, and “stressing it beyond what it is designed for”—the better to ensure safe flights when astronauts make their first landings, currently targeted to begin with Artemis III in 2025.

But Nelson is right: The rocket is retro in many ways, borrowing heavily from the space shuttles America flew for 30 years, and from the Apollo-Saturn V.

Much of Artemis’s hardware is refurbished: Its four main engines, and parts of its two strap-on boosters, all flew before on shuttle missions. The rocket’s apricot color comes from spray-on insulation much like the foam on the shuttle’s external tank. And the large maneuvering engine in Orion’s service module is actually 40 years old—used on 19 space shuttle flights between 1984 and 1992.


Perhaps more important, the project inherits basic engineering from half a century of spaceflight. Just look at Orion’s crew capsule—a truncated cone, somewhat larger than the Apollo Command Module but conceptually very similar.

Old, of course, does not mean bad. NASA says there is no need to reinvent things engineers got right the first time.

“There are certain fundamental aspects of deep-space exploration that are really independent of money,” says Jim Geffre, Orion vehicle-integration manager at the Johnson Space Center in Houston. “The laws of physics haven’t changed since the 1960s. And capsule shapes happen to be really good for coming back into the atmosphere at Mach 32.”

Roger Launius, who served as NASA’s chief historian from 1990 to 2002 and as a curator at the Smithsonian Institution from then until 2017, tells of a conversation he had with John Casani, a veteran NASA engineer who managed the Voyager, Galileo, and Cassini probes to the outer planets.

“I have a name for missions that use too much new technology,” he recalls Casani saying. “Failures.”

The Artemis I flight is slated for about six weeks. (Apollo 11 lasted eight days.) The ship roughly follows Apollo’s path to the moon’s vicinity, but then puts itself in what NASA calls a distant retrograde orbit. It swoops within 110 kilometers of the lunar surface for a gravity assist, then heads 64,000 km out—taking more than a month but using less fuel than it would in closer orbits. Finally, it comes home, reentering the Earth’s atmosphere at 11 km per second, slowing itself with a heatshield and parachutes, and splashing down in the Pacific not far from San Diego.


“That extra time in space,” says Geffre, “allows us to operate the systems, give more time in deep space, and all those things that stress it, like radiation and micrometeoroids, thermal environments.”

There are, of course, newer technologies on board. Orion is controlled by two vehicle-management computers, each composed of two flight computer modules (FCMs) to handle guidance, navigation, propulsion, communications, and other systems. The flight control system, Geffre points out, is quad-redundant; if at any point one of the four FCMs disagrees with the others, it will take itself offline and, in a 22-second process, reset itself to make sure its outputs are consistent with the others’. If all four FCMs fail, there is a fifth, entirely separate computer running different code to get the spacecraft home.
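The article doesn’t spell out the voting logic, but the fail-operational behavior Geffre describes—an outvoted module takes itself offline and resets—can be sketched as a toy majority vote (hypothetical code, not Orion’s actual flight software):

    from collections import Counter

    def vote(fcm_outputs):
        """Toy majority vote across redundant flight computer modules (FCMs).

        fcm_outputs maps a module name to its computed command. The majority value
        is used; any module that disagrees is flagged to take itself offline and
        reset. A simplified illustration only.
        """
        majority, _ = Counter(fcm_outputs.values()).most_common(1)[0]
        resetting = [name for name, value in fcm_outputs.items() if value != majority]
        return majority, resetting

    command, resetting = vote({"FCM-1": 42.0, "FCM-2": 42.0, "FCM-3": 41.7, "FCM-4": 42.0})
    print(command, resetting)   # 42.0 ['FCM-3'] -- the outlier resets, the other three fly on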

Guidance and navigation, too, have advanced since the sextant used on Apollo. Orion uses a star tracker to determine its attitude, imaging stars and comparing them to an onboard database. And an optical navigation camera shoots Earth and the moon so that guidance software can determine their distance and position and keep the spacecraft on course. NASA says it’s there as backup, able to get Orion to a safe splashdown even if all communication with Earth has been lost.

But even those systems aren’t entirely new. Geffre points out that the guidance system’s architecture is derived from the Boeing 787. Computing power in deep space is limited by cosmic radiation, which can corrupt the output of microprocessors beyond the protection of Earth’s atmosphere and magnetic field.

Beyond that is the inevitable issue of cost. Artemis is a giant project, years behind schedule, started long before NASA began to buy other launches from companies like SpaceX and Rocket Lab. NASA’s inspector general, Paul Martin, testified to Congress in March that the first four Artemis missions would cost US $4.1 billion each—“a price tag that strikes us as unsustainable.”

Launius, for one, rejects the argument that government is inherently wasteful. “Yes, NASA’s had problems in managing programs in the past. Who hasn’t?” he says. He points out that Blue Origin and SpaceX have had plenty of setbacks of their own—they’re just not obliged to be public about them. “I could go on and on. It’s not a government thing per se and it’s not a NASA thing per se.”

So why return to the moon with—please forgive the pun—such a retro rocket? Partly, say those who watch Artemis closely, because it’s become too big to fail, with so much American money and brainpower invested in it. Partly because it turns NASA’s astronauts outward again, exploring instead of maintaining a space station. Partly because new perspectives could come of it. And partly because China and Russia have ambitions in space that threaten America’s.

“Apollo was a demonstration of technological verisimilitude—to the whole world,” says Launius. “And the whole world knew then, as they know today, that the future belongs to the civilization that can master science and technology.”

Update 7 Sept.: Artemis I has been on launchpad 39B, not 39A as previously reported, at Kennedy Space Center.


Match ID: 95 Score: 2.86 source: spectrum.ieee.org age: 91 days
qualifiers: 2.86 russia

DARPA Wants a Better, Badder Caspian Sea Monster
Thu, 19 May 2022 19:31:02 +0000


Arguably, the primary job of any military organization is moving enormous amounts of stuff from one place to another as quickly and efficiently as possible. Some of that stuff is weaponry, but the vast majority are things that support that weaponry—fuel, spare parts, personnel, and so on. At the moment, the U.S. military has two options when it comes to transporting large amounts of payload. Option one is boats (a sealift), which are efficient, but also slow and require ports. Option two is planes (an airlift), which are faster by a couple of orders of magnitude, but also expensive and require runways.

To solve this, the Defense Advanced Research Projects Agency (DARPA) wants to combine traditional sealift and airlift with the Liberty Lifter program, which aims to “design, build, and flight test an affordable, innovative, and disruptive seaplane” that “enables efficient theater-range transport of large payloads at speeds far exceeding existing sea lift platforms.”


DARPA is asking for a design like this to take advantage of ground effect, which occurs when an aircraft’s wing deflects air downward and proximity to the ground generates a cushioning effect due to the compression of air between the bottom of the wing and the ground. This boosts lift and lowers drag to yield a substantial overall improvement in efficiency. Ground effect works on both water and land, but you can take advantage of it for only so long on land before your aircraft runs into something. Which is why oceans are the ideal place for these aircraft—or ships, depending on your perspective.

From the 1960s through the late 1980s, the Soviets (and later the Russians) leveraged ground effect in the design of a handful of awesomely bizarre ships and aircraft. There’s the VVA-14, which was also an airplane, along with the vehicle featured in DARPA’s concept video, the Lun-class ekranoplan, which operated until the late 1990s. The video clip really does not do this thing justice, so here’s a better picture, taken a couple of years ago:

[Image: oblique overhead view of the Lun-class ekranoplan, a huge grey seaplane, on the water]

The Lun (only one was ever made) had a wingspan of 44 meters and was powered by eight turbojet engines. It flew about 4 meters above the water at speeds of up to 550 kilometers per hour, and could transport almost 100,000 kilograms of cargo for 2,000 km. It was based on an earlier, even larger prototype (the largest aircraft in the world at the time) that the CIA spotted in satellite images in 1967 and which seems to have seriously freaked them out. It was nicknamed the Caspian Sea Monster, and it wasn’t until the 1980s that the West understood what it was and how it worked.

In the mid 1990s, DARPA itself took a serious look at a stupendously large ground-effect vehicle of its own, the Aerocon Dash 1.6 wingship. The concept image below is of a 4.5-million-kg vehicle, 175 meters long with a 100-meter wingspan, powered by 20 (!) jet engines:

[Image: black-and-white wireframe drawing of the Aerocon Dash 1.6 wingship, a huge streamlined aircraft]

With a range of almost 20,000 km at over 700 km/h, the wingship could have carried 3,000 passengers or 1.4 million kg of cargo. By 1994, though, DARPA had decided that the potential billion-dollar project to build a wingship like this was too risky, and canceled the whole thing.

[Image: concept rendering of a massive grey seaplane skimming over the ocean]

Less than 10 years later, Boeing’s Phantom Works started exploring an enormous ground-effect aircraft, the Pelican Ultra Large Transport Aircraft. The Pelican would have been even larger than the Aerocon wingship, with a wingspan of 152 meters and a payload of 1.2 million kg—that’s about 178 shipping containers’ worth. Unlike the wingship, the Pelican would take advantage of ground effect to boost efficiency only in transit above water, but would otherwise use runways like a normal aircraft and be able to reach flight altitudes of 7,500 meters. Operating as a traditional aircraft and with an optimal payload, the Pelican would have a range of about 12,000 km. In ground effect, however, the range would have increased to 18,500 km, illustrating the appeal of designs like these. But Boeing dropped the project in 2005 to focus on lower cost, less risky options.

We’d be remiss if we didn’t at least briefly mention two other massive aircraft: the H-4 Hercules, the cargo seaplane built by Hughes Aircraft Co. in the 1940s, and the Stratolaunch carrier aircraft, which features a twin-fuselage configuration that DARPA seems to be favoring in its concept video for some reason.

From the sound of DARPA’s announcement, they’re looking for something a bit more like the Pelican than the Aerocon Dash or the Lun. DARPA wants the Liberty Lifter to be able to sustain flight out of ground effect if necessary, although it’s expected to spend most of its time over water for efficiency. It won’t use runways on land at all, though, and should be able to stay out on the water for 4 to 6 weeks at a time, operating even in rough seas—a significant challenge for ground-effect aircraft.

DARPA is looking for an operational range of 7,500 km, with a maximum payload of at least 90,000 kg, including the ability to launch and recover amphibious vehicles. The hardest thing DARPA is asking for could be that, unlike most other X-planes, the Liberty Lifter should incorporate a “low cost design and construction philosophy” inspired by the mass-produced Liberty ships of World War II.

With US $15 million to be awarded to up to two Liberty Lifter concepts, DARPA is hoping that at least one of those concepts will pass a system-level critical design review in 2025. If everything goes well after that, the first flight of a full-scale prototype vehicle could happen as early as 2027.

This article appears in the September 2022 print issue as “DARPA Reincarnates Soviet-Era Sea Monster.”


Match ID: 96 Score: 2.86 source: spectrum.ieee.org age: 192 days
qualifiers: 2.86 russia

U.N. Kills Any Plans to Use Mercury as a Rocket Propellant
Tue, 19 Apr 2022 18:00:01 +0000


A recent United Nations provision has banned the use of mercury in spacecraft propellant. Although no private company has actually used mercury propellant in a launched spacecraft, the possibility was alarming enough—and the dangers extreme enough—that the ban was enacted just a few years after one U.S.-based startup began toying with the idea. Had the company gone through with its intention to sell mercury propellant thrusters to some of the companies building massive satellite constellations over the coming decade, it would have resulted in Earth’s upper atmosphere being laced with mercury.

Mercury is a neurotoxin. It’s also bio-accumulative, which means it’s absorbed by the body at a faster rate than the body can remove it. The most common way to get mercury poisoning is through eating contaminated seafood. “It’s pretty nasty,” says Michael Bender, the international coordinator of the Zero Mercury Working Group (ZMWG). “Which is why this is one of the very few instances where the governments of the world came together pretty much unanimously and ratified a treaty.”

Bender is referring to the 2013 Minamata Convention on Mercury, a U.N. treaty named for a city in Japan whose residents suffered from mercury poisoning from a nearby chemical factory for decades. Because mercury pollutants easily find their way into the oceans and the atmosphere, it’s virtually impossible for one country to prevent mercury poisoning within its borders. “Mercury—it’s an intercontinental pollutant,” Bender says. “So it required a global treaty.”

Today, the only remaining permitted uses for mercury are in fluorescent lighting and dental amalgams, and even those are being phased out. Mercury is otherwise found as a by-product of other processes, such as the burning of coal. But then a company hit on the idea to use it as a spacecraft propellant.

In 2018, an employee at Apollo Fusion approached the Public Employees for Environmental Responsibility (PEER), a nonprofit that investigates environmental misconduct in the United States. The employee—who has remained anonymous—alleged that the Mountain View, Calif.–based space startup was planning to build and sell thrusters that used mercury propellant to multiple companies building low Earth orbit (LEO) satellite constellations.

Four industry insiders ultimately confirmed that Apollo Fusion was building thrusters that utilized mercury propellant. Apollo Fusion, which was acquired by rocket manufacturing startup Astra in June 2021, insisted that the composition of its propellant mixture should be considered confidential information. The company withdrew its plans for a mercury propellant in April 2021. Astra declined to respond to a request for comment for this story.

Apollo Fusion wasn’t the first to consider using mercury as a propellant. NASA originally tested it in the 1960s and 1970s with two Space Electric Rocket Test (SERT) missions, one of which was sent into orbit in 1970. Although the tests demonstrated mercury’s effectiveness as a propellant, the same concerns over the element’s toxicity that have seen it banned in many other industries halted its use by the space agency as well.

“I think it just sort of fell off a lot of folks’ radars,” says Kevin Bell, the staff counsel for PEER. “And then somebody just resurrected the research on it and said, ‘Hey, other than the environmental impact, this was a pretty good idea.’ It would give you a competitive advantage in what I imagine is a pretty tight, competitive market.”

That’s presumably why Apollo Fusion was keen on using it in their thrusters. Apollo Fusion as a startup emerged more or less simultaneously with the rise of massive LEO constellations that use hundreds or thousands of satellites in orbits below 2,000 kilometers to provide continual low-latency coverage. Finding a slightly cheaper, more efficient propellant for one large geostationary satellite doesn’t move the needle much. But doing the same for thousands of satellites that need to be replaced every several years? That’s a much more noticeable discount.

Were it not for mercury’s extreme toxicity, it would actually make an extremely attractive propellant. Apollo Fusion wanted to use a type of ion thruster called a Hall-effect thruster. Ion thrusters strip electrons from the atoms that make up a liquid or gaseous propellant, and then an electric field pushes the resultant ions away from the spacecraft, generating a modest thrust in the opposite direction. The physics of rocket engines means that the performance of these engines increases with the mass of the ion that you can accelerate.

Mercury is heavier than either xenon or krypton, the most commonly used propellants, meaning more thrust per expelled ion. It’s also liquid at room temperature, making it efficient to store and use. And it’s cheap—there’s not a lot of competition with anyone looking to buy mercury.
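The heavy-ion advantage drops out of the basic physics: an ion of charge q and mass m accelerated through a voltage V leaves at v = sqrt(2qV/m), so the momentum it carries, m*v = sqrt(2qVm), grows with the square root of its mass. A quick comparison for singly charged ions (the 300-volt accelerating potential is an assumed, illustrative value):

    import math

    E_CHARGE = 1.602e-19     # elementary charge, coulombs
    AMU = 1.661e-27          # atomic mass unit, kilograms
    VOLTAGE = 300.0          # assumed accelerating potential, volts (illustrative)

    def momentum_per_ion(mass_amu):
        """Momentum imparted per expelled, singly charged ion, in kg*m/s."""
        m = mass_amu * AMU
        return m * math.sqrt(2 * E_CHARGE * VOLTAGE / m)

    for name, mass in [("krypton", 83.8), ("xenon", 131.3), ("mercury", 200.6)]:
        print(f"{name:8s}: {momentum_per_ion(mass):.2e} kg*m/s per ion")
    # Mercury carries roughly 24 percent more momentum per ion than xenon at the
    # same voltage, hence more thrust per expelled ion.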

Bender says that ZMWG, alongside PEER, caught wind of Apollo Fusion marketing its mercury-based thrusters to at least three companies deploying LEO constellations—One Web, Planet Labs, and SpaceX. Planet Labs, an Earth-imaging company, has at least 200 CubeSats in low Earth orbit. One Web and SpaceX, both wireless-communication providers, have many more. One Web plans to have nearly 650 satellites in orbit by the end of 2022. SpaceX already has nearly 1,500 active satellites aloft in its Starlink constellation, with an eye toward deploying as many as 30,000 satellites before its constellation is complete. Other constellations, like Amazon’s Kuiper constellation, are also planning to deploy thousands of satellites.

In 2019, a group of researchers in Italy and the United States estimated how much of the mercury used in spacecraft propellant might find its way back into Earth’s atmosphere. They figured that a hypothetical LEO constellation of 2,000 satellites, each carrying 100 kilograms of propellant, would emit 20 tonnes of mercury every year over the course of a 10-year life span. Three quarters of that mercury, the researchers suggested, would eventually wind up in the oceans.

That amounts to 1 percent of global mercury emissions from a constellation only a fraction of the size of the one planned by SpaceX alone. And if multiple constellations adopted the technology, they would represent a significant percentage of global mercury emissions—especially, the researchers warned, as other uses of mercury are phased out as planned in the years ahead.
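The study’s estimate reduces to simple arithmetic; here is a sketch of it (the roughly 2,000 tonnes per year of global anthropogenic mercury emissions used for the percentage is an outside assumption, not a figure from the study):

    # Sketch of the 2019 estimate described above (illustrative arithmetic only).
    satellites = 2_000
    propellant_per_sat_kg = 100
    constellation_lifespan_years = 10

    emitted_t_per_year = satellites * propellant_per_sat_kg / constellation_lifespan_years / 1_000
    to_oceans_t_per_year = 0.75 * emitted_t_per_year       # three-quarters reaches the oceans

    global_emissions_t_per_year = 2_000                     # assumed, for the percentage only
    share = emitted_t_per_year / global_emissions_t_per_year
    print(f"{emitted_t_per_year:.0f} t/yr emitted, {to_oceans_t_per_year:.0f} t/yr to the oceans, "
          f"~{share:.0%} of global emissions")               # 20 t/yr, 15 t/yr, ~1%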

Fortunately, it’s unlikely that any mercury propellant thrusters will even get off the ground. Prior to the fourth meeting of the Minamata Convention, Canada, the European Union, and Norway highlighted the dangers of mercury propellant, alongside ZMWG. The provision to ban mercury usage in satellites was passed on 26 March 2022.

The question now is enforcement. “Obviously, there aren’t any U.N. peacekeepers going into space to shoot down” mercury-based satellites, says Bell. But the 137 countries, including the United States, who are party to the convention have pledged to adhere to its provisions—including the propellant ban.

The United States is notable in that list because as Bender explains, it did not ratify the Minamata Convention via the U.S. Senate but instead deposited with the U.N. an instrument of acceptance. In a 7 November 2013 statement (about one month after the original Minamata Convention was adopted), the U.S. State Department said the country would be able to fulfill its obligations “under existing legislative and regulatory authority.”

Bender says the difference is “weedy” but that this appears to mean that the U.S. government has agreed to adhere to the Minamata Convention’s provisions because it already has similar laws on the books. Except there is still no existing U.S. law or regulation banning mercury propellant. For Bender, that creates some uncertainty around compliance when the provision goes into force in 2025.

Still, with a U.S. company being the first startup to toy with mercury propellant, it might be ideal to have a stronger U.S. ratification of the Minamata Convention before another company hits on the same idea. “There will always be market incentives to cut corners and do something more dangerously,” Bell says.

Update 19 April 2022: In an email, a spokesperson for Astra stated that the company's propulsion system, the Astra Spacecraft Engine, does not use mercury. The spokesperson also stated that Astra has no plans to use mercury propellant and that the company does not have anything in orbit that uses mercury.

Updated 20 April 2022 to clarify that Apollo Fusion was building thrusters that used mercury, not that they had actually used them.


Match ID: 97 Score: 2.86 source: spectrum.ieee.org age: 222 days
qualifiers: 2.86 italy

New Records for the Biggest and Smallest AI Computers
Thu, 17 Nov 2022 16:28:08 +0000


The machine-learning consortium MLCommons released the latest set of benchmark results last week, offering a glimpse at the capabilities of new chips and old as they tackled executing lightweight AI on the tiniest systems and training neural networks at both server and supercomputer scales. The benchmark tests saw the debut of new chips from Intel and Nvidia as well as speed boosts from software improvements and predictions that new software will play a role in speeding the new chips in the years after their debut.


Training Servers

Training AI has been a problem that’s driven billions of dollars in investment, and it seems to be paying off. “A few years ago we were talking about training these networks in days or weeks, now we’re talking about minutes,” says Dave Salvator, director of product marketing at Nvidia.

There are eight benchmarks in the MLPerf training suite, but here I’m showing results from just two—image classification and natural-language processing—because although they don’t give a complete picture, they’re illustrative of what’s happening. Not every company puts up benchmark results every time; in the past, systems from Baidu, Google, Graphcore, and Qualcomm have made marks, but none of these were on the most recent list. And there are companies whose goal is to train the very biggest neural networks, such as Cerebras and SambaNova, that have never participated.

Another note about the results I’m showing—they are incomplete. To keep the eye glazing to a minimum, I’ve listed only the fastest system of each configuration. There were already four categories in the main “closed” contest: cloud (self-evident), on premises (systems you could buy and install in-house right now), preview (systems you can buy soon but not now), and R&D (interesting but odd, so I excluded them). I then listed the fastest training result for each category for each configuration—the number of accelerators in a computer. If you want to see the complete list, it’s at the MLCommons website.
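The filtering described above amounts to grouping the raw results by category and accelerator count and keeping the minimum time in each group; a sketch of that reduction (the records and field names here are hypothetical, not MLCommons’s actual schema):

    # Keep only the fastest submission per (category, accelerator count).
    # Field names and values are hypothetical; MLCommons's result files differ.
    results = [
        {"category": "cloud",       "accelerators": 8, "system": "A", "minutes": 30.1},
        {"category": "cloud",       "accelerators": 8, "system": "B", "minutes": 28.4},
        {"category": "on-premises", "accelerators": 8, "system": "C", "minutes": 27.9},
    ]

    fastest = {}
    for r in results:
        key = (r["category"], r["accelerators"])
        if key not in fastest or r["minutes"] < fastest[key]["minutes"]:
            fastest[key] = r

    for (category, accelerators), r in sorted(fastest.items()):
        print(category, accelerators, r["system"], r["minutes"])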

A casual glance shows that machine-learning training is still very much Nvidia’s house. It can bring a supercomputer-scale number of GPUs to the party to smash through training problems in mere seconds. Its A100 GPUs have dominated the MLPerf list for several iterations now, and it powers Microsoft’s Azure cloud AI offerings as well as systems large and small from partners including Dell, HPE, and Fujitsu. But even among the A100 gang there’s real competition, particularly between Dell and HPE.

But perhaps more important was Azure’s standing. On image classification, the cloud systems were essentially a match for the best A100 on-premises computers. The results strengthen Microsoft’s case that renting resources in the cloud is as good as buying your own. And that case might be even stronger soon. This week Nvidia and Microsoft announced a multiyear collaboration that would see the inclusion of Nvidia’s upcoming GPU, the H100, in the Azure cloud.

This was the first peek at training abilities for the H100. And Nvidia’s Dave Salvator emphasized how much progress happens—largely due to software improvements—in the years after a new chip comes out. On a per-chip basis, the A100 delivers 2.5 times the average performance today versus its first run at the MLPerf benchmarks in 2020. Compared to A100’s debut scores, H100 delivered 6.7 times the speed. But compared to A100 with today’s software, the gain is only 2.6-fold.
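Those three speedup figures are mutually consistent, as a quick check shows:

    h100_vs_a100_debut = 6.7    # H100 today vs. the A100's 2020 debut scores
    a100_today_vs_debut = 2.5   # software gains on the A100 since its debut
    print(h100_vs_a100_debut / a100_today_vs_debut)   # ~2.7, close to the quoted 2.6-fold
    # (the small gap comes from rounding in the published figures)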

In a way, H100 seems a bit overpowered for the MLPerf benchmarks, tearing through most of them in minutes using a fraction of the A100 hardware needed to match it. And in truth, it is meant for bigger things. “H100 is our solution for the most advanced models where we get into the millions, even billions of hyperparameters,” says Salvator.

Salvator says a lot of the gain is from the H100’s “transformer engine.” Essentially, it’s the intelligent use of low-precision—efficient but less accurate—computations whenever possible. The scheme is particularly designed for neural networks called transformers, of which the natural language processing benchmark BERT is an example. Transformers are in the works for many other machine learning tasks. “Transformer-based networks have been literally transformative to AI,” says Salvator. “It’s a horrible pun.”

Memory is a bottleneck for all sorts of AI, but it’s particularly limiting in BERT and other transformer models. Such neural networks rely on a quality called “attention.” You can think of it as how many words a language processor is aware of at once. It doesn’t scale up well, largely because it leads to a huge increase in writing to system memory. Earlier this year Hazy Research (the name for Chris Re’s lab at Stanford) deployed an algorithm to an Azure cloud system that shaved 10 percent of the training time off Microsoft’s best effort. For this round, Azure and Hazy Research worked together to demonstrate the algorithm—called Flash Attention.

Both the image-classification and natural-language-processing tables show Intel’s competitive position. The company showed results for the Habana Gaudi2, its second generation AI accelerator, and the Sapphire Rapids Xeon CPU, which will be commercially available in the coming months. For the latter, the company was out to prove that you can do a lot of machine-learning training without a GPU.

A setup with 32 CPUs landed well behind a Microsoft Azure cloud-based system with only four GPUs on object recognition, but it still finished in less than an hour and a half, and for natural-language processing, it nearly matched that Azure system. In fact, none of the training took longer than 90 minutes, even on much more modest CPU-only computers.

“This is for customers for whom training is part of the workload, but it’s not the workload,” says Jordan Plawner, an Intel senior director and AI product manager. Intel is reasoning that if a customer is retraining only once a week, whether the work takes 30 minutes or 5 minutes is of too little importance for them to spend on a GPU accelerator they don’t need for the rest of the week.

Habana Gaudi2 is a different story. As the company’s dedicated machine-learning accelerator, the 7-nanometer chip goes up against Nvidia’s A100 (another 7-nm chip) and soon will face the 5-nm H100. In that light, it performed well on certain tests. On image classification, an eight-chip system landed only a couple of minutes behind an eight-chip H100. But the gap was much wider with the H100 at the natural-language-processing task, though it still narrowly bested an equal-size and Hazy-Research-enhanced A100 system.

“We’re not done with Gaudi 2,” says Habana’s Eitan Medina. Like others, Habana is hoping to speed learning by strategically using low-precision computations on certain layers of neural networks. The chip has 8-bit floating-point capabilities, but so far the smallest precision the company has engaged on the chip for MLPerf training purposes is bfloat 16.


Training Supercomputers

MLCommons released results for training high-performance computers—supercomputers and other big systems—at the same time as those for training servers. The HPC benchmarks are not as established and have fewer participants, but they still give a snapshot of how machine learning is done in the supercomputing space and what the goals are. There are three benchmarks: CosmoFlow estimates physical quantities from cosmological image data; DeepCAM spots hurricanes and atmospheric rivers in climate simulation data; and OpenCatalyst predicts the energy levels of molecular configurations.

There are two ways to measure systems on these benchmarks. One is to run a number of instances of the same neural network on the supercomputer, and the other is to just throw a bunch of resources at a single instance of the problem and see how long it takes. The table below is the latter and just for CosmoFlow, because it’s much simpler to read. (Again, feel free to view the whole schemozzle at MLCommons.)

The CosmoFlow results show four supercomputers powered by as many different types of CPU architectures and two types of GPU. Three of the four were accelerated by Nvidia GPUs, but Fugaku, the second most powerful computer in the world, used only its own custom-built processor, the Fujitsu A64FX.

The MLPerf HPC benchmarks came out only the week before Supercomputing 2022, in Dallas, one of the two conferences at which new Top500 rankings of supercomputers are announced.

A separate benchmark for supercomputing AI has also been developed. Instead of training particular neural networks, it solves “a system of linear equations using novel, mixed-precision algorithms that exploit modern hardware.” Although results from the two benchmarks don’t line up, there is overlap between the HPL-MxP list and the CosmoFlow results, including Nvidia’s Selene, Riken’s Fugaku, and Germany’s JUWELS.


Tiny ML systems

The latest addition to the MLPerf effort is a suite of benchmarks designed to test the speed and energy efficiency of microcontrollers and other small chips that execute neural networks for keyword spotting and other low-power, always-on tasks. MLPerf Tiny, as it’s called, is too new for real trends to have emerged in the data. But the results released so far show a couple of standouts. The table here shows the fastest “visual wake words” results for each type of processor, and shows that Syntiant and GreenWaves Technologies have an edge over the competition.


Match ID: 98 Score: 2.14 source: spectrum.ieee.org age: 10 days
qualifiers: 2.14 energy

Top500: Frontier Still No. 1. Where’s China?
Tue, 15 Nov 2022 21:23:54 +0000


The latest list of the world’s most powerful supercomputers reveals that Frontier, at Oak Ridge National Lab, in Tennessee, has stayed on top. The newly released Top500 list could arguably be seen as a temporary object lesson in stasis, while still pointing toward future aspirants and aspiring countries who could one day challenge Frontier’s crown.

With a performance of 1.1 exaflops, or 1.1 quintillion floating-point operations per second, Frontier was the first machine to break the exascale barrier, a threshold of a billion billion calculations per second. It is still the only exascale supercomputer announced to date, according to this week’s ranking of the world’s fastest supercomputers.

Frontier, which is based on the latest HPE Cray EX235a architecture and boasts more than 8.7 million AMD cores, remains powerful enough to perform more than twice as well as the No. 2 machine, Fugaku, at the Riken Center for Computational Science, in Japan. Fugaku had led the Top500 list for two years until Frontier ousted it in June. Running more than 7.6 million Fujitsu cores, Fugaku’s performance, at 442 petaflops, joins Frontier in posting an unchanged compute speed since June’s Top500 list.

“Frontier is a first-of-a-kind supercomputer comprised of a hybrid architecture to run calculations at an unprecedented speed,” says Justin Whitt, program director for the U.S. Department of Energy’s Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory. “Our experienced team of technical staff and vendor partners worked tirelessly for Frontier to achieve the world’s first exascale performance on the Linpack benchmark as reported in May 2022.” Whitt cited three projects that have used Frontier since its unveiling to earn finalist status for the Gordon Bell prize, to be awarded at the Supercomputing 2022 conference in Dallas this week.

In third place on the Top500 list is the LUMI system in Finland, which uses an HPE Cray EX235a architecture and harnesses some 2.2 million AMD cores. LUMI has doubled its performance since June—to 309 petaflops—and remains the most powerful supercomputer in Europe.

Judging by the Top500 list alone, China’s fastest entry, Sunway TaihuLight, trails far behind Frontier, clocking in at just 93 petaflops across its more than 10 million cores. (That’s just 8 percent of Frontier’s speed.) However, China’s HPC ambitions appear to be less than fully expressed on the otherwise gold-standard Top500 list.
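As a quick sanity check on those ratios, here is a minimal Python sketch using only the Linpack figures quoted in this article (1.1 exaflops for Frontier, 442 petaflops for Fugaku, 93 petaflops for Sunway TaihuLight):

    # Ratio check using only the performance figures quoted above.
    frontier_pflops = 1100   # 1.1 exaflops, expressed in petaflops
    fugaku_pflops = 442
    taihulight_pflops = 93

    print(f"Frontier vs. Fugaku: {frontier_pflops / fugaku_pflops:.1f}x")         # ~2.5x
    print(f"TaihuLight vs. Frontier: {taihulight_pflops / frontier_pflops:.1%}")  # ~8%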

Absent from the Top500 (though still vying for the Gordon Bell prize) is China’s OceanLight system, which by all available measures appears to aspire to exascale performance—though its capabilities remain unverified by the usual international yardstick of a Top500 submission. In March, the tech website The Next Platform drew on a paper whose coauthors come from institutions including the Alibaba Group and Tsinghua University to conclude that OceanLight is at least theoretically capable of attaining peak speeds of 2.3 exaflops.

However, that is only an estimate—a tantalizing glimpse, at best, into supercomputing’s competitive exascale future. It is also a future the U.S. Commerce Department is hoping to have a hand in curtailing. On 7 October, the department’s Bureau of Industry and Security issued export controls that sharply restrict the sale of high-performance computing chips to the People’s Republic, a measure intended to check China’s highest supercomputing and AI ambitions. Nvidia, for one, has already released a chip for possible use in Chinese supercomputers that complies with the scaled-back U.S. restrictions.

“Our actions will protect U.S. national security and foreign policy interests while also sending a clear message that U.S. technological leadership is about values as well as innovation,” said Assistant Secretary of Commerce for Export Administration Thea D. Rozman Kendler, in a prepared statement at the time of the export control ban.

Meanwhile, back on the Top500 list, the only newcomer in the top 10 is Leonardo, at the Italian supercomputing consortium Cineca. Leonardo is based on the Atos BullSequana XH2000 architecture and has more than 1.4 million Intel Xeon cores. With a performance of 174.6 petaflops, Leonardo is the fourth-fastest supercomputer in the world, knocking the bottom seven entries in the previous top 10 list down a peg.

Frontier was ranked first on the last Green500 list, which measures supercomputing energy efficiency. However, it now ranks second to Henri at the Flatiron Institute, in New York. Whereas Frontier achieves 62.68 gigaflops per watt, Henri reaches about 65 gigaflops per watt. However, Henri is a far more modest machine, with only 5,920 Intel Xeon cores.
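Combining the Green500 efficiency figure with the Linpack result above gives a rough sense of Frontier’s power draw during the benchmark. This is only a back-of-the-envelope estimate—the Green500 measurement may come from a differently configured run—but it shows how the two numbers relate:

    # Back-of-the-envelope power estimate: Linpack performance / energy efficiency.
    frontier_gflops = 1.1e9            # 1.1 exaflops, expressed in gigaflops
    frontier_gflops_per_watt = 62.68   # Green500 efficiency figure quoted above

    frontier_megawatts = frontier_gflops / frontier_gflops_per_watt / 1e6
    print(f"Frontier draws roughly {frontier_megawatts:.0f} MW during Linpack")  # ~18 MW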

The world’s most powerful supercomputers continue to get faster. The entry point for the top 100 increased to about 10 petaflops, up from 5.39 petaflops as of June. The last system on the newest Top500 list sat at position 460 five months ago.

Intel continues to provide the processors for the largest share of Top500 computers—75.8 percent of systems, down from 81.6 percent a year ago. In contrast, AMD is making gains, powering 20.2 percent of the systems on the current list, up from 14.6 percent a year ago.


Match ID: 99 Score: 2.14 source: spectrum.ieee.org age: 12 days
qualifiers: 2.14 energy

New Ethernet Cyberattack Crunches Critical Systems
Tue, 15 Nov 2022 16:00:01 +0000


Ethernet, like all communication networks, continues to get faster. In recent years, a subset of Ethernet networks, called Time-Triggered Ethernet (TTE), has emerged. TTE networks are built around precise signal timing, using redundant pathways and careful switching to ensure that information arrives exactly when it needs to. Since its introduction, TTE has found a home in flight-critical aircraft systems, NASA spacecraft, and power-generating wind turbines, to name a few applications.

For such systems, the consequences of a TTE network falling out of sync could be catastrophic. Now, researchers have described a first-of-its-kind attack, called PCspooF, that can do just that. The group will present their work at the IEEE Symposium on Security and Privacy in May 2023.


IEEE Spectrum spoke with Andrew Loveless and Baris Kasikci, two of the researchers who discovered the attack, about how PCspooF works, what it means for timing-sensitive networks, and some general ways in which the attack can be blocked. The following conversation has been lightly edited for clarity.


To start from the beginning: Time-Triggered Ethernet has been around for a few years at this point. So where did it come from and what is it used for?

Andrew Loveless

Andrew Loveless: The commercial TTE technology used in practice today evolved from the academic TT-Ethernet project conducted at TU Wien in the early 2000s. The goal of the project was to take the concept of “time-triggered” communication used in earlier bus-based protocols like TTP/C and apply them to switched Ethernet. That way users got the benefits of both worlds—the determinism of time-triggered communication and the high data rates and widespread availability of Ethernet.

To our knowledge, the protocol was first commercialized by TTTech and Honeywell around 2008. GE Fanuc also started developing TTE products for the aviation domain around that time.

It’s hard to identify the first use of TTE for a mission-critical system. One of the earliest seems to be by Sikorsky, which started working with TTE technology in 2008 and used it in the S-97 Raider helicopter, which first flew in 2015. As of 2009, there were also articles published reporting that NASA and Lockheed Martin were using TTE for the Orion Crew Exploration Vehicle, which was originally part of the Constellation Program and is now being used for Artemis. Vestas seemingly started working with TTE for wind turbine control applications around 2010 and has since used TTE in deployed turbines.

Today, TTE is used in a range of mission-critical and safety-critical systems and vehicles.


How do these TTE networks differ from the kinds of “best effort” networks that people are likely more familiar with?

Loveless: For context, there is a push in industry right now for large embedded and cyberphysical systems to adopt mixed-criticality networks. This means that instead of using separate networks and buses for noncritical devices and critical devices, there is a push to have one network that both critical and noncritical devices can share. This approach has a lot of benefits, including lower size, weight, and power—in general, there are fewer cables and switches—and lower development time and costs, because engineers can focus on using just one technology.

Time-Triggered Ethernet (TTE) is one networking technology that is a part of this trend. Some others include Time Sensitive Networking, SpaceWire and SpaceFibre, RapidIO, AFDX, and more. TTE has multiple different traffic classes that are used over the same network. The traffic class with the highest criticality and priority is time-triggered, which means that the traffic is exactly prescheduled ahead of time to have specific timing properties—for example, to avoid contention in the switch and guarantee a certain worst-case latency and jitter—and is sent simultaneously over redundant network paths. That way the designer can be sure their messages are guaranteed to arrive at the destination successfully and on time. So the TTE technology has this time-triggered traffic class, which is also commonly referred to as TTE.

In addition, TTE technology supports a best-effort traffic class, which just means that the traffic follows the rules of standard Ethernet. So here the traffic is not redundant, and is not scheduled in time to have certain timing properties. The traffic is referred to as best effort because the TTE network does not provide any guarantees for it—it simply forwards best-effort traffic when possible between the critical TTE messages. Best-effort traffic is primarily used for allowing noncritical COTS devices to share the TTE network with the critical systems.
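To make the idea of exactly prescheduled traffic concrete, here is a toy Python sketch—not TTE’s actual scheduling algorithm, and the flow names and timing values are invented for illustration—showing how a static time-triggered schedule reserves fixed windows inside a repeating cycle, with best-effort frames forwarded only in the gaps:

    # Toy illustration of a static time-triggered schedule (not the real TTE/AS6802 scheduler).
    CYCLE_US = 10_000  # the schedule repeats every 10 ms (arbitrary example value)

    tt_schedule = [
        # (flow name, offset within cycle in µs, reserved window in µs) -- invented values
        ("flight_control", 0,     200),
        ("sensor_bus",     2_500, 300),
        ("actuators",      5_000, 200),
    ]

    def window_for(now_us: int):
        """Return the critical flow whose reserved window covers this instant, if any."""
        t = now_us % CYCLE_US
        for name, offset, length in tt_schedule:
            if offset <= t < offset + length:
                return name
        return None  # a gap in the schedule: best-effort frames may be forwarded here

    for t in (100, 2_600, 7_000):
        owner = window_for(t)
        print(t, "->", owner or "best-effort slot")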

You mentioned the industry push for mixed-criticality networks and the benefits they bring (in cost, development time, energy consumption, and so on). It seems that in an ideal world, TTE networks wouldn’t share infrastructure with best-effort traffic. Is it as simple as that’s just not practical to do in most cases?

Loveless: Lots of COTS devices do not have the ability to generate time-triggered traffic. So, if one wants to include COTS devices in their system, they generally need to support standard (best-effort) Ethernet. In that case, I think the ideal is to have one network that provides perfect isolation between TT and BE traffic flows and allows all TT traffic to meet their timing requirements. In other words, that single network provides the illusion that each TT traffic flow is being carried over a private point-to-point channel from the sender to the receivers. That way, one gets all the safety and security benefits of having completely isolated networks, while only having the cost of a single network.

What exactly is PCspooF, and how does it affect TTE networks?

Loveless: PCspooF is a new attack on TTE networks. It allows a single Ethernet device, such as a best-effort device, with a small amount of malicious circuitry to briefly tear down synchronization of the TTE network. While this happens, it prevents the critical TTE devices from being able to communicate. [As a result], messages that were sent get dropped. The duration of this effect after each successful attack is around one-half second to one full second.

Importantly, the attack allows the attacker to tear down synchronization on all redundant TTE network planes, even if the attacker is connected to only one of the planes. Also, the attack can be successfully repeated at a high rate (as often as every 10 to 15 seconds).
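Taken together, those two figures bound how much of the time an attacker could keep a TTE network out of sync; a rough calculation using only the numbers given above:

    # Fraction of time the network could be left unsynchronized, per the figures above.
    outage_s = (0.5, 1.0)    # loss of sync per successful attack, in seconds
    repeat_s = (10.0, 15.0)  # how often the attack can be repeated, in seconds

    best = outage_s[0] / repeat_s[1]    # 0.5 s outage every 15 s
    worst = outage_s[1] / repeat_s[0]   # 1 s outage every 10 s
    print(f"roughly {best:.0%} to {worst:.0%} of the time out of sync")  # ~3% to 10%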


You mentioned PCspooF is a new attack. Is it also accurate that it’s the first attack to exploit TTE networks?

Loveless: Yes. To our knowledge, PCspooF is the first attack to compromise any of TTE’s guarantees.

So how does PCspooF work? What vulnerability or flaw is it exploiting?

Loveless: There are two main vulnerabilities that PCspooF exploits. The first is a vulnerability in Ethernet itself, which is that if an Ethernet switch experiences a sudden reset while a frame is in the middle of being forwarded, the front of that frame can be cut off, and the remainder of the frame will still be sent. PCspooF uses electromagnetic interference to cause this to happen in TTE switches. The attacker stores a malicious message inside a benign frame, sends the frame, then conducts EMI into the switch. The switch then strips the header off the frame and reveals the malicious frame. This mechanism allows the attacker to send a malicious frame that they otherwise should not be allowed to send.
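The header-stripping trick builds on earlier packet-in-packet research, which the researchers mention later in this interview: the attacker hides a complete Ethernet frame inside the payload of a benign frame, and the hidden frame only becomes meaningful if the bytes in front of it are lost. Here is a minimal sketch of that framing idea using the Scapy library; the MAC addresses, EtherType, and payload are placeholders rather than real TTE protocol values, and the electromagnetic-interference step that actually truncates the outer frame obviously cannot be reproduced in software:

    from scapy.all import Ether, Raw

    # Hypothetical inner frame the attacker wants the switch to emit. The EtherType
    # and MAC addresses are placeholders, not real TTE protocol-control values.
    inner = Ether(dst="ff:ff:ff:ff:ff:ff", src="02:00:00:00:00:01", type=0x1234) / Raw(
        load=b"spoofed-protocol-control-payload"
    )

    # Benign-looking outer frame; the inner frame rides along as ordinary payload bytes.
    outer = Ether(dst="02:00:00:00:00:02", src="02:00:00:00:00:01") / Raw(load=bytes(inner))

    # On the wire this is just a normal best-effort frame. Only if its leading bytes are
    # cut off (which PCspooF induces with EMI during a switch reset) do the remaining
    # bytes parse as the inner frame.
    print(len(bytes(outer)), "bytes total; inner frame begins at offset",
          len(bytes(outer)) - len(bytes(inner)))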

The second vulnerability PCspooF exploits is in the TTE synchronization protocol, which was standardized in SAE AS6802. In that protocol, there is an important step where a select number of switches, called compression masters, send a special synchronization message called a protocol control frame to the TTE end nodes, which they use to correct their clocks. If the contents of this message are malicious, it will cause the TTE devices to lose sync. This is exactly what PCspooF does—it uses the above electromagnetic interference mechanism to spoof this specific protocol control frame (PCF) in order to cause devices to lose sync. Hence the name PCspooF.

How did you discover PCspooF? Was it through practical experimentation, theoretical work and verification, or some other method?

Loveless: From studying the SAE AS6802 standard, we determined that the right protocol control frame from a switch could temporarily disrupt sync. From there, the main challenge was to determine how to get that frame into the network—which we determined through practical experimentation and by building on top of related work on packet-in-packet attacks.

One example you give of how PCspooF can lead to a catastrophic outcome is through simulations you ran modeled after NASA’s (now-shelved) Asteroid Redirect Mission, which would have aimed to move an asteroid into lunar orbit so that astronauts could pay it a visit. How did PCspooF impact the simulations?

Baris Kasikci

Baris Kasikci: We tested our attack at NASA Johnson Space Center on a test-bed with several real TTE switches and end systems. That gave us good information about how difficult it is to actually perform the attack and what the impact is on the network itself. However, we also wanted to determine what the impact would be in a real system. For example, if someone executed this attack during a real spaceflight mission, what would the damage be?

For this purpose, we set up a NASA simulation in the test-bed, where a representative capsule was flying into space to dock with another spacecraft. We ran the mission multiple times successfully to get a feel for the expected behavior. Then we hooked up a small malicious device to one of the TTE switches to execute the attack while the mission was in progress. What happened is that as the mission progressed, the PCspooF attack repeatedly caused multiple message drops and late message arrivals, which caused the spacecraft to navigate off course. So rather than going up and docking with the other spacecraft, the capsule actually came in at the wrong angle and flew right past it. So we were able to show that the attack has a significant effect on a “real” system (as real as we could get) running real TTE hardware.

The ANVIL at the NASA Johnson Space Center. NASA


In your paper, you list several possible strategies to thwart PCspooF attacks. Broadly speaking, how would those strategies help?

Kasikci: We identified several different mitigations that are effective against our attack. In general, they fit into two basic categories. The first category is to block a device from conducting electromagnetic interference into your TTE switch. So one way someone could do that would be using fiber Ethernet cables instead of copper, since fiber cannot conduct an electrical signal. Another option would be to use some sort of optical isolator on the cable between your untrusted device and the switch.

Another option is to make it so that, even if the attacker does inject electromagnetic interference into the switch and causes this malicious protocol control frame to go out, the system won’t be affected by it. You can do this by altering the topology of your network, so that the spoofed PCFs never follow the same path as legitimate PCFs. That way, devices can always tell that spoofed PCFs are not legitimate and drop them. Both of these are things designers can do today to protect themselves from this attack.



Match ID: 100 Score: 2.14 source: spectrum.ieee.org age: 12 days
qualifiers: 2.14 energy

The EV Transition Explained
Sun, 13 Nov 2022 14:17:59 +0000


From the outside, there is little to tell a basic Ford XL ICE F-150 from the electric Ford PRO F-150 Lightning. Exterior changes could pass for a typical model-year refresh. While there are LED headlight and rear-light improvements along with a more streamlined profile, the Lightning’s cargo box is identical to that of an ICE F-150, complete with tailgate access steps and a jobsite ruler. The Lightning’s interior also has a familiar feel.

But when you pop the Lightning’s hood, you find that the internal combustion engine has gone missing. In its place is a front trunk (“frunk”), while concealed beneath is the new skateboard frame with its dual electric motors (one for each axle) and a big 98-kilowatt-hour standard (and 131-kWh extended-range) battery pack. The combination permits the Lightning to travel 230 miles (370 kilometers) without recharging and go from 0 to 60 miles per hour in 4.5 seconds, making it the fastest F-150 available despite its much heavier weight.
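Those headline figures imply a rough energy efficiency for the truck. A quick calculation, assuming (as the text implies) that the 230-mile range corresponds to the 98-kWh standard pack:

    # Rough efficiency implied by the quoted figures for the standard-range Lightning.
    battery_kwh = 98
    range_miles = 230
    range_km = 370

    print(f"{range_miles / battery_kwh:.2f} miles per kWh")      # ~2.35 mi/kWh
    print(f"{battery_kwh / range_km * 100:.1f} kWh per 100 km")  # ~26.5 kWh/100 km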

Invisible, too, are the Lightning’s sophisticated computing and software systems. The 2016 ICE F-150 reportedly had about 150 million lines of code. The Lightning’s software suite may even be larger than its ICE counterpart (Ford will not confirm this). The Lightning replaces the Ford F-150 ICE-related software in the electronic control units (ECUs) with new “intelligent” software and systems that control the main motors, manage the battery system, and provide charging information to the driver.

The EV Transition Explained


This is the first in a series of articles presenting just some of the technological and social challenges in moving from vehicles with internal-combustion engines to electric vehicles. These challenges must be addressed before the transition to EVs can happen at scale. Each challenge entails a multitude of interacting systems, subsystems, sub-subsystems, and so on. In reviewing each article, readers should bear in mind Nobel Prize–winning physicist Richard Feynman’s admonition: “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.”

Ford says the Lightning’s software will identify nearby public charging stations and tell drivers when to recharge. To increase the accuracy of the range calculation, the software will draw upon similar operational data communicated from other Lightning owners that Ford will dynamically capture, analyze, and feed back to the truck.

For executives, however, Lightning’s software is not only a big consumer draw but also among the biggest threats to its success. Ford CEO Jim Farley told the New York Times that software bugs worry him most. To mitigate the risk, Ford has incorporated an over-the-air (OTA) software-update capability for both bug fixes and feature upgrades. Yet with an incorrect setting in the Lightning’s tire pressure monitoring system requiring a software fix only a few weeks after its initial delivery, and with some new Ford Mustang Mach-Es recalled because of misconfigured software caused by a “service update or as an over-the-air update,” Farley’s worries probably won’t be soothed for some time.

Ford calls the Lightning a “Model T moment for the 21st century” and the company's US $50 billion investment in EVs is a bet-the-company proposition. Short-term success looks likely, as Ford closed Lightning preorders after reaching 200,000 and with sales expectations of 150,000 a year by 2024.

The F-150 Lightning’s front trunk (also known as a frunk) helps this light-duty electric pickup haul even more. Ford

However, long-term success is not guaranteed. “Ford is walking a tightrope, trying at the same time to convince everyone that EVs are the same as ICE vehicles yet different,” says University of Michigan professor emeritus John Leslie King, who has long studied the auto industry. Ford and other automakers will need to convince tens of millions of customers to switch to EVs to meet the Biden Administration’s decarbonization goals of 50 percent new auto sales being non-ICE vehicles by 2030.

King points out that neither Ford nor other automakers can forever act like EVs are merely interchangeable with—but more ecofriendly than—their ICE counterparts. As EVs proliferate at scale, they operate in a vastly different technological, political, and social ecosystem than ICE vehicles. The core technologies and requisite expertise, supply-chain dependencies, and political alliances are different. The expectations of and about EV owners, and their agreement to change their lifestyles, also differ significantly.

Indeed, the challenges posed by the transition from ICE vehicles to EVs at scale are significantly larger in scope and more complex than the policymakers setting the regulatory timeline appreciate. The systems-engineering task alone is enormous, with countless interdependencies that are outside policymakers' control, and it rests on optimistic assumptions about promising technologies and wished-for changes in human behavior. The risk of getting it wrong is high, as are the negative environmental and economic consequences that would follow. In this series, we will break down the myriad infrastructure, policy, and social challenges involved, drawing on discussions with numerous industry insiders and watchers. Let's take a look at some of the elemental challenges blocking the road ahead for EVs.

The soft car

For Ford and the other automakers that have shaped the ICE vehicle ecosystem for more than a century, ultimate success is beyond the reach of the traditional political, financial, and technological levers they once controlled. Renault chief executive Luca de Meo, for example, is quoted in the Financial Times as saying that automakers must recognize that “the game has changed,” and they will “have to play by new rules” dictated by the likes of mining and energy companies.

One reason for the new rules, observes professor Deepak Divan, the director of the Center for Distributed Energy at Georgia Tech, is that the EV transition is “a subset of the energy transition” away from fossil fuels. On the other hand, futurist Peter Schwartz contends that the entire electric system is part of the EV supply chain. These alternative framings highlight the strong codependencies involved. Consequently, automakers will be competing against not only other EV manufacturers but also numerous players involved in the energy transition aiming to grab the same scarce resources and talent.

“Ford is walking a tightrope, trying at the same time to convince everyone that EVs are the same as ICE vehicles yet different.” —John Leslie King

EVs represent a new class of cyberphysical systems that unify the physical with information technology, allowing them to sense, process, act, and communicate in real time within a large transportation ecosystem, as I have noted in detail elsewhere. While computing in ICE vehicles typically optimizes a car’s performance at the time of sale, EV-based cyberphysical systems are designed to evolve as they are updated and upgraded, postponing their obsolescence.

“As an automotive company, we’ve been trained to put vehicles out when they’re perfect,” Ford’s Farley told the New York Times. “But with software, you can change it with over-the-air updates.” This allows new features to be introduced in existing models instead of waiting for next year’s model to appear. Farley sees Ford spending much less effort on changing vehicles’ physical properties and devoting more to upgrading their software capabilities in the future.

Systems engineering for holistic solutions

EV success at scale depends as much, if not more, on political decisions as on technical ones. Government decision-makers in the United States at both the state and federal level, for instance, have created EV market incentives and set increasingly aggressive dates to sunset ICE vehicle sales, regardless of whether the technological infrastructure needed to support EVs at scale actually exists. While public policy can set a direction, it does not guarantee that engineering results will be available when needed.

“A systems-engineering approach towards managing the varied and often conflicting interests of the many stakeholders involved will be necessary to find a workable solution.” —Chris Paredis

Having committed $1.2 trillion through 2030 so far toward decarbonizing the planet, automakers are understandably wary not only of the fast reconfiguration of the auto industry but of the concurrent changes required in the energy, telecom, mining, recycling, and transportation industries that must succeed for their investments to pay off.

The EV transition is part of an unprecedented, planetary-wide, cyberphysical systems-engineering project with massive potential benefits as well as costs. Considering the sheer magnitude, interconnectedness, and uncertainties presented by the concurrent technological, political, and social changes necessary, the EV transition will undoubtedly be messy.

This chart from the Global EV Outlook 2021, IEA, Paris shows 2020 EV sales in the first column; in the second column, projected sales under current climate-mitigation policies; in the third column, projected sales under accelerated climate-mitigation policies.

“There is a lot that has to go right. And it won’t all go right,” observes Kristin Dziczek, former vice president of research at the Center for Automotive Research and now a policy analyst with the Federal Reserve Bank of Chicago. “We will likely stumble forward in some fashion,” but, she stresses, “it’s not a reason not to move forward.”

How many stumbles and how long the transition will take depend on whether the multitude of challenges involved are fully recognized and realistically addressed.

“Everyone needs to stop thinking in silos. It is the adjacency interactions that are going to kill you.” —Deepak Divan

“A systems-engineering approach towards managing the varied and often conflicting interests of the many stakeholders involved will be necessary to find a workable solution,” says Chris Paredis, the BMW Endowed Chair in Automotive Systems Integration at Clemson University. The range of engineering-infrastructure improvements needed to support EVs, for instance, “will need to be coordinated at a national/international level beyond what can be achieved by individual companies,” he states.

If the nitty gritty but hard-to-solve issues are glossed over or ignored, or if EV expectations are hyped beyond the market’s capability to deliver, no one should be surprised by a backlash against EVs, making the transition more difficult.

Even a decade ago, legacy automakers did not believe that EVs—especially battery EVs (BEVs)—were a viable, scalable approach to transport decarbonization, until Tesla proved otherwise. Tesla’s success at producing more than 3 million vehicles to date has shown that EVs are both technologically and economically feasible, at least for the luxury EV niche.

What has not yet been proven, but is widely assumed, is that BEVs can rapidly replace the majority of the current 1.3 billion-plus light-duty ICE vehicles. The interrelated challenges involving EV engineering infrastructure, policy, and societal acceptance, however, will test how well this assumption holds true.

Therefore, the successful transition to EVs at scale demands a “holistic approach,” emphasizes Georgia Tech’s Deepak Divan. “Everyone needs to stop thinking in silos. It is the adjacency interactions that are going to kill you.”

These adjacency issues involve numerous social-infrastructure obstacles that need to be addressed comprehensively along with the engineering problems, including the interactions and contradictions among them. They include the value and impacts of government EV incentives, the EV transition’s effects on employment, and the public’s willingness to change its lifestyle once it realizes that converting to EVs will not be enough to reach future decarbonization goals.

“We cannot foresee all the details needed to make the EV transition successful,” John Leslie King says. “While there’s a reason to believe we will get there, there’s less reason to believe we know the way. It is going to be hard.”

In the next article in the series, we will look at the complexities introduced by trading our dependence on oil for our dependence on batteries.


Match ID: 101 Score: 2.14 source: spectrum.ieee.org age: 14 days
qualifiers: 2.14 energy

COP27: What Can Business Leaders Do to Fight Climate Change Now?
2022-11-09T00:00:00Z
The US government plans to spend $370 billion to cut greenhouse gases and expand renewable energy—its biggest investment yet. In the wake of COP27, we asked Harvard Business School faculty members how executives could seize this moment.
Match ID: 102 Score: 2.14 source: hbswk.hbs.edu age: 19 days
qualifiers: 2.14 energy

What is COP27 and why is it important?
Mon, 07 Nov 2022 13:25:26 GMT
This year's climate summit comes amid spiralling energy costs and diplomatic tensions.
Match ID: 103 Score: 2.14 source: www.bbc.co.uk age: 20 days
qualifiers: 2.14 energy

How gas is being rebranded as green – video
Thu, 03 Nov 2022 09:05:21 GMT

Is natural gas renewable? Is it a fossil fuel? A casual Google search for natural gas gives the impression that these questions are somehow up for debate. And while natural gas has helped reduce carbon emissions as it was widely adopted as a replacement for coal, it is now up against zero-emission energy such as wind and solar. So how did natural gas end up in the same bracket as renewables? Josh Toussaint-Strauss explores the lengths fossil fuel companies have gone to in order to convince consumers, voters and lawmakers that natural gas is somehow a clean energy source.

Continue reading...
Match ID: 104 Score: 2.14 source: www.theguardian.com age: 24 days
qualifiers: 2.14 energy

Home insulation: How can it cut energy bills?
Fri, 07 Oct 2022 11:40:59 GMT
Better home insulation could save some households hundreds of pounds a year on their energy bills.
Match ID: 105 Score: 2.14 source: www.bbc.co.uk age: 51 days
qualifiers: 2.14 energy

Dow, S&P 500 and Nasdaq snap 3-session skid as stocks eke out gains
Wed, 24 Aug 2022 16:01:33 -0500
U.S. stocks finished modestly higher Wednesday, with all three major stock benchmarks ending a 3-session skid, as investors picked up shares after a sharp market selloff earlier this week. The Dow Jones Industrial Average rose about 61 points, or 0.2%, ending near 32,970, while the S&P 500 index closed up 0.3% and the Nasdaq Composite Index advanced 0.4%. Stocks booked modest gains as investors remained focused on the Federal Reserve's inflation fight and Fed Chairman Jerome Powell's speech at the Jackson Hole, Wyo., symposium on Friday. Recession worries also were in focus, with pending home sales falling in July, a sign that rate hikes have been helping cool demand, even though shelter costs, specifically rents, have contributed to high U.S. inflation. Benchmark lending rates climbed Wednesday, with the 10-year Treasury rate rising to 3.105%, the highest since June 28, according to Dow Jones Market Data based on 3 p.m. Eastern levels. The S&P 500's energy sector helped lead the way higher, up 1.2%, while financials rose 0.5%, according to FactSet.
Match ID: 106 Score: 2.14 source: www.marketwatch.com age: 95 days
qualifiers: 2.14 energy

Solar-to-Jet-Fuel System Readies for Takeoff
Wed, 03 Aug 2022 17:00:00 +0000


As climate change edges from crisis to emergency, the aviation sector looks set to miss its 2050 goal of net-zero emissions. In the five years preceding the pandemic, the top four U.S. airlines—American, Delta, Southwest, and United—saw a 15 percent increase in the use of jet fuel. Despite continual improvements in engine efficiencies, that number is projected to keep rising.

A glimmer of hope, however, comes from solar fuels. For the first time, scientists and engineers at the Swiss Federal Institute of Technology (ETH) in Zurich have reported a successful demonstration of an integrated fuel-production plant for solar kerosene. Using concentrated solar energy, they were able to produce kerosene from water vapor and carbon dioxide directly from air. Fuel thus produced is a drop-in alternative to fossil-derived fuels and can be used with existing storage and distribution infrastructures, and engines.

Fuels derived from synthesis gas (or syngas)—an intermediate product that is a specific mixture of carbon monoxide and hydrogen—are a known alternative to conventional, fossil-derived fuels. Syngas is converted into liquid hydrocarbons via Fischer-Tropsch (FT) synthesis, in which chemical reactions turn the carbon monoxide and hydrogen into longer hydrocarbon chains. The team of researchers at ETH found that a solar-driven thermochemical method to split water and carbon dioxide using a metal oxide redox cycle can produce renewable syngas. They demonstrated the process in a rooftop solar refinery at the ETH Machine Laboratory in 2019.

Reticulated porous structure made of ceria used in the solar reactor to thermochemically split CO2 and H2O and produce syngas, a specific mixture of H2 and CO. ETH Zurich

The current pilot-scale solar tower plant was set up at the IMDEA Energy Institute in Spain. It scales up the solar reactor of the 2019 experiment by a factor of 10, says Aldo Steinfeld, an engineering professor at ETH who led the study. The fuel plant brings together three subsystems—the solar tower concentrating facility, solar reactor, and gas-to-liquid unit.

First, a heliostat field made of mirrors that rotate to follow the sun concentrates solar irradiation into a reactor mounted on top of the tower. The reactor is a cavity receiver lined with reticulated porous ceramic structures made of ceria (or cerium(IV) oxide). Within the reactor, the concentrated sunlight creates a high-temperature environment of about 1,500 °C, which is hot enough to split the carbon dioxide and water captured from the atmosphere and produce syngas. Finally, the syngas is processed into kerosene in the gas-to-liquid unit. A centralized control room operates the whole system.

Fuel produced using this method closes the fuel carbon cycle as it only produces as much carbon dioxide as has gone into its manufacture. “The present pilot fuel plant is still a demonstration facility for research purposes,” says Steinfeld, “but it is a fully integrated plant and uses a solar-tower configuration at a scale that is relevant for industrial implementation.”

“The solar reactor produced syngas with selectivity, purity, and quality suitable for FT synthesis,” the authors noted in their paper. They also reported good material stability for multiple consecutive cycles. They observed a value of 4.1 percent solar-to-syngas energy efficiency, which Steinfeld says is a record value for thermochemical fuel production, even though better efficiencies are required to make the technology economically competitive.

Schematic of the solar tower fuel plant. A heliostat field concentrates solar radiation onto a solar reactor mounted on top of the solar tower. The solar reactor co-splits water and carbon dioxide and produces a mixture of molecular hydrogen and carbon monoxide, which in turn is processed into drop-in fuels such as kerosene. ETH Zurich

“The measured value of energy conversion efficiency was obtained without any implementation of heat recovery,” he says. The heat rejected during the redox cycle of the reactor accounted for more than 50 percent of the solar-energy input. “This fraction can be partially recovered via thermocline heat storage. Thermodynamic analyses indicate that sensible heat recovery could potentially boost the energy efficiency to values exceeding 20 percent.”
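A rough bookkeeping of the solar input, using only the figures quoted here, shows why heat recovery matters so much (this is a sketch, not the team’s actual thermodynamic analysis):

    # Rough bookkeeping of the concentrated solar input, per the figures quoted above.
    to_syngas_pct = 4.1        # measured solar-to-syngas efficiency, without heat recovery
    rejected_heat_pct = 50.0   # "more than 50 percent" of the solar input, per Steinfeld
    other_losses_pct = 100.0 - to_syngas_pct - rejected_heat_pct

    print(f"chemical energy in syngas:         {to_syngas_pct:.1f}%")
    print(f"heat rejected during redox cycle: >{rejected_heat_pct:.0f}%")
    print(f"remaining losses:                 <{other_losses_pct:.0f}%")

    # The cited thermodynamic analyses put the ceiling with sensible-heat recovery
    # above 20 percent -- roughly a fivefold gain over the measured value.
    print(f"implied improvement factor: ~{20.0 / to_syngas_pct:.1f}x")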

To do so, more work is needed to optimize the ceramic structures lining the reactor, something the ETH team is actively working on, by looking at 3D-printed structures for improved volumetric radiative absorption. “In addition, alternative material compositions, that is, perovskites or aluminates, may yield improved redox capacity, and consequently higher specific fuel output per mass of redox material,” Steinfeld adds.

The next challenge for the researchers, he says, is the scale-up of their technology for higher solar-radiative power inputs, possibly using an array of solar cavity-receiver modules on top of the solar tower.

To bring solar kerosene into the market, Steinfeld envisages a quota-based system. “Airlines and airports would be required to have a minimum share of sustainable aviation fuels in the total volume of jet fuel that they put in their aircraft,” he says. This is possible because solar kerosene can be mixed with fossil-based kerosene. The share would start out small, as little as 1 or 2 percent, which would raise total fuel costs at first, though minimally—adding “only a few euros to the cost of a typical flight,” as Steinfeld puts it.

Meanwhile, rising quotas would lead to investment, and to falling costs, eventually replacing fossil-derived kerosene with solar kerosene. “By the time solar jet fuel reaches 10 to 15 percent of the total jet-fuel volume, we ought to see the costs for solar kerosene nearing those of fossil-derived kerosene,” he adds.

However, we may not have to wait too long for flights to operate solely on solar fuel. A commercial spin-off of Steinfeld’s laboratory, Synhelion, is working on commissioning the first industrial-scale solar fuel plant in 2023. The company has also collaborated with the airline SWISS to conduct a flight solely using its solar kerosene.


Match ID: 107 Score: 2.14 source: spectrum.ieee.org age: 116 days
qualifiers: 2.14 energy

Inside the Universe Machine: The Webb Space Telescope’s Chilly Sun Shield
Thu, 07 Jul 2022 13:48:57 +0000


For a deep dive into the engineering behind the James Webb Space Telescope, see our collection of posts here.

When Apollo astronauts sent back the first pictures of Earth as a disk in space, the poet Archibald MacLeish wrote of it as “that bright loveliness in the eternal cold.” He was not far off. Deep space has a temperature of 2.7 kelvins—just 2.7 degrees above absolute zero.

If the James Webb Space Telescope is to work—looking so far out and therefore so far back in time that it can see the first galaxies forming after the big bang—it will have to image objects so faint that they barely stand out from the cold around them. The world will begin finding out how well the observatory works as soon as next week, when JWST is expected to release its first trove of scientific images and spectroscopic data.

So, for argument’s sake, let’s assume all indications so far do in fact point to a successful kickoff of the (hopefully long and storied) scientific data-gathering phase of Webb’s mission. How, then, did the engineers and designers of this massive telescope manage to cool it down enough—at a remove of nearly four times the distance from Earth to the moon—to do its job?

After more than 25 years’ work and technological hurdles beyond counting, the Webb team has launched and stationed its mammoth observatory in solar orbit—and brought its instruments below 40 kelvins (-233 °C), cold enough to see the early universe more than 13.5 billion years ago. Remarkably, most of the cooling has been done passively, by shielding the telescope from the sun and letting physics take care of the rest.

“Webb is not just the product of a group of people. It’s not the product of some smart astronomers—Webb is truly the product of our entire world’s capability,” says Keith Parrish, a leader on the Webb team at NASA’s Goddard Space Flight Center in Maryland. “Taken as a whole, Webb is truly the result of our entire know-how of how to build complex machines.”

Parrish joined the project in 1997, ultimately becoming its commissioning manager through the years of design, assembly, testing, delay and, finally, launch on 25 December 2021. He says almost everything about it—its shape and location, the materials from which it’s made—was dictated by the need to have an observatory that would survive for years at supercold temperatures.

In this photo, the five-layered JWST sunshield is being unfurled and inspected in a clean room. The layers of coated Kapton E never touch, minimizing the transmission of heat from one layer to the next. Alex Evers/Northrop Grumman

The Webb is an infrared observatory for many reasons, not the least of which is that as the universe expands, the wavelength of light from distant objects is lengthened, causing dramatic redshift. Infrared is also good for seeing through cosmic dust and gas, and for imaging cold things such as comets, Kuiper Belt objects, and perhaps planets orbiting other stars.

But infrared radiation is often best measured as heat, which is why it’s important for the Webb to be so cold. If, like the Hubble Telescope, it were in low Earth orbit, and it had no shielding from the sun, most of its targets would be drowned out by the sun and ground, and by heat in the telescope itself.

“If my signal is heat—and infrared is heat—then what I can’t have is other heat sources that are noise in the system,” says Jim Flynn, the sunshield manager at Northrop Grumman, the prime contractor for the Webb.

So the Webb has been sent to circle a spot in space called L2, 1.5 million kilometers away, opposite the sun, one of the locations known as Lagrange points. These "L" points are where the gravity of Earth and the sun exactly conspire to keep it in a stable and relatively "fixed" orbit with respect to the Earth as it makes its way around its 365.256-day course circling the sun. It’s a good compromise: Earth is distant enough that it doesn’t interfere with observations, but close enough that communication with the spacecraft can be relatively fast. And since the ship isn’t flying from day to night and back on every orbit, its temperature is relatively stable. All it needs is a really, really good sunshade.

“Four [layers of sunshield] would have probably done the job. Five gave us a little bit of an insurance policy. I’d like to say it was way more sophisticated than that, but that’s really not what it was at all.”
—Keith Parrish, NASA Goddard Space Flight Center

“The engineering was pushed above and beyond to meet the scientific goals,” says Alexandra Lockwood, a project scientist at the Space Telescope Science Institute, which operates the Webb. “It is specifically designed the way that it is because they wanted to do intensive infrared science.”

It makes for an ungainly-looking ship in many renderings, with the telescope assembly, intentionally open to space to prevent heat buildup, attached to its silvery sunshield, about 14 meters wide and 21 meters long, with five layers of insulating film to keep the telescope in almost total darkness.

From its sunlit side the sunshield roughly resembles a kite. The elongated shape, engineers found, would be the most efficient way to keep the Webb’s optics out of the sun. They considered a square or octagon, but the final version covers more area without much more mass.

“It’s no larger than it needs to be to meet the science field-of-view requirements, and that unique kite shape is the result,” says Parrish. “Any larger than it is now, it just makes everything more complex.”

The shield’s five layers are made of Kapton E, a plastic film first developed by DuPont in the 1960s and used for spacecraft insulation and printed circuits. The layers are coated in aluminum and silicon. Each is thinner than a human hair. But engineers say they are, together, very effective in blocking the sun’s heat. The first layer reduces its strength by about an order of magnitude (or 90 percent), the second layer removes another order of magnitude, and so on. The layers never touch, and they’re slightly flared as one gets away from the center of the shield, so that heat will escape out the sides.

The result: Temperatures on the sunny side of the shield approach 360 K (87 °C), but on the dark side they’re below that all-important 40 K (-233 °C). Or, put another way: More than 200 kilowatts of solar energy fall on the first layer, but only 23 milliwatts make it all the way through the fifth.
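Those input and output figures let you back out how much attenuation the five layers provide in total, and on average per layer. A quick check using only the numbers quoted above:

    import math

    # Attenuation implied by the quoted figures for the sunshield.
    incident_w = 200e3       # more than 200 kW of solar energy on the sun-facing layer
    transmitted_w = 23e-3    # 23 mW makes it through the fifth layer
    layers = 5

    total = incident_w / transmitted_w
    per_layer = total ** (1 / layers)

    print(f"total attenuation: ~{total:.1e} ({math.log10(total):.1f} orders of magnitude)")
    print(f"average per layer: ~{per_layer:.0f}x")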

The Two Sides of the Webb Telescope


illustration depicting features of JWST's sunshield

Why five layers? There was a lot of computer modeling, but it was hard to simulate the shield’s thermal behavior before flight. “Four would have probably done the job. Five gave us a little bit of an insurance policy,” says Parrish. “I’d like to say it was way more sophisticated than that, but that’s really not what it was at all.”

The ability to cool the telescope naturally, first calculated in the 1980s to be possible, was a major advance. It meant the Webb would not have to rely on a heavy, complex cryogenic apparatus, with refrigerants that could leak and shorten the mission. Of its four main scientific instruments, only one, a midinfrared detector called MIRI, needs to be cooled to 6.7 K. It’s chilled by a multistage cryocooler, which pumps cold helium gas through pulse tubes to draw heat away from the instrument’s sensor. It uses the Joule-Thomson effect, reducing the temperature of the helium by making it expand after it’s forced through a 1-millimeter valve. Pressure comes from two pistons—the cryocooler system’s only moving parts—facing opposite directions so their movements will cancel each other out and not disturb observations.

Building the telescope proved immensely complicated; it fell years behind while its budget ballooned toward US $10 billion. The sunshield needed lengthy redesign after testing, when Kapton tore and fasteners came loose.

“We just bit off way more than we could chew,” Parrish says now. “That’s exactly what NASA should be doing. It should be pushing the envelope. The problem is that eventually Webb got too big to fail.”

But it’s finally deployed, sending data, and surprising engineers who expected at least some failures as it began to operate. Keith Parrish, his work done, is moving on to other projects at Goddard.

“I think Webb,” he says, “is just a great product of what it means to be an advanced civilization.”


Update: 26 July 2022: The story was updated to clarify that the gravity at Lagrange point L2 does not "cancel" (as the story had previously stated) but in fact adds to keep an object at L2 orbiting at the precise same orbital period as, in this case, the Earth—i.e. at 365.256 days.


Match ID: 108 Score: 2.14 source: spectrum.ieee.org age: 143 days
qualifiers: 2.14 energy

Inside the Universe Machine: The Webb Space Telescope’s Staggering Vision
Wed, 06 Jul 2022 13:00:00 +0000


For a deep dive into the engineering behind the James Webb Space Telescope, see our collection of posts here.

“Build something that will absolutely, positively work.” This was the mandate from NASA for designing and building the James Webb Space Telescope—at 6.5 meters wide the largest space telescope in history. Last December, JWST launched famously and successfully to its observing station out beyond the moon. And now according to NASA, as soon as next week, the JWST will at long last begin releasing scientific images and data.

Mark Kahan, on JWST’s product integrity team, recalls NASA’s engineering challenge as a call to arms for a worldwide team of thousands that set out to create one of the most ambitious scientific instruments in human history. Kahan—chief electro-optical systems engineer at Mountain View, Calif.–based Synopsys—and many others in JWST’s “pit crew” (as he calls the team) drew hard lessons from three decades ago, having helped repair another world-class space telescope with a debilitating case of flawed optics. Of course, the Hubble Space Telescope is in low Earth orbit, so a special space-shuttle mission to install corrective optics (as happened in 1993) was entirely possible.

Not so with the JWST.

The meticulous care NASA demanded of JWST’s designers is all the more a necessity because Webb is well out of reach of repair crews. Its mission is to study the infrared universe, and that requires shielding the telescope and its sensors from both the heat of sunlight and the infrared glow of Earth. A good place to do that without getting too far from Earth is an empty patch of interplanetary space 1.5 million kilometers away (well beyond the moon’s orbit) near a spot physicists call the second Lagrange point, or L2.

The pit crew’s job was “down at the detail level, error checking every critical aspect of the optical design,” says Kahan. Having learned the hard way from Hubble, the crew insisted that every measurement on Webb’s optics be made in at least two different ways that could be checked and cross-checked. Diagnostics were built into the process, Kahan says, so that “you could look at them to see what to kick” to resolve any discrepancies. Their work had to be done on the ground, but their tests had to assess how the telescope would work in deep space at cryogenic temperatures.

Three New Technologies for the Main Mirror

Superficially, Webb follows the design of all large reflecting telescopes. A big mirror collects light from stars, galaxies, nebulae, planets, comets, and other astronomical objects, and focuses those photons onto a smaller secondary mirror, which sends them to a third mirror that ultimately directs the light to instruments that record images and spectra.

Webb’s 6.5-meter primary mirror is the first segmented mirror to be launched into space. All the optics had to be made on the ground at room temperature but were deployed in space and operated at 30 to 55 degrees above absolute zero. “We had to develop three new technologies” to make it work, says Lee D. Feinberg of the NASA Goddard Space Flight Center, the optical telescope element manager for Webb for the past 20 years.

The longest wavelengths that Hubble has to contend with are 2.5 micrometers, whereas Webb is built to observe infrared light that stretches to 28 μm in wavelength. Compared with Hubble, whose primary mirror is a circle with an area of 4.5 square meters, “[Webb’s primary mirror] had to be 25 square meters,” says Feinberg. Webb also “needed segmented mirrors that were lightweight, and its mass was a huge consideration,” he adds. No single-piece mirror that could provide the required resolution would have fit on the Ariane 5 rocket that launched JWST. That meant the mirror would have to be made in pieces, assembled, folded, secured to withstand the stress of launch, then unfolded and deployed in space to create a surface that was within tens of nanometers of the shape specified by the designers.

The James Webb Space Telescope [left] and the Hubble Space Telescope side by side—with Hubble’s 2.4-meter-diameter mirror versus Webb’s array of hexagonal mirrors making a 6.5-meter-diameter light-collecting area. NASA Goddard Space Flight Center

NASA and the U.S. Air Force, which has its own interests in large lightweight space mirrors for surveillance and focusing laser energy, teamed up to develop the technology. The two agencies narrowed eight submitted proposals down to two approaches for building JWST’s mirrors: one based on low-expansion glass made of a mixture of silicon and titanium dioxides similar to that used in Hubble and the other the light but highly toxic metal beryllium. The most crucial issue came down to how well the materials could withstand temperature changes from room temperature on the ground to around 50 K in space. Beryllium won because it could fully release stress after cooling without changing its shape, and it’s not vulnerable to the cracking that can occur in glass. The final beryllium mirror was a 6.5-meter array of 18 hexagonal beryllium mirrors, each weighing about 20 kilograms. The weight per unit area of JWST’s mirror was only 10 percent of that in Hubble. A 100-nanometer layer of pure gold makes the surface reflect 98 percent of incident light from JWST’s main observing band of 0.6 to 28.5 μm. “Pure silver has slightly higher reflectivity than pure gold, but gold is more robust,” says Feinberg. A thin layer of amorphous silica protects the metal film from surface damage.
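The quoted segment masses and collecting areas let you check that areal-density claim. A rough calculation using only numbers given in this article (18 segments of about 20 kilograms each over roughly 25 square meters):

    # Rough areal-density check from the figures quoted in this article.
    segments = 18
    segment_kg = 20        # "about 20 kilograms" each
    webb_area_m2 = 25      # quoted light-collecting area

    webb_density = segments * segment_kg / webb_area_m2
    print(f"Webb primary mirror: ~{webb_density:.0f} kg per square meter")

    # The article says this is only about 10 percent of Hubble's areal density,
    # which implies Hubble's primary comes in at very roughly 150 kg per square meter.
    print(f"implied Hubble areal density: ~{webb_density / 0.10:.0f} kg per square meter")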

In addition, a wavefront-sensing control system keeps mirror segment surfaces aligned to within tens of nanometers. Built on the ground, the system is expected to keep mirror alignment stabilized throughout the telescope’s operational life. A backplane kept at a temperature of 35 K holds all 2.4 tonnes of the telescope and instruments rock-steady to within 32 nm while maintaining them at cryogenic temperatures during observations.

The JWST backplane, the “spine” that supports the entire hexagonal mirror structure and carries more than 2,400 kg of hardware, is readied for assembly to the rest of the telescope. NASA/Chris Gunn

Hubble’s amazing, long-exposure images of distant galaxies are possible through the use of gyroscopes and reaction wheels. The gyroscopes are used to sense unwanted rotations, and reaction wheels are used to counteract them.

But the gyroscopes used on Hubble have had a bad track record and have had to be replaced repeatedly. Only three of Hubble’s six gyros remain operational today, and NASA has devised plans for operating with one or two gyros at reduced capability. Hubble also includes reaction wheels and magnetic torquers, used to maintain its orientation when needed or to point at different parts of the sky.

Webb uses reaction wheels similarly to turn across the sky, but instead of using mechanical gyros to sense direction, it uses hemispherical resonator gyroscopes, which have no moving parts. Webb also has a small fine-steering mirror in the optical path, which can tilt over an angle of just 5 arc seconds. Those very fine adjustments of the light path into the instruments keep the telescope on target. “It’s a really wonderful way to go,” says Feinberg, adding that it compensates for small amounts of jitter without having to move the whole 6-tonne observatory.

Instruments

Other optics distribute light from the fine-steering mirror among four instruments, two of which can observe simultaneously. Three instruments have sensors that observe wavelengths of 0.6 to 5 μm, which astronomers call the near-infrared. The fourth, called the Mid-InfraRed Instrument (MIRI), observes what astronomers call the mid-infrared spectrum, from 5 to 28.5 μm. Different instruments are needed because sensors and optics have limited wavelength ranges. (Optical engineers may blanch slightly at astronomers’ definitions of what constitutes the near- and mid-infrared wavelength ranges. These two groups simply have differing conventions for labeling the various regimes of the infrared spectrum.)

Mid-infrared wavelengths are crucial for observing young stars and planetary systems and the earliest galaxies, but they also pose some of the biggest engineering challenges. Namely, everything on Earth and planets out to Jupiter glow in the mid-infrared. So for JWST to observe distant astronomical objects, it must avoid recording extraneous mid-infrared noise from all the various sources inside the solar system. “I have spent my whole career building instruments for wavelengths of 5 μm and longer,” says MIRI instrument scientist Alistair Glasse of the Royal Observatory, in Edinburgh. “We’re always struggling against thermal background.”

Mountaintop telescopes can see the near-infrared, but observing the mid-infrared sky requires telescopes in space. However, the thermal radiation from Earth and its atmosphere can cloud their view, and so can the telescopes themselves unless they are cooled far below room temperature. An ample supply of liquid helium and an orbit far from Earth allowed the Spitzer Space Telescope’s primary observing mission to last for five years, but once the last of the cryogenic fluid evaporated in 2009, its observations were limited to wavelengths shorter than 5 μm.

Webb has an elaborate solar shield to block sunlight, and an orbit 1.5 million km from Earth that can keep the telescope to below 55 K, but that’s not good enough for low-noise observations at wavelengths longer than 5 μm. The near-infrared instruments operate at 40 K to minimize thermal noise. But for observations out to 28.5 μm, MIRI uses a specially developed closed-cycle, helium cryocooler to keep MIRI cooled below 7 K. “We want to have sensitivity limited by the shot noise of astronomical sources,” says Glasse. (Shot noise occurs when optical or electrical signals are so feeble that each photon or electron constitutes a detectable peak.) That will make MIRI 1,000 times as sensitive in the mid-infrared as Spitzer.
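“Shot-noise-limited” means the measurement uncertainty is set by photon-counting statistics alone, so the signal-to-noise ratio grows only as the square root of the number of photons detected. A minimal sketch of that scaling (the photon counts are arbitrary illustration values, not MIRI specifications):

    import math

    # Shot-noise-limited signal-to-noise: for N detected photons, the noise is ~sqrt(N),
    # so the SNR is ~sqrt(N). The counts below are arbitrary illustration values.
    for photons in (100, 10_000, 1_000_000):
        snr = photons / math.sqrt(photons)
        print(f"{photons:>9} photons -> SNR ~ {snr:,.0f}")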

Another challenge is the limited transparency of optical materials in the mid-infrared. “We use reflective optics wherever possible,” says Glasse, but those pose problems of their own, he adds. “Thermal contraction is a big deal,” he says, because the instrument was built at room temperature but operates at 7 K. To keep thermal contraction uniform throughout MIRI, the team made the whole structure of gold-coated aluminum, so that dissimilar metals would not warp the instrument as it cooled.

Detectors are another problem. Webb’s near-infrared sensors use mercury cadmium telluride photodetectors with a resolution of 2,048 x 2,048 pixels. This resolution is widely used at wavelengths below 5 μm, but sensing at MIRI’s longer wavelengths required exotic detectors that offer only 1,024 x 1,024 pixels.

Glasse says commissioning “has gone incredibly well.” Although some stray light has been detected, he says, “we are fully expecting to meet all our science goals.”

NIRCam Aligns the Whole Telescope

The near-infrared detectors and optical materials used for observing at wavelengths shorter than 5 μm are much more mature than those for the mid-infrared, so the Near-Infrared Camera (NIRCam) does double duty by both recording images and aligning all the optics in the whole telescope. That alignment was the trickiest part of building the instrument, says NIRCam principal investigator Marcia Rieke of the University of Arizona.

Alignment means ensuring that all the light collected by the primary mirror ends up in the right place in the final image. That’s crucial for Webb, because its 18 separate segments have to overlay their images perfectly, and because all those segments were built on the ground at room temperature but operate at cryogenic temperatures in space, in zero gravity. When NASA recorded a test image of a single star after Webb first opened its primary mirror, it showed 18 separate bright spots, one from each segment. When alignment was completed on 11 March, the image from NIRCam showed a single star with six spikes caused by diffraction.

Even when performing instrumental calibration tasks, JWST couldn’t help but showcase its stunning sensitivity to the infrared sky. The central star, shown with six-pointed diffraction spikes, is what telescope technicians used to align JWST’s mirrors; notice the distant galaxies and stars that photobombed the image, too. Credit: NASA/STScI

Building a separate alignment system would have added to both the weight and cost of Webb, Rieke realized, and in the original 1995 plan for the telescope she proposed designing NIRCam so it could align the telescope optics once it was up in space as well as record images. “The only real compromise was that it required NIRCam to have exquisite image quality,” says Rieke, wryly. From a scientific standpoint, she adds, using the instrument to align the telescope optics “is great because you know you’re going to have good image quality and it’s going to be aligned with you.” Alignment might be just a tiny bit off for other instruments. In the end, it took a team at Lockheed Martin to develop the computational tools to account for all the elements of thermal expansion.

Escalating costs and delays had troubled Webb for years. But for Feinberg, “commissioning has been a magical five months.” It began with the sight of sunlight hitting the mirrors. The segmented mirror deployed smoothly, and after the near-infrared cameras cooled, the segments focused one star into 18 separate spots, which were then aligned to stack on top of one another. “Everything had to work to get it to [focus] that well,” he says. It’s been an intense time, but for Feinberg, a veteran of the Hubble repair mission, commissioning Webb was “a piece of cake.”

NASA announced that between May 23rd and 25th, one segment of the primary mirror had been dinged by a micrometeorite bigger than the agency had expected when it analyzed the potential results of such impacts. “Things do degrade over time,” Feinberg said. But he added that Webb had been engineered to minimize damage, and NASA said the event had not affected Webb’s operation schedule.

Corrections 26-28 July 2022: The story was updated a) to reflect the fact that the Lagrange point L2 where Webb now orbits is not that of the "Earth-moon system" (as the story had originally reported) but rather the Earth-sun system, and b) to correct misstatements in the original posting about Webb's hardware for controlling its orientation.

Corrections 12 Aug. 2022: Alistair Glasse's name was incorrectly spelled in a previous version of this story, as was NIRCam (which we'd spelled as NIRcam); Webb's tertiary mirror (we'd originally reported only its primary and secondary mirrors) was also called out in this version.

This article appears in the September 2022 print issue as “Inside the Universe Machine.”


Match ID: 109 Score: 2.14 source: spectrum.ieee.org age: 144 days
qualifiers: 2.14 energy

Andrew Ng: Unbiggen AI
Wed, 09 Feb 2022 15:31:12 +0000


Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.


The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?

Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.


It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI

I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.

Ng: I think so, yes.

Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”


How do you define data-centric AI, and why do you consider it a movement?

Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.
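
To make that workflow concrete, here is a minimal sketch of a data-centric loop, with a toy dataset and a scikit-learn classifier standing in for a real application; none of it is Landing AI's actual tooling. The model architecture never changes. Each round, out-of-fold predictions flag labels that look suspicious, and the relabeling step, which a person would do in practice, is simulated by restoring the clean label.

```python
# Minimal sketch of a data-centric iteration: the model architecture is
# held fixed while the labels are improved. Toy data and scikit-learn
# stand in for a real application; the "relabeling" step is simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                   # toy features
y_clean = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy ground truth
flip = rng.random(300) < 0.15                   # 15% of labels start out wrong
y = np.where(flip, 1 - y_clean, y_clean)

model = RandomForestClassifier(random_state=0)  # architecture never changes
for round_idx in range(3):
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"round {round_idx}: cross-validated accuracy = {score:.3f}")

    # Data-centric step: flag examples where out-of-fold predictions
    # confidently disagree with the current label, then "relabel" them
    # (here, by restoring the clean label in place of human review).
    proba = cross_val_predict(model, X, y, cv=5, method="predict_proba")
    suspicious = (proba.max(axis=1) > 0.8) & (proba.argmax(axis=1) != y)
    y = np.where(suspicious, y_clean, y)
```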

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
—Andrew Ng

For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
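
A hedged sketch of what such a consistency-flagging tool might look like, assuming each image has been labeled by several annotators; the image IDs, annotator names, and labels below are invented for illustration and do not reflect Landing AI's platform.

```python
# Sketch of flagging label inconsistency across annotators; the image IDs,
# annotators, and labels are invented for illustration.
import pandas as pd

labels = pd.DataFrame({
    "image_id":  ["img_001", "img_001", "img_001", "img_002", "img_002", "img_002"],
    "annotator": ["alice", "bob", "carol", "alice", "bob", "carol"],
    "label":     ["scratch", "scratch", "dent", "clean", "clean", "clean"],
})

def agreement(group):
    # Fraction of annotators who chose the most common label for this image.
    return group.value_counts(normalize=True).iloc[0]

per_image = labels.groupby("image_id")["label"].agg(agreement)
to_review = per_image[per_image < 1.0]
print("Images with inconsistent labels, worth relabeling first:")
print(to_review)
```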

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.


What about using synthetic data, is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.

“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
—Andrew Ng

Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
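
As a rough sketch of the error analysis that would precede that kind of targeted generation, the code below computes per-category recall on invented defect categories and predictions, and points synthetic generation (or extra collection) at the weakest one. Nothing here is Landing AI's actual method; it only illustrates the idea of acting on one category rather than on everything.

```python
# Sketch of the per-category error analysis that points to where synthetic
# data would help most; the defect categories and predictions are made up.
from collections import Counter

def per_class_recall(y_true, y_pred):
    totals, hits = Counter(), Counter()
    for truth, pred in zip(y_true, y_pred):
        totals[truth] += 1
        hits[truth] += int(pred == truth)
    return {cls: hits[cls] / totals[cls] for cls in totals}

y_true = ["scratch"] * 40 + ["dent"] * 40 + ["pit_mark"] * 20
y_pred = (["scratch"] * 38 + ["dent"] * 2 +
          ["dent"] * 39 + ["scratch"] * 1 +
          ["pit_mark"] * 11 + ["scratch"] * 9)

for cls, recall in sorted(per_class_recall(y_true, y_pred).items(),
                          key=lambda kv: kv[1]):
    print(f"{cls:10s} recall = {recall:.2f}")
# The weakest category (pit marks here) is the one to target with synthetic
# generation or extra collection, rather than gathering more of everything.
```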


To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”


Match ID: 110 Score: 2.14 source: spectrum.ieee.org age: 291 days
qualifiers: 2.14 energy

How AI Will Change Chip Design
Tue, 08 Feb 2022 14:00:01 +0000


The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

Portrait of Heather Gorr. Credit: MathWorks

Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
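
A minimal sketch of the surrogate-model idea Gorr describes, with a toy function standing in for an expensive physics-based solver; it does not reflect MathWorks' actual tooling. A cheap regressor is fit to a handful of expensive runs, and the large parameter sweep then runs against the surrogate instead of the solver.

```python
# Sketch of a surrogate (reduced-order) model: fit a cheap regressor to a
# few runs of an expensive solver, then sweep parameters on the surrogate.
# `expensive_simulation` is a toy stand-in, not a real chip-design tool.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(params):
    # Placeholder for a slow physics-based solve over two design parameters.
    x, y = params
    return np.sin(3 * x) * np.exp(-y) + 0.1 * x * y

rng = np.random.default_rng(0)
train_params = rng.uniform(0, 1, size=(30, 2))          # few expensive runs
train_results = np.array([expensive_simulation(p) for p in train_params])

surrogate = GaussianProcessRegressor().fit(train_params, train_results)

sweep = rng.uniform(0, 1, size=(20_000, 2))             # cheap, broad sweep
predicted = surrogate.predict(sweep)
best = sweep[np.argmax(predicted)]
print("best candidate parameters:", best, "predicted response:", predicted.max())
```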

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

We’ve talked about the benefits. How about the drawbacks?

Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it's not going to be as accurate as that precise model that we’ve developed over the years.

Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It's a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

How can engineers use AI to better prepare and extract insights from hardware or sensor data?

Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.

One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
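
As a small illustration of the frequency-domain starting point Gorr mentions, the sketch below resamples one invented sensor stream onto another's time base and then compares their spectra. The sample rates and the shared 60 Hz component are arbitrary choices made for the example, not a recommendation for any particular system.

```python
# Sketch of lining up two sensor streams and checking the frequency domain;
# the sample rates and the 60 Hz component are arbitrary choices.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs_a, fs_b, duration = 1000, 800, 2.0                  # Hz, Hz, seconds
t_a = np.arange(0, duration, 1 / fs_a)
t_b = np.arange(0, duration, 1 / fs_b)
sensor_a = np.sin(2 * np.pi * 60 * t_a) + 0.3 * rng.standard_normal(t_a.size)
sensor_b = np.sin(2 * np.pi * 60 * t_b + 0.5) + 0.3 * rng.standard_normal(t_b.size)

# Resample sensor B onto sensor A's time base so the streams are synchronized.
sensor_b_synced = np.interp(t_a, t_b, sensor_b)

# Look for shared spectral content across the two (now aligned) streams.
freqs, psd_a = signal.welch(sensor_a, fs=fs_a, nperseg=512)
_, psd_b = signal.welch(sensor_b_synced, fs=fs_a, nperseg=512)
print("dominant frequency, sensor A:", freqs[np.argmax(psd_a)], "Hz")
print("dominant frequency, sensor B:", freqs[np.argmax(psd_b)], "Hz")
```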

What should engineers and designers consider when using AI for chip design?

Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

How do you think AI will affect chip designers’ jobs?

Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

How do you envision the future of AI and chip design?

Gorr: It's very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.


Match ID: 111 Score: 2.14 source: spectrum.ieee.org age: 292 days
qualifiers: 2.14 energy

Outside the Box: After underperforming the stock market for years, alternative energy is red hot
Fri, 25 Sep 2020 14:39:17 GMT
Is clean tech finally a viable investment?
Match ID: 112 Score: 2.14 source: www.marketwatch.com age: 793 days
qualifiers: 2.14 energy

Stocks to Watch: Wisconsin Energy, Oracle, GE are stocks to watch
Mon, 23 Jun 2014 12:42:35 GMT
Wisconsin Energy Corp., Oracle Corp., and General Electric Co. may all see active trading after deals, or talk of deals.
Match ID: 113 Score: 2.14 source: www.marketwatch.com age: 3079 days
qualifiers: 2.14 energy

Industrial Functional Safety Training from UL Solutions
Wed, 16 Nov 2022 20:41:08 +0000


This is a sponsored article brought to you by UL Solutions.

Invest in building your team’s excellence with functional safety training and certification services from UL Solutions, a global safety science leader.

Our UL Certified Functional Safety Certification programs provide your team opportunities to learn about — or deepen their existing knowledge and skills in — functional safety to achieve professional credentials in this space.

We offer personnel certification at both the professional and expert levels in automotive, autonomous vehicles, electronics and semiconductors, machinery, industrial automation, and cybersecurity.

You can now register for any of the offerings listed below. All our instructor-led, virtual courses provide a deep dive into key functional safety standards.


IEC 61511

UL Certified Functional Safety Professional in IEC 61511 Class with Exam - Virtual

This three-day course provides a comprehensive overview of the IEC 61511:2016 and ANSI/ISA 61511:2018 standards for the process industry. Participants who complete all three days of training can take a two-hour certification exam on the morning of the fourth day. Those who pass the exam earn individual certification as a UL Certified Functional Safety Professional in IEC 61511 or UL-CFSP.

Purchase training→

IEC 61508

Functional Safety Overview and Designing Safety-Related Electronic Control Systems in Accordance with IEC 61508 Standard Class with Exam - Virtual (English)

This three-day course helps engineers, developers and managers successfully apply IEC 61508 to their safety-related electrical systems. IEC 61508 serves as the base functional safety standard for various industries, including process, nuclear and machinery, among others. This course includes a one-hour follow-up Q&A session (scheduled at a later date) with one of UL Solutions’ functional safety experts.

Purchase training→

UL 4600

UL Certified Autonomy Safety Professional Training in UL 4600 2nd Edition Class with Exam - Virtual (English)

This 2.5-day course highlights modern-day autonomous robotics, industrial automation, sensors and semi-automated technologies and how they can apply to safety. The course focuses on UL 4600, the Standard for Evaluation of Autonomous Products, and includes information on related safety standards.

Purchase training→

Functional Safety Training for Earth-Moving Machinery in Agricultural Tractor and Construction Control Systems Per ISO 25119, ISO 13849 and ISO 19014

UL Certified Functional Safety Professional Training in Agriculture and Construction Machinery Class with Exam - Virtual (English)

This 2.5-day course will cover functional safety standards and concepts related to agricultural and construction earth-moving machinery. Applicable standards covered in this training include the EU Machinery Directive; ISO 19014:2018, Earth-Moving Machinery — Functional Safety — Part 1: Methodology to Determine Safety-Related Parts of the Control System and Performance Requirements; and ISO 25119:2018, Tractors and Machinery for Agriculture and Forestry — Safety-Related Parts of Control Systems. UL Solutions’ experts will cover topics such as hazard identification and risk assessment per ISO 12100:2010, Safety of Machinery — General Principles for Design — Risk Assessment and Risk Reduction. Case studies on a range of topics, including motor drives and safety product life cycles, will also help provide examples of how the requirements and concepts of the standards apply.

Purchase training→

ISO 13849, IEC 62061, IEC 61800-5-2, 25119, and the EU Machinery Directive

UL Certified Functional Safety Professional Training in Machinery Class with Exam - Virtual (English)

This 2.5-day course is for engineers working on programmable machinery and control systems. The training course will cover functional safety standards and concepts related to the EU Machinery Directive, including ISO 13849, Safety of Machinery - Safety-Related Parts of Control Systems; IEC 61800-5-2, Adjustable Speed Electrical Power Drive Systems - Part 5-2: Safety Requirements - Functional; and IEC 62061, Safety of Machinery - Functional Safety of Safety-Related Electrical, Electronic and Programmable Electronic Control Systems.

Purchase training→


Match ID: 114 Score: 1.43 source: spectrum.ieee.org age: 11 days
qualifiers: 1.43 nuclear

NASA’s DART Mission Aims to Save the World
Fri, 23 Sep 2022 15:52:53 +0000


Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon.

Armageddon is big and noisy and stupid and shameless, and it’s going to be huge at the box office,” wrote Jay Carr of the Boston Globe.

Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live.

DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kilograms, hitting at 22,000 km/h, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs.

“Maybe once a century or so, there’ll be an asteroid sizeable enough that we’d like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of planetary defense officer at NASA.

“If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.”

So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone.

The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—near-Earth objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium.

An infographic showing the orientation of Didymos, Dimorphos, DART, and LICIACube. The DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIACube cubesat will fly in formation to take images of the impact. Credit: Johns Hopkins APL/NASA

But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test.

NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’s speed by perhaps a few centimeters per second.
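
The scale of that nudge follows from momentum conservation. In the hedged back-of-the-envelope relation below, the asteroid's mass M and the momentum-enhancement factor from ejecta are not given in the article, so both are left symbolic; the second relation further assumes a roughly circular orbit and a tangential impulse.

```latex
% Kinetic-impactor momentum transfer, back of the envelope.
%   m    : spacecraft mass (about 600 kg)
%   v    : impact speed (about 22,000 km/h, roughly 6.1 km/s)
%   M    : mass of Dimorphos (not stated in the article; left symbolic)
%   beta : momentum-enhancement factor from ejecta, >= 1, unknown in advance
\[
  \Delta v \;\approx\; \beta\,\frac{m\,v}{M}
\]
% Assuming a roughly circular orbit and a tangential impulse, the fractional
% change in Dimorphos's orbital period around Didymos follows to first order as
\[
  \frac{\Delta T}{T} \;\approx\; -\,3\,\frac{\Delta v}{v_{\mathrm{orb}}}
\]
```

That period change, rather than the velocity change itself, is what telescopes can most readily measure after the impact.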

The DART spacecraft, a 1-meter cube with two long solar panels, is elegantly simple, equipped with a telescope called DRACO, hydrazine maneuvering thrusters, a xenon-fueled ion engine and a navigation system called SMART Nav. It was launched by a SpaceX rocket in November. About 4 hours and 90,000 km before the hoped-for impact, SMART Nav will take over control of the spacecraft, using optical images from the telescope. Didymos, the larger object, should be a point of light by then; Dimorphos, the intended target, will probably not appear as more than one pixel until about 50 minutes before impact. DART will send one image per second back to Earth, but the spacecraft is autonomous; signals from the ground, 38 light-seconds away, would be useless for steering as the ship races in.

A golden cubesat with a bright light and lines: the DART spacecraft, just separated from its SpaceX Falcon 9 launch vehicle, 55 minutes after liftoff from Vandenberg Space Force Base, in California, 24 November 2021. In this image from the rocket, the spacecraft had not yet unfurled its solar panels. Credit: NASA

What’s more, nobody knows the shape or consistency of little Dimorphos. Is it a solid boulder or a loose cluster of rubble? Is it smooth or craggy, round or elongated? “We’re trying to hit the center,” says Evan Smith, the deputy mission systems engineer at the Johns Hopkins Applied Physics Laboratory, which is running DART. “We don’t want to overcorrect for some mountain or crater on one side that’s throwing an odd shadow or something.”

So on final approach, DART will cover 800 km without any steering. Thruster firings could blur the last images of Dimorphos’s surface, which scientists want to study. Impact should be imaged from about 50 km away by an Italian-made minisatellite, called LICIACube, which DART released two weeks ago.

“In the minutes following impact, I know everybody is going to be high-fiving on the engineering side,” said Tom Statler, DART’s program scientist at NASA, “but I’m going to be imagining all the cool stuff that is actually going on on the asteroid, with a crater being dug and ejecta being blasted off.”

There is, of course, a possibility that DART will miss, in which case there should be enough fuel on board to allow engineers to go after a backup target. But an advantage of the Didymos-Dimorphos pair is that it should help in calculating how much effect the impact had. Telescopes on Earth (plus the Hubble and Webb space telescopes) may struggle to measure infinitesimal changes in the orbit of Dimorphos around the sun; it should be easier to see how much its orbit around Didymos is affected. The simplest measurement may be of the changing brightness of the double asteroid, as Dimorphos moves in front of or behind its partner, perhaps more quickly or slowly than it did before impact.

“We are moving an asteroid,” said Statler. “We are changing the motion of a natural celestial body in space. Humanity’s never done that before.”


Match ID: 115 Score: 1.43 source: spectrum.ieee.org age: 65 days
qualifiers: 1.43 nuclear

Filter efficiency 85.071 (116 matches/777 results)

ABOUT THE PROJECT

RSS Rabbit links users to publicly available RSS entries.
Vet every link before clicking! The creators accept no responsibility for the contents of these entries.

Relevant

Fresh

Convenient

Agile

CONTACT

We're not prepared to take user feedback yet. Check back soon!

rssRabbit quadric