RSS Rabbit
News that matters, fast.
Good luck, have news.
Happy scrolling!


Date/Time of Last Update: Wed Nov 30 03:00:31 2022 UTC




********** WORLD AFFAIRS **********



Australia politics live: national anti-corruption commission legislation passes unanimously; parliament censures Morrison
Wed, 30 Nov 2022 02:36:50 GMT

The government has passed a key election promise. Follow all the day’s news

Milton Dick also makes these recommendations for things he believes may improve the parliament:

Offensive words or reflections on Members

Standing Orders 88 to 90, for example, provide that Members should not use offensive words or cast adverse reflections on Members, but they do not refer specifically to language or behaviour that is sexist or otherwise exclusionary or discriminatory. Revisions to these Standing Orders, so that they explicitly include that this type of conduct is highly disorderly, would be advantageous to the Chair in ruling on such matters.

Education and procedural support

Sanctions against disorder

Under Standing Order 94(a), the Speaker may direct a Member to leave the Chamber for one hour if the Member’s conduct is considered disorderly. At times, this direction to leave can be advantageous to a Member or be worn as a ‘badge of honour’.

If a Member’s conduct is grossly disorderly, the Speaker can choose to name the Member in accordance with Standing Order 94(b), but in practice this option is not often used and not used for ordinary offences. It would assist the Speaker to have additional options to sanction a Member for disorderly conduct. For example, choices for increased penalties of time, and/or the introduction of cascading penalties of time for continued disorder, would be a disincentive to Members to be ejected.

Continue reading...
Match ID: 0 Score: 50.00 source: www.theguardian.com age: 0 days
qualifiers: 35.00 sanctions, 15.00 energy

Macron to promote nuclear energy in U.S., as industry faces crisis in France
Tue, 29 Nov 2022 12:12:09 EST
As Europe turned away from Russian energy, the French nuclear industry missed its chance to demonstrate the promise President Emmanuel Macron has talked about.
Match ID: 1 Score: 45.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 russia, 15.00 energy, 10.00 nuclear

In a Wisconsin Trump County, and Across the U.S., Progressive Health Care Initiatives Coasted Through
Tue, 29 Nov 2022 20:01:25 +0000

Initiatives to expand health care access and reduce costs won big on election night, rattling for-profit industry lobbyists.

The post In a Wisconsin Trump County, and Across the U.S., Progressive Health Care Initiatives Coasted Through appeared first on The Intercept.


Match ID: 2 Score: 40.00 source: theintercept.com age: 0 days
qualifiers: 25.00 trump, 15.00 trump

Philly’s Reform Prosecutor Reacts to His Impeachment
Tue, 29 Nov 2022 11:00:22 +0000

Pennsylvania lawmakers accused Philadelphia District Attorney Larry Krasner of causing a crime “crisis.”

The post Philly’s Reform Prosecutor Reacts to His Impeachment appeared first on The Intercept.


Match ID: 3 Score: 40.00 source: theintercept.com age: 0 days
qualifiers: 25.00 trump, 10.00 nuclear, 5.00 pompeo

Uneasy calm grips Ukraine as West prepares winter aid
Tue, 29 Nov 2022 17:22:34 EST
An uneasy calm is hanging over Kyiv as residents of the Ukrainian capital prepare for Russian missile attacks aiming to take out more energy infrastructure as winter approaches
Match ID: 4 Score: 35.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 russia, 15.00 energy

Ukraine live briefing: U.S. announces aid for Ukraine’s battered energy infrastructure
Tue, 29 Nov 2022 17:10:06 EST
The United States is expected to announce steps to help Ukraine withstand Russian attacks on its energy infrastructure, as NATO diplomats gather in Romania.
Match ID: 5 Score: 35.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 russia, 15.00 energy

U.S. unveils plan to rebuild Ukraine energy grid after Russian assault
Tue, 29 Nov 2022 16:16:50 EST
Secretary of State Antony Blinken said the United States will put $53 million toward helping Ukraine procure transformers, circuit breakers and other hardware.
Match ID: 6 Score: 35.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 russia, 15.00 energy

Germany agrees 15-year liquid gas supply deal with Qatar
Tue, 29 Nov 2022 18:55:39 GMT

Racing to wean itself off Russian gas supplies, Germany is set to buy 2m tonnes of liquid gas from Gulf state

German firms have signed a 15-year deal to buy 2m tonnes of liquid gas from Qatar, sending out mixed signals over the priority Germany places on human rights in the Gulf and its commitment to a carbon neutral energy supply.

The deal was announced by state-owned Qatar Energy and deliveries will start from 2026. The gas will be sold by Qatar to the US company ConocoPhillips, which will then deliver it to the LNG terminal in Brunsbüttel, Qatar’s energy minister said in the capital, Doha.

Continue reading...
Match ID: 7 Score: 35.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia, 15.00 energy

Ukraine pleads for help to fix energy grid hit by Russian bomb raids
Tue, 29 Nov 2022 17:10:39 GMT

European firms urged to donate spare parts after sustained Russian strikes on power plants and substations

Ukraine’s supplies of spare parts for its battered electricity grid are running out amid sustained Russian bombing, and European companies are being asked to urgently donate surplus kit to help the country get through the winter.

There were power cuts lasting 48 hours or more across the country last week after a fresh wave of Russian missile attacks, the latest step in a wave of bombings targeting power plants and substations which route energy around the network.

Continue reading...
Match ID: 8 Score: 35.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia, 15.00 energy

Onset of winter in Ukraine – a photo essay
Tue, 29 Nov 2022 08:00:22 GMT

After Russian attacks on energy infrastructure, this is expected to be the hardest winter in Ukraine since the second world war

The defining image of the current state of the war in Ukraine – and at the same time a metaphor for the entire country – is the snow-covered rubble of a residential building split in two by a Russian bomb in the town of Borodianka.

Some people who lived in that building have died as a result of the conflict. Others have managed to escape, joining the great exodus of refugees toward Europe. Some are now living in a dormitory on the outskirts of the town, struggling, like millions of other Ukrainians, with the nightmare of what is expected to be the hardest winter in the country’s history since the second world war.

Top left: residents of Borodianka disembark from a bus in a street heavily bombed by Russia. Top right: the rubble of a house in Borodianka. Above: an elderly woman walks by the snow-covered rubble of a residential building in Borodianka

Continue reading...
Match ID: 9 Score: 35.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia, 15.00 energy

Lobbyist for Saudi Alfalfa Company Desiccating Arizona Was Elected to Maricopa County Board of Supervisors
Mon, 28 Nov 2022 17:24:10 +0000

Thomas Galvin lobbied on behalf of a Saudi company soaking up Arizona’s groundwater. He is now mediating an ongoing water dispute in neighboring Maricopa County.

The post Lobbyist for Saudi Alfalfa Company Desiccating Arizona Was Elected to Maricopa County Board of Supervisors appeared first on The Intercept.


Match ID: 10 Score: 35.00 source: theintercept.com age: 1 day
qualifiers: 20.00 russia, 15.00 energy

Oath Keepers founder Stewart Rhodes found guilty of seditious conspiracy
Wed, 30 Nov 2022 00:26:42 GMT

Jury convicts leader of rightwing group which supported Trump’s attempt to overturn 2020 election

Stewart Rhodes, the founder of the rightwing Oath Keepers militia, has been found guilty of seditious conspiracy, a charge arising from the attack on the US Capitol by supporters of Donald Trump.

Rhodes and co-defendant Kelly Meggs are the first people in nearly three decades to be found guilty of the rarely used civil war-era charge at trial. The trial was the biggest test yet for the US justice department in its efforts to hold accountable those responsible for the attack that shook the foundations of US democracy.

Continue reading...
Match ID: 11 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 trump

White Nationalists Give Mar-a-Lago Restaurant Disappointing One-Star Reviews on Tripadvisor
Tue, 29 Nov 2022 23:30:07 +0000
Neo-Nazis complained that the restaurant had “slow service,” “drab décor,” and “too many ethnic dishes.”
Match ID: 12 Score: 25.00 source: www.newyorker.com age: 0 days
qualifiers: 25.00 trump

Republican leaders rebuke Trump over dinner with white supremacist
Tue, 29 Nov 2022 23:28:44 GMT

Mitch McConnell and Kevin McCarthy break silence over meeting and say no room in party for antisemitism or white supremacy

The top two Republicans in Congress have broken their silence about Donald Trump’s dinner last week with the rightwing extremist Nick Fuentes, saying the Republican party has no place for antisemitism or white supremacy.

The Senate Republican leader, Mitch McConnell, and Kevin McCarthy, who may become House speaker in January, had not commented previously on the 22 November meeting.

Continue reading...
Match ID: 13 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 trump

McConnell: Anyone meeting with antisemites is ‘unlikely to ever be elected president’
Tue, 29 Nov 2022 17:24:38 EST
The top two Republicans in Congress condemned former president Donald Trump's dinner with Ye and Nick Fuentes, who have both espoused antisemitic views.
Match ID: 14 Score: 25.00 source: www.washingtonpost.com age: 0 days
qualifiers: 25.00 trump

Trump continues to be plagued by legal woes
Tue, 29 Nov 2022 17:15:13 EST
For Donald Trump, it’s one legal battle after another. Let’s check in on the status of several key investigations involving the former president as he mounts his 2024 campaign to return to the White House.
Match ID: 15 Score: 25.00 source: www.washingtonpost.com age: 0 days
qualifiers: 25.00 trump

Nick who? Fox News has barely mentioned the Nick Fuentes-Trump dinner
Tue, 29 Nov 2022 16:53:51 EST
This is how it tends to work with the right-leaning network.
Match ID: 16 Score: 25.00 source: www.washingtonpost.com age: 0 days
qualifiers: 25.00 trump

McCarthy’s brazen revisionism on the GOP and Nick Fuentes
Tue, 29 Nov 2022 14:54:37 EST
Rep. Kevin McCarthy claimed both Donald Trump and Rep. Marjorie Taylor Greene (R-Ga.) have denounced white nationalist Nick Fuentes. In reality, neither has.
Match ID: 17 Score: 25.00 source: www.washingtonpost.com age: 0 days
qualifiers: 25.00 trump

The Future of the Transistor Is Our Future
Tue, 29 Nov 2022 17:45:32 +0000


This is a guest post in recognition of the 75th anniversary of the invention of the transistor. It is adapted from an essay in the July 2022 IEEE Electron Device Society Newsletter. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

On the 75th anniversary of the invention of the transistor, a device to which I have devoted my entire career, I’d like to answer two questions: Does the world need better transistors? And if so, what will they be like?

I would argue that, yes, we are going to need new transistors, and I think we have some hints today of what they will be like. Whether we’ll have the will and economic ability to make them is the question.

I believe the transistor is and will remain key to grappling with the impacts of global warming. With its potential for societal, economic, and personal upheaval, climate change calls for tools that give us humans orders-of-magnitude more capability.

Semiconductors can raise the abilities of humanity like no other technology. Almost by definition, all technologies increase human abilities. But for most of them, natural resource and energy constraints make orders-of-magnitude improvements questionable. Transistor-enabled technology is a unique exception for the following reasons.

  1. As transistors improve, they enable new abilities such as computing and high-speed communication, the Internet, smartphones, memory and storage, robotics, artificial intelligence, and other things no one has thought of yet.
  2. These abilities have wide applications, and they transform all technologies, industries, and sciences.
    a. Semiconductor technology is not nearly as limited in growth by its material and energy usage as other technologies. ICs use relatively small amounts of materials. As a result, they’re being made smaller, and the fewer materials they use, the faster, more energy efficient, and more capable they become.
  3. Theoretically, the energy required for information processing can still be reduced to less than one-thousandth of what is required today (a rough estimate below illustrates the headroom). Although we do not yet know exactly how to approach such theoretical efficiency, we know that increasing energy efficiency a thousandfold would not violate physical laws. In contrast, the energy efficiencies of most other technologies, such as motors and lighting, are already at 30 to 80 percent of their theoretical limits.
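
A rough back-of-the-envelope comparison (my own illustration, using a representative present-day switching energy rather than a figure from this essay) shows why a thousandfold reduction stays comfortably inside physical limits:

Landauer bound at T = 300 K: E_min = kT·ln 2 ≈ (1.38 × 10^-23 J/K)(300 K)(0.693) ≈ 2.9 × 10^-21 J per bit.
Representative CMOS logic operation today: on the order of 10^-15 J (about a femtojoule).
Headroom: 10^-15 J / (2.9 × 10^-21 J) ≈ 3 × 10^5.

Cutting switching energy by a factor of 1,000 would therefore still leave devices a couple of orders of magnitude above the thermodynamic floor.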

Transistors: past, present, and future

How we’ll continue to improve transistor technology is relatively clear in the short term, but it gets murkier the farther out you go from today. In the near term, you can glimpse the transistor’s future by looking at its recent past.

The basic planar (2D) MOSFET structure remained unchanged from 1960 until around 2010, when it became impossible to further increase transistor density and decrease the device’s power consumption. My lab at the University of California, Berkeley, saw that point coming more than a decade earlier. We reported the invention of the FinFET, the planar transistor’s successor, in 1999. FinFET, the first 3D MOSFET, changed the flat and wide transistor structure to a tall and narrow one. The benefit is better performance in a smaller footprint, much like the benefit of multistory buildings over single-story ones in a crowded city.

The FinFET is also what’s called a thin-body MOSFET, a concept that continues to guide the development of new devices. It arose from the insight that current will not leak through a transistor within several nanometers of the silicon surface because the surface potential there is well controlled by the gate voltage. FinFETs take this thin-body concept to heart. The device’s body is the vertical silicon fin, which is covered by oxide insulator and gate metal, leaving no silicon outside the range of strong gate control. FinFETs reduced leakage current by orders of magnitude and lowered transistor operating voltage. They also pointed toward the path for further improvement: reducing the body thickness even more.

The fin of the FinFET has become thinner and taller with each new technology node. But this progress has now become too difficult to maintain. So industry is adopting a new 3D thin-body CMOS structure, called gate-all-around (GAA). Here, a stack of ribbons of semiconductor makes up the thin body.

Each evolution of the MOSFET structure has been aimed at producing better control over charge in the silicon by the gate [pink]. Dielectric [yellow] prevents charge from moving from the gate into the silicon body [blue].

The 3D thin-body trend will continue from these 3D transistors to 3D-stacked transistors, 3D monolithic circuits, and multichip packaging. In some cases, this 3D trend has already reached great heights. For instance, the regularity of the charge-trap memory-transistor array allowed NAND flash memory to be the first IC to transition from 2D circuits to 3D circuits. Since the first report of 3D NAND by Toshiba in 2007, the number of stacked layers has grown from 4 to beyond 200.

Monolithic 3D logic ICs will likely start modestly, with stacking the two transistors of a CMOS inverter to reduce all logic gates’ footprints [see “3D-Stacked CMOS Takes Moore’s Law to New Heights”]. But the number of stacks may grow. Other paths to 3D ICs may employ the transfer or deposition of additional layers of semiconductor films, such as silicon, silicon germanium, or indium gallium arsenide onto a silicon wafer.

The thin-body trend might meet its ultimate endpoint in 2D semiconductors, whose thickness is measured in atoms. Molybdenum disulfide molecules, for example, are both naturally thin and relatively large, forming a 2D semiconductor that may be no more than three atoms thick yet have very good semiconductor properties. In 2016, engineers in California and Texas used a film of the 2D-semiconductor molecule molybdenum disulfide and a carbon nanotube to demonstrate a MOSFET with a critical dimension: a gate length just 1 nanometer across. Even with a gate as short as 1 nm, the transistor leakage current was only 10 nanoamperes per millimeter, comparable with today’s best production transistor.

“The progress of transistor technology has not been even or smooth.”

One can imagine that in the distant future, the entire transistor may be prefabricated as a single molecule. These prefabricated building blocks might be brought to their precise locations in an IC through a process called directed-self-assembly (DSA). To understand DSA, it may be helpful to recall that a COVID virus uses its spikes to find and chemically dock itself onto an exact spot at the surface of particular human cells. In DSA, the docking spots, the “spikes,” and the transistor cargo are all carefully designed and manufactured. The initial docking spots may be created with lithography on a substrate, but additional docking spots may be brought in as cargo in subsequent steps. Some of the cargo may be removed by heat or other means if they are needed only during the fabrication process but not in the final product.

Besides making transistors smaller, we’ll have to keep reducing their power consumption. Here we could see an order-of-magnitude reduction through the use of what are called negative-capacitance field-effect transistors (NCFET). These require the insertion of a nanometer-thin layer of ferroelectric material, such as hafnium zirconium oxide, in the MOSFET’s gate stack. Because the ferroelectric contains its own internal electric field, it takes less energy to switch the device on or off. An additional advantage of the thin ferroelectric is the possible use of the ferroelectric’s capacity to store a bit as the state of its electric field, thereby integrating memory and computing in the same device.
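
One textbook way to frame that energy argument (my framing, not the author’s) is through the subthreshold swing, the gate voltage needed to change the drain current tenfold:

SS = ln(10) · (kT/q) · (1 + C_s/C_ins) ≈ 60 mV/decade × (1 + C_s/C_ins) at room temperature,

where C_s is the capacitance of the semiconductor body and C_ins that of the gate stack. In a conventional MOSFET the factor in parentheses is greater than 1, so SS can never fall below about 60 mV/decade. If the ferroelectric layer acts as an effective negative capacitance in series with the rest of the gate stack, that factor can drop below 1, the swing can dip under 60 mV/decade, and the supply voltage needed for a given on/off current ratio shrinks. Because dynamic switching energy scales roughly as CV², even a modest voltage reduction compounds into a sizable energy saving.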

The author [left] received the U.S. National Medal of Technology and Innovation from President Barack Obama [right] in 2016. Kevin Dietsch/UPI/Alamy

To some degree the devices I’ve described arose out of existing trends. But future transistors may have very different materials, structures, and operating mechanisms from those of today’s transistor. For example, the nanoelectromechanical switch is a return to the mechanical relays of decades past rather than an extension of the transistor. Rather than relying on the physics of semiconductors, it uses only metals, dielectrics, and the force between closely spaced conductors with different voltages applied to them.

All these examples were demonstrated experimentally years ago. However, bringing them to production will require much more time and effort than previous breakthroughs in semiconductor technology.

Getting to the future

Will we be able to achieve these feats? Some lessons from the past indicate that we could.

The first lesson is that the progress of transistor technology has not been even or smooth. Around 1980, the rising power consumption per chip reached a painful level. The adoption of CMOS, replacing NMOS and bipolar technologies—and later, the gradual reduction of operation voltage from 5 volts to 1—gave the industry 30 years of more or less straightforward progress. But again, power became an issue. Between 2000 and 2010, the heat generated per square centimeter of IC was projected by thoughtful researchers to soon reach that of the nuclear-reactor core. The adoption of 3D thin-body FinFET and multicore processor architectures averted the crisis and ushered in another period of relatively smooth progress.

The history of transistor technology may be described as climbing one mountain after another. Only when we got to the top of one were we able to see the vista beyond and map a route to climb the next taller and steeper mountain.

The second lesson is that the core strength of the semiconductor industry—nanofabrication—is formidable. History proves that, given sufficient time and economic incentives, the industry has been able to turn any idea into reality, as long as that idea does not violate scientific laws.

But will the industry have sufficient time and economic incentives to continue climbing taller and steeper mountains and keep raising humanity’s abilities?

It’s a fair question. Even as the fab industry’s resources grow, the mountains of technology development grow even faster. A time may come when no one fab company can reach the top of the mountain to see the path ahead. What happens then?

The revenue of all semiconductor fabs (both independent and those, like Intel, that are integrated companies) is about one-third of the semiconductor industry revenue. But fabs make up just 2 percent of the combined revenues of the IT, telecommunications, and consumer-electronics industries that semiconductor technology enables. Yet the fab industry bears most of the growing burden of discovering, producing, and marketing new transistors and nanofabrication technologies. That needs to change.

For the industry to survive, the relatively meager resources of the fab industry must be prioritized in favor of fab building and shareholder needs over scientific exploration. While the fab industry is lengthening its research time horizon, it needs others to take on the burden too. Humanity’s long-term problem-solving abilities deserve targeted public support. The industry needs the help of very-long-term exploratory research, publicly funded, in a Bell Labs–like setting or by university researchers with career-long timelines and wider and deeper knowledge in physics, chemistry, biology, and algorithms than corporate research currently allows. This way, humanity will continue to find new transistors and gain the abilities it will need to face the challenges in the centuries ahead.


Match ID: 18 Score: 25.00 source: spectrum.ieee.org age: 0 days
qualifiers: 15.00 energy, 10.00 nuclear

Philly DA Larry Krasner: In Midterms, Democrats Went “Republican-Lite” on Crime
Tue, 29 Nov 2022 16:10:49 +0000

The embattled progressive district attorney said election victories show how Democrats can win big — by leaning into criminal justice reforms.

The post Philly DA Larry Krasner: In Midterms, Democrats Went “Republican-Lite” on Crime appeared first on The Intercept.


Match ID: 19 Score: 25.00 source: theintercept.com age: 0 days
qualifiers: 25.00 trump

Republican officials turn to election rejection
Tue, 29 Nov 2022 10:24:39 EST
Donald Trump's push to reject election results catches on at the county level.
Match ID: 20 Score: 25.00 source: www.washingtonpost.com age: 0 days
qualifiers: 25.00 trump

Pence, other Republicans issue rare rebuke of Trump over dinner with Fuentes and Ye
Mon, 28 Nov 2022 21:56:12 EST
Trump's former vice president is the highest-profile Republican to criticize the meal with someone known to espouse antisemitic views.
Match ID: 21 Score: 25.00 source: www.washingtonpost.com age: 1 day
qualifiers: 25.00 trump

Trump Says He Did Not Know Identity of Dinner Guest, Owing to White Hood
Mon, 28 Nov 2022 20:34:17 +0000
“I didn’t think it was very smart to be wearing white to dinner with me,” the former President said. “When I eat, a lot of ketchup goes flying.”
Match ID: 22 Score: 25.00 source: www.newyorker.com age: 1 day
qualifiers: 25.00 trump

Jill Biden’s holiday decorations strike a homier note than Melania Trump’s
Mon, 28 Nov 2022 15:31:00 EST
Homespun touches, including students' self-portraits, handwritten recipe cards and yes, the Bidens' pets, play a starring role in this year's decorations.
Match ID: 23 Score: 25.00 source: www.washingtonpost.com age: 1 day
qualifiers: 25.00 trump

A Special Prosecutor Found Kevin Johnson’s Case Was Tainted by Racism. Missouri Is About to Kill Him Anyway.
Sun, 27 Nov 2022 15:15:12 +0000

Kevin Johnson is facing execution for killing a cop when he was 19. A special prosecutor says his sentence should be vacated.

The post A Special Prosecutor Found Kevin Johnson’s Case Was Tainted by Racism. Missouri Is About to Kill Him Anyway. appeared first on The Intercept.


Match ID: 24 Score: 25.00 source: theintercept.com age: 2 days
qualifiers: 25.00 trump

Bodies-in-suitcases suspect appears in New Zealand court
Tue, 29 Nov 2022 21:25:20 EST
A woman who was extradited from South Korea this week after the bodies of her two children were found in abandoned suitcases has made her first court appearance in New Zealand
Match ID: 25 Score: 20.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 korea

Finland says it must ensure Ukraine wins war against Russia
Tue, 29 Nov 2022 20:16:18 EST
Finland’s leader says it must give more weapons and support to Ukraine to ensure it wins its war against Russia
Match ID: 26 Score: 20.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 russia

Russia-Ukraine war at a glance: what we know on day 280 of the invasion
Wed, 30 Nov 2022 00:58:55 GMT

Nato doubles down on promise to one day include Ukraine in alliance; supplies to restore Ukraine’s battered electricity grid running low

Continue reading...
Match ID: 27 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia

Ukraine war: The surrender hotline for Russian soldiers
Wed, 30 Nov 2022 00:06:16 GMT
The "I Want To Live" project says it is getting up to 100 enquiries a day from invading troops.
Match ID: 28 Score: 20.00 source: www.bbc.co.uk age: 0 days
qualifiers: 20.00 russia

Ukraine war: Nato pledges to provide more weapons and fix power grid
Tue, 29 Nov 2022 20:56:30 GMT
Millions of Ukrainians remain without power and water in freezing temperatures after Russian strikes.
Match ID: 29 Score: 20.00 source: www.bbc.co.uk age: 0 days
qualifiers: 20.00 russia

It took nearly 500 years for researchers to crack Charles V’s secret code
Tue, 29 Nov 2022 20:48:28 +0000
Holy Roman Emperor wrote of being targeted by Italian mercenary, among other topics.
Match ID: 30 Score: 20.00 source: arstechnica.com age: 0 days
qualifiers: 20.00 italy

Ukrainian refugees and hosts petition UK government for housing support
Tue, 29 Nov 2022 18:32:09 GMT

More than 4,500 people sign petition asking for help for those struggling to find independent accommodation

Ukrainian refugees and their hosts have delivered a petition to Downing Street signed by more than 4,500 people calling on the government to provide housing support for those struggling or unable to find independent accommodation.

Krish Kandiah, the founder of Sanctuary Foundation, the charity behind the petition, was accompanied by Ukrainian refugees including nine-year-old Danylo from Kyiv, who knocked on 10 Downing Street to deliver the petition on Tuesday.

Continue reading...
Match ID: 31 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia

Ukraine needs tanks, and the west should supply them. They could finish off Putin and Russia | Frank Ledwidge
Tue, 29 Nov 2022 18:26:01 GMT

In 1941, Churchill said to the US: ‘give us the tools and we will finish the job’. Zelenskiy is saying the same to us – and we should listen

In a 1941 speech on a Royal Navy ship, Winston Churchill directed his final comments to the US: “Give us the tools, and we will finish the job.” After a significant victory in Kherson, and standing at the gates of Crimea facing a Russian army desperately trying to shore up its ramshackle defences, Ukraine has the troops and morale to defend what it has. However, despite some western assistance, the Ukrainians lack the tools – tanks, missiles and aircraft – to retake their land and impose strategic defeat on the Russians. If the west, and especially the US, is serious about helping to protect Ukraine, decisions on stepping up military assistance need to be made now. If Ukraine is to be able to secure its future after victory – assuming that is what the west truly wants – its forces need to begin to transition to Nato-standard equipment.

The US has not yet declared a political or military objective. However, in April the US secretary of defence, Lloyd Austin, said he wanted “to see Russia weakened to the point where it can’t do things like invade Ukraine”. Is it the intention of the United States genuinely to support military efforts to return Ukraine’s internationally recognised borders? Or does it instead suit US and western decision-makers to support a long war with Ukrainian forces used as proxies to bleed out Russia’s armed forces? Obviously, these are not at all the same thing. Decisions need to be made very soon about increasing military support, and those decisions will tell us which objective is being pursued.

Frank Ledwidge is a barrister and former military officer who has served in the Balkans, Iraq and Afghanistan

Continue reading...
Match ID: 32 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia

‘The perfect gateway’: are Broadway audiences ready for a K-pop musical?
Tue, 29 Nov 2022 14:56:51 GMT

The first Broadway show celebrating Korean culture hopes to show there’s ‘more to K-pop than just Gangnam Style’

With her ice blond hair, kaleidoscopic costumes and melismatic high notes, the South Korean solo artist MwE (pronounced mu-WEE) looks the part of a bona fide K-pop idol, the model of a hyper-visible cipher. Fellow girl group RTMIS (pronounced Artemis) boasts similarly convincing stage confidence, while boy band F8 (pronounced Fate) approaches the delivery of English and Korean lyrics and kinetic choreography with enough militaristic precision to draw whoops from a crowd of about 600 at New York’s Circle in the Square theater on a recent Wednesday.

The trio are, on one level, the stable of acts meant to introduce a Korean pop label to American audiences in a one-night only concert debut. They are also, in one of many meta moments, the fictional backbone of KPOP, a new musical introducing the chart-dominating genre to Broadway. The eardrum-shaking show, which opened last week after a long pandemic delay, straddles the line between Technicolor bilingual concert and musical theater, blurring Broadway conventions with arena pop adrenaline; four of the 18 cast members, including Luna as MwE, double as real-life K-pop idols.

Continue reading...
Match ID: 33 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 korea

‘Racist’ interview with Pope Francis causes fury in Russia
Tue, 29 Nov 2022 14:54:53 GMT

Francis says Chechen and Buryat minorities in Russia’s army more cruel in Ukraine than other soldiers

Pope Francis has sparked fury in Russia over an interview in which he suggested that Chechen and Buryat members of its armed forces showed more cruelty in Ukraine than ethnic Russian soldiers.

In an interview with the Catholic magazine America published Monday, the pope said that soldiers from Buryatia, where Buddhism is a major religion, and the Muslim-majority Chechnya republic, were “the cruellest” while fighting in Ukraine.

Continue reading...
Match ID: 34 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 russia

From a bunker, an acting mayor keeps her front-line Ukraine town alive
Tue, 29 Nov 2022 07:08:04 EST
Russia has shelled city hall so incessantly that Svitlana Mandrych had to move her office underground, where she answers pleas for help from desperate residents.
Match ID: 35 Score: 20.00 source: www.washingtonpost.com age: 0 days
qualifiers: 20.00 russia

Ukraine live briefing: G-7 to discuss assault on Ukraine’s power grid; Zelensky warns citizens to brace for more Russian airstrikes
Mon, 28 Nov 2022 21:09:16 EST
In Brussels Monday, European diplomats will try to move forward with U.S.-backed plans to cap the price of Russian oil after talks last week stalled.
Match ID: 36 Score: 20.00 source: www.washingtonpost.com age: 1 day
qualifiers: 20.00 russia

Ukraine war: Chornobaivka airbase, symbol of Russian defeat
Tue, 29 Nov 2022 00:37:33 GMT
Chornobaivka has earned its status as one of the most important battlefields of the war.
Match ID: 37 Score: 20.00 source: www.bbc.co.uk age: 1 day
qualifiers: 20.00 russia

Democratic and Republican Senators Demand Transfer of Gray Eagle Drone to Ukraine
Mon, 28 Nov 2022 16:23:30 +0000

Joe Manchin, Lindsey Graham, and 14 other U.S. senators demand that Biden give Ukraine a top-tier U.S. drone.

The post Democratic and Republican Senators Demand Transfer of Gray Eagle Drone to Ukraine appeared first on The Intercept.


Match ID: 38 Score: 20.00 source: theintercept.com age: 1 day
qualifiers: 20.00 russia

The big picture: Bruno Barbey captures life on the road in 1960s Palermo
Sun, 27 Nov 2022 07:00:14 GMT

The Magnum photographer’s image of a family in Sicily recalls Fellini and Visconti in its romantic depiction of everyday Italian life

Bruno Barbey chanced upon this family defying gravity on their dad’s scooter in Palermo in 1963. The French-Moroccan photographer had been travelling in Italy for a couple of years by then, restless for exactly this kind of image, with its seductive mix of humour and authenticity. Has there ever been a better articulation of contrasting roles in the patriarchal family? Father sitting comfortably in his jacket and cap and smiling for the camera, while behind him his possibly pregnant wife sees trouble ahead, as she and their three kids and their big checked bag compete for precarious discomfort.

Barbey, then 22, had gone to Italy to try to find pictures that captured “a national spirit” as the country sought to rediscover the dolce vita in cities still recovering from war. He travelled in an old VW van and in Palermo in particular he located scenes that might have been choreographed for the working-class heroes of the Italian neorealist films, the self-absorbed dreamers of Fellini and Visconti (The Leopard, the latter’s Hollywood epic set in Sicily was released in the same year). Barbey’s camera with its wide angle lens picked up the detail of vigorous crowd scenes among street children and barflies and religious processions. His book, The Italians, now republished, is a time capsule of that already disappearing black-and-white world of priests and mafiosi and nightclub girls and nuns.

Les Italiens (French edition) by Bruno Barbey is republished by delpire & co

Continue reading...
Match ID: 39 Score: 20.00 source: www.theguardian.com age: 2 days
qualifiers: 20.00 italy

I Experienced Jack Smith's Zeal Firsthand. Will Trump Get the Same Treatment?
Wed, 23 Nov 2022 16:15:01 +0000

His handling of the ex-president will show whether Smith really is an aggressive prosecutor — or just aggressive against the powerless.

The post I Experienced Jack Smith’s Zeal Firsthand. Will Trump Get the Same Treatment? appeared first on The Intercept.


Match ID: 40 Score: 19.29 source: theintercept.com age: 6 days
qualifiers: 10.71 trump, 8.57 russia

Can America’s Aging Leadership Deliver the Future?
Fri, 25 Nov 2022 11:00:00 +0000
The Political Scene’s Washington roundtable discusses whether the United States is a gerontocracy, and what that means for the country’s politics.
Match ID: 41 Score: 17.86 source: www.newyorker.com age: 4 days
qualifiers: 17.86 trump

The State of the Transistor in 3 Charts
Sat, 26 Nov 2022 16:00:01 +0000


The most obvious change in transistor technology in the last 75 years has been just how many we can make. Reducing the size of the device has been a titanic effort and a fantastically successful one, as these charts show. But size isn’t the only feature engineers have been improving.


In 1947, there was only one transistor. According to TechInsights’ forecast, the semiconductor industry is on track to produce almost 2 billion trillion (10^21) devices this year. That’s more transistors than were cumulatively made in all the years prior to 2017. Behind that barely conceivable number is the continued reduction in the price of a transistor, as engineers have learned to integrate more and more of them into the same area of silicon.


Scaling down transistors in the 2D space of the plane of the silicon has been a smashing success: Transistor density in logic circuits has increased more than 600,000-fold since 1971. Reducing transistor size requires using shorter wavelengths of light, such as extreme ultraviolet, and other lithography tricks to shrink the space between transistor gates and between metal interconnects. Going forward, it’s the third dimension, where transistors will be built atop one another, that counts. This trend is more than a decade old in flash memory, but it’s still in the future for logic (see “Taking Moore’s Law to New Heights.”)
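
A quick sanity check on that 600,000-fold figure (my arithmetic, not the article’s):

log2(600,000) ≈ 19.2 doublings, and 51 years / 19.2 ≈ 2.7 years per doubling,

which lines up with the familiar two-to-three-year Moore’s Law cadence.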


Perhaps the crowning achievement of all this effort is the ability to integrate millions, even billions, of transistors into some of the most complex systems on the planet: CPUs. Here’s a look at some of the high points along the way.

What Transistors Have Become


Besides making them tiny and numerous, engineers have devoted their efforts to enhancing the device’s other qualities. Here is a small sampling of what transistors have become in the last 75 years:



Ephemeral:

Researchers in Illinois developed circuits that dissolve in the body using a combination of ultrathin silicon membranes, magnesium conductors, and magnesium oxide insulators. Five minutes in water was enough to turn the first generation to mush. But recently researchers used a more durable version to make temporary cardiac pacemakers that release an anti-inflammatory drug as they disappear.



Fast:

The first transistor was made for radio frequencies, but there are now devices that operate at about a billion times those frequencies. Engineers in South Korea and Japan reported the invention of an indium gallium arsenide high-electron mobility transistor, or HEMT, that reached a maximum frequency of 738 gigahertz. Seeking raw speed, engineers at Northrop Grumman made a HEMT that passed 1 terahertz.




Flat:

Today’s (and yesterday’s) transistors depend on the semiconducting properties of bulk (3D) materials. Tomorrow’s devices might rely on 2D semiconductors, such as molybdenum disulfide and tungsten disulfide. These transistors might be built in the interconnect layers above a processor’s silicon, researchers say. So 2D semiconductors could help lead to 3D processors.



Flexible:

The world is not flat, and neither are the places transistors need to operate. Using indium gallium arsenide, engineers in South Korea recently made high-performance logic transistors on plastic that hardly suffered when bent around a radius of just 4 millimeters. And engineers in Illinois and England have made microcontrollers that are both affordable and bendable.




Invisible:

When you need to hide your computing in plain sight, turn to transparent transistors. Researchers in Fuzhou, China, recently made a see-through analogue of flash memory using organic semiconductor thin-film transistors. And researchers in Japan and Malaysia produced transparent diamond devices capable of handling more than 1,000 volts.



Mnemonic:

NAND flash memory cells can store multiple bits in a single device. Those on the market today store either 3 or 4 bits each. Researchers at Kioxia Corp. built a modified NAND flash cell and dunked it in 77-kelvin liquid nitrogen. A single superchilled transistor could store up to 7 bits of data, or 128 different values.




Talented:

In 2018, engineers in Canada used an algorithm to generate all the possible unique and functional elementary circuits that can be made using just two metal-oxide field-effect transistors. The number of circuits totaled an astounding 582. Increasing the scope to three transistors netted 56,280 circuits, including several amplifiers previously unknown to engineering.



Tough:

Some transistors can take otherworldly punishment. NASA Glenn Research Center built 200-transistor silicon carbide ICs and operated them for 60 days in a chamber that simulates the environment on the surface of Venus—460 °C heat, a planetary-probe-crushing 9.3 megapascals of pressure, and the hellish planet’s corrosive atmosphere.

This article appears in the December 2022 print issue as “The State of the Transistor.”


Match ID: 42 Score: 17.14 source: spectrum.ieee.org age: 3 days
qualifiers: 17.14 korea

Scott Morrison becomes first former Australian prime minister to be censured by parliament
Wed, 30 Nov 2022 02:39:16 GMT

Former PM claims he would have answered truthfully if asked about any of the secret ministries he held and repeated past defence of arrangements

Scott Morrison has been censured by the House of Representatives after offering fresh defences for his failure to disclose extra ministerial appointments and accusing the government of pursuing the “politics of retribution”.

Morrison told the lower house it was “false” to equate his decision to administer colleagues’ departments with appointments as minister, and claimed if he had been asked he “would have responded truthfully about the arrangements”.

Sign up for Guardian Australia’s free morning and afternoon email newsletters for your daily news roundup

Continue reading...
Match ID: 43 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Dip in Australia’s inflation rate in October raises hopes price rises may have peaked
Wed, 30 Nov 2022 02:37:11 GMT

Headline CPI rate eases from 7.3% to 6.9%, lower than economists’ expectations, but fuel prices still rose as full excise rate returned

Australia’s inflation rate eased in October, helped by smaller increases for food, adding to expectations that the rate of price rises may be nearing its peak.

The headline consumer price index for last month was 6.9%, slowing from the 7.3% pace reported for September, the Australian Bureau of Statistics said. Some economists, such as CBA, had predicted the October CPI rate to come in at 7.4%.

Continue reading...
Match ID: 44 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Body of 19-year-old man found in NSW four days after he vanished in flood waters
Wed, 30 Nov 2022 02:12:21 GMT

Death is third this month linked to NSW flooding crisis after Ljubisa ‘Les’ Vugec, 85, and Dianne Smith, 60, died in Eugowra

The body of a young man who went missing while swimming in flood waters in southern New South Wales has been found.

The discovery was made in Balranald about 8.30am on Wednesday as police began a fifth day of searching for signs of the 19-year-old.

Sign up for Guardian Australia’s free morning and afternoon email newsletters for your daily news roundup

Continue reading...
Match ID: 45 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Modern Slavery Is a Global Problem in All Renewable Energy Supply Chains: New Report
2022-11-30T01:47:19+00:00

Match ID: 46 Score: 15.00 source: www.reddit.com age: 0 days
qualifiers: 15.00 energy

Australia to consider tougher nicotine e-cigarette import and labelling laws to tackle teen vaping
Wed, 30 Nov 2022 01:24:33 GMT

Therapeutic Goods Administration to investigate border control laws, as well as tougher regulation of nicotine products and advertising

The federal government will crack down on children accessing e-cigarettes, with the regulator to consider key changes including tightening importation rules and tougher labelling laws.

As rates of teenage vaping soar, Australia’s drugs regulator, the Therapeutic Goods Administration (TGA) will begin public consultation in four areas: changes to importation and border control laws required to stop illegal products entering Australia; pre-market assessments of vapes to create a regulated source of products for pharmacists and doctors to prescribe; labelling, advertising and flavouring of vapes that make them attractive to children; and stronger identification and regulation of nicotine-containing products.

Sign up for Guardian Australia’s free morning and afternoon email newsletters for your daily news roundup

Continue reading...
Match ID: 47 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

‘They’ll swim, they’ll climb trees’: experts hissue warning over snakes on the flood plains
Wed, 30 Nov 2022 00:45:18 GMT

As flood waters surge into South Australia, ecologists say locals should give fleeing snakes a wide berth

Snakes on the plains will seek shelter in back yards and homes as the flood waters from the eastern states surge down the system into South Australia.

The state’s flood plains and relatively flat topography will slow the advance of the water from New South Wales and Victoria, before it is channelled into the River Murray.

Continue reading...
Match ID: 48 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

UK food price inflation hits new high of 12.4%
Wed, 30 Nov 2022 00:01:01 GMT

Rise in cost of essentials will hit poorer households, already struggling with higher energy bills, hardest

UK food price inflation hit a new high of 12.4% in November as the price of basics such as eggs, dairy products and coffee shot up.

Fresh foods led the increase in prices – with inflation rising to 14.3% from 13.3% in October – with rises expected to continue into next year according to the latest data from the British Retail Consortium trade body, which represents most big retailers, and the market research firm NielsenIQ.

Continue reading...
Match ID: 49 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Online shoppers 'held out' for Cyber Week this year, Salesforce says
Tue, 29 Nov 2022 23:59:42 GMT

Online sales during this year's Cyber Week rose 2% year-over-year to a record $281 billion worldwide, Salesforce said on Tuesday, as customers zeroed in on discounts amid decades-high inflation during the holiday shopping period, which runs from Thanksgiving through Cyber Monday. The company, which analyzed shopping data from 1.5 billion consumers, said that after "lackluster" deals earlier in the season, retailers poured on the markdowns more aggressively as Cyber Week began. "The data showed a departure from Cyber Weeks in 2020 and 2021, when online shopping was increasingly spread throughout the month of November," Salesforce said in a release. "In comparison, 2022 data show both shoppers and retailers held out for Cyber Week as retailers worked to limit discounts that increased margin pressures and consumers searched for better deals - contributing to the largest spike in U.S. digital sales growth in the last six weeks." While Salesforce noted weakness in the UK and Europe, which are feeling the pain of an energy crisis, Cyber Week sales in the U.S. rose 9% from 2021 to $68 billion.

Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.


Match ID: 50 Score: 15.00 source: www.marketwatch.com age: 0 days
qualifiers: 15.00 energy

Thurrock council admits disastrous investments caused £500m deficit
Tue, 29 Nov 2022 20:18:58 GMT

Tory-led Essex authority is on brink of bankruptcy and has appealed to government for emergency bailout

A Tory-led council has admitted a series of disastrous investments in risky commercial projects caused it to run up an unprecedented deficit of nearly £500m and brought it to the brink of bankruptcy.

The staggering scale of the catastrophe at Thurrock council in Essex – one of the biggest ever financial disasters in local government – is contained in an internal report made to the council’s cabinet, which reveals it has lost £275m on investments it made in solar energy and other businesses, and has set aside a further £130m this year to pay back investment debts.

Continue reading...
Match ID: 51 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Octopus takeover of Bulb faces delay after rivals seek judicial review
Tue, 29 Nov 2022 18:34:23 GMT

Three companies complain over lack of transparency around deal which could be biggest government bailout since RBS and Lloyds

The takeover of collapsed bailed-out energy supplier Bulb by rival Octopus faces further delays after three rival companies launched judicial review proceedings, arguing that there are “significant concerns” over a possible £1bn government-funded “dowry”.

Octopus agreed to buy Bulb out of a special government-handled administration last month after a year-long process which could cost the taxpayer up to £6.5bn, according to a government estimate.

Continue reading...
Match ID: 52 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Waiting for Superbatteries
Tue, 29 Nov 2022 16:00:00 +0000


If grain must be dragged to market on an oxcart, how far can it go before the oxen eat up all the cargo? This, in brief, is the problem faced by any transportation system in which the vehicle must carry its own fuel. The key value is the density of energy, expressed with respect to either mass or volume.

The era of large steam-powered ocean liners began during the latter half of the 19th century, when wood was still the world’s dominant fuel. But no liners fired their boilers with wood: There would have been too little space left for passengers and cargo. Soft wood, such as spruce or pine, packs less than 10 megajoules per liter, whereas bituminous coal has 2.5 times as much energy by volume and at least twice as much by mass. By comparison, gasoline has 34 MJ/L and diesel about 38 MJ/L.


But in a world that aspires to leave behind all fuels (except hydrogen or maybe ammonia) and to electrify everything, the preferred measure of stored energy density is watt-hours per liter. By this metric, air-dried wood contains about 3,500 Wh/L, good steam coal around 6,500, gasoline 9,600, aviation kerosene 10,300, and natural gas (methane) merely 9.7—less than 1/1,000 the density of kerosene.
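
The two units are easy to reconcile (a worked conversion using the round numbers above):

1 MJ = 10^6 J and 1 Wh = 3,600 J, so 1 MJ/L ≈ 278 Wh/L.
Gasoline: 34 MJ/L × 278 Wh/MJ ≈ 9,400 Wh/L, essentially the 9,600 Wh/L cited here once rounding in the source figures is allowed for.
Diesel: 38 MJ/L × 278 Wh/MJ ≈ 10,600 Wh/L.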


How do batteries compare with the fuels they are to displace? The first practical battery, Gaston Planté’s lead-acid cell introduced in 1859, has gradually improved from less than 60 Wh/L to about 90 Wh/L. The nickel-cadmium battery, invented by Waldemar Jungner in 1899, now frequently stores more than 150 Wh/L, and today’s best mass-manufactured performers are lithium-ion batteries, the first commercial versions of which came out in 1991. The best energy density now commercially available in very large quantities for lithium-ion batteries is at 750 Wh/L, which is widely seen in electric cars. In 2020 Panasonic promised it would reach about 850 Wh/L by 2025 (and do so without the expensive cobalt). Eventually, the company aims to reach a 1,000-Wh/L product.

Over the past 50 years, the highest energy density of mass-produced batteries has roughly quintupled

Claims of new energy-density records for lithium-ion batteries appear regularly. In March 2021, Sion Power announced an 810-Wh/L pouch cell; three months later NanoGraf announced a cylindrical cell with 800 Wh/L. Earlier claims spoke of even loftier energy densities—QuantumScape mentioned a 1,000-Wh/L cell in a December 2020 claim, and Sion Power of a 1,400-Wh/L cell as far back as 2018. But Sion’s cells came from a pilot production line, not from a routine mass-scale operation, and QuantumScape’s claim was based on laboratory tests of single-layer cells, not on any commercially available multilayer products.

The real-world leader seems to be Amprius Technologies of Fremont, Calif.: In February 2022, the company announced the first delivery of batteries rated as high as 1,150 Wh/L, to a maker of a new generation of high-altitude uncrewed aircraft, to be used to relay signals. This is obviously a niche market, orders of magnitude smaller than the potential market for electric vehicles, but it is a welcome confirmation of continuous density gains.

There is a long way to go before batteries rival the energy density of liquid fuels. Over the past 50 years, the highest energy density of mass-produced batteries has roughly quintupled, from less than 150 to more than 700 Wh/L. But even if that trend continues for the next 50 years, we would still see top densities of about 3,500 Wh/L, no more than a third that of kerosene. The wait for superbatteries ready to power intercontinental flight may not be over by even 2070.
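
To spell out that extrapolation (my arithmetic): quintupling over 50 years is a compound growth rate of 5^(1/50) ≈ 1.033, or roughly 3.3 percent a year.

Another 50 years at that pace gives 700 Wh/L × 5 ≈ 3,500 Wh/L, still only about a third of aviation kerosene’s 10,300 Wh/L.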

This article appears in the December 2022 print issue.


Match ID: 53 Score: 15.00 source: spectrum.ieee.org age: 0 days
qualifiers: 15.00 energy

Big polluters given almost €100bn in free carbon permits by EU
Tue, 29 Nov 2022 13:00:48 GMT

Free allowances ‘in direct contradiction with the polluter pays principle’, WWF report says

Big polluting industries have been given almost €100bn (£86bn) in free carbon permits by the EU in the last nine years, according to an analysis by the WWF. The free allowances are “in direct contradiction with the polluter pays principle”, the group said.

Free pollution permits worth €98.5bn were given to energy-intensive sectors including steel, cement, chemicals and aviation from 2013-21. This is more than the €88.5bn that the EU’s emissions trading scheme (ETS) charged polluters, mostly coal and gas power stations, for their CO2 emissions.

Continue reading...
Match ID: 54 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 energy

Ukraine war: Moldova braces for blackouts
Tue, 29 Nov 2022 12:54:48 GMT
Residents say they are already struggling to cope with soaring energy prices and inflation.
Match ID: 55 Score: 15.00 source: www.bbc.co.uk age: 0 days
qualifiers: 15.00 energy

Energy bills: 'We cut back but are paying three times as much'
Tue, 29 Nov 2022 08:24:31 GMT
Many households are seeing their energy bills going up despite the power they use going down.
Match ID: 56 Score: 15.00 source: www.bbc.co.uk age: 0 days
qualifiers: 15.00 energy

Great Barrier Reef flagged as ‘in danger’ world heritage site. What does this mean?
Tue, 29 Nov 2022 02:48:59 GMT

Scientists have delivered clear advice for the reef. So what did their report say, how might the government respond and what happens next?

The status of the Great Barrier Reef as a globally significant and intact world heritage site is under the spotlight again.

Two UN-backed scientists have today recommended, after a 10-day inspection earlier this year, the world’s biggest coral reef system should be placed on a list of world heritage sites “in danger”.

Sign up for Guardian Australia’s free morning and afternoon email newsletters for your daily news roundup

Continue reading...
Match ID: 57 Score: 15.00 source: www.theguardian.com age: 1 day
qualifiers: 15.00 energy

Calls for UK ban on pre-payment meter installations made under court warrants
Tue, 29 Nov 2022 00:01:11 GMT

End Fuel Poverty Coalition fears energy suppliers are using warrants to disconnect poorest ‘by the back door’

Campaigners have called for an immediate ban on pre-payment meter (PPM) installations made under court warrants because of fears that energy suppliers are using them to disconnect the poorest, most indebted customers “by the back door”.

Energy firms’ licence conditions protect many vulnerable people from formal disconnection over the winter, but the End Fuel Poverty Coalition said transferring households on to PPMs, which require regular top-ups and charge for energy at a higher rate, often prompted people in debt to “self-disconnect”.

Continue reading...
Match ID: 58 Score: 15.00 source: www.theguardian.com age: 1 day
qualifiers: 15.00 energy

The EV Transition Explained: Can the Grid Cope?
Mon, 28 Nov 2022 21:18:46 +0000


There have been vigorous debates pro and con in the United States and elsewhere over whether electric grids can support EVs at scale. The answer is a nuanced “perhaps.” It depends on several factors, including the speed of grid-component modernization, the volume of EV sales, where they occur and when, what kinds of EV charging are being done and when, regulator and political decisions, and critically, economics.

The city of Palo Alto, Calif., is a microcosm of many of the issues involved. Palo Alto boasts the highest adoption rate of EVs in the United States: In 2020, one in six of the town’s 25,000 households owned an EV. Of the 52,000 registered vehicles in the city, 4,500 are EVs, and on workdays commuters drive another 3,000 to 5,000 EVs into the city. Residents can access about 1,000 charging ports spread over 277 public charging stations, with another 3,500 or so charging ports located at residences.

Palo Alto’s government has set a very aggressive Sustainability and Climate Action Plan with a goal of reducing its greenhouse gas emissions to 80 percent below the 1990 level by the year 2030. In comparison, the state’s goal is to achieve that reduction by 2050. To realize this reduction, 80 percent of the vehicles registered in (and commuting into) the city, around 100,000 in total, must be EVs within the next eight years. The number of charging ports will need to grow to an estimated 6,000 to 12,000 public ports (some 300 of them DC fast chargers) and 18,000 to 26,000 residential ports, most of them L2-type charging ports.

“There are places even today where we can’t even take one more heat pump without having to rebuild the portion of the system. Or we can’t even have one EV charger go in.” —Tomm Marshall

To meet Palo Alto’s 2030 emission-reduction goals, the city, which owns and operates the electric utility, would like to significantly increase the amount of local renewable energy used for electricity generation (think rooftop solar), including the ability to use EVs as distributed-energy resources via vehicle-to-grid (V2G) connections. The city has provided incentives for the purchase of both EVs and charging ports, the installation of heat-pump water heaters, and the installation of solar and battery-storage systems.

There are, however, a few potholes that need to be filled to meet the city’s 2030 emission objectives. At a February meeting of Palo Alto’s Utilities Advisory Commission, Tomm Marshall, assistant director of utilities, stated, “There are places even today [in the city] where we can’t even take one more heat pump without having to rebuild the portion of the [electrical distribution] system. Or we can’t even have one EV charger go in.”

Peak loading is the primary concern. Palo Alto’s electrical-distribution system was built for the electric loads of the 1950s and 1960s, when household heating, water heating, and cooking ran mainly on natural gas. The distribution system does not have the capacity to support EVs and all electric appliances at scale, Marshall suggested. Further, the system was designed for one-way power flow, not for distributed-renewable-energy devices sending power back into the system.

A big problem is the 3,150 distribution transformers in the city, Marshall indicated. A 2020 electrification-impact study found that without improvements, more than 95 percent of residential transformers would be overloaded if Palo Alto hits its EV and electrical-appliance targets by 2030.

Palo Alto’s electrical-distribution system needs a complete upgrade to allow the utility to balance peak loads.

For instance, Marshall stated, it is not unusual for a 37.5 kilovolt-ampere transformer to support 15 households, as the distribution system was originally designed for each household to draw 2 kilowatts of power. Converting a gas appliance to a heat pump, for example, would draw 4 to 6 kW, while an L2 charger for EVs would draw 12 to 14 kW. A cluster of uncoordinated L2 charging could create an excessive peak load that would overload or blow out a transformer, especially one toward the end of its life, as many already are. Without smart meters—that is, Advanced Metering Infrastructure (AMI), which will be introduced into Palo Alto in 2024—the utility has little to no insight into household peak loads.
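Marshall’s numbers are easy to check with back-of-the-envelope arithmetic. The sketch below uses only the figures quoted above (a 37.5-kVA transformer, 15 households at a 2-kW design load, 4-to-6-kW heat pumps, 12-to-14-kW L2 chargers); the assumption of a unity power factor, so that kilowatts and kilovolt-amperes can be compared directly, is mine rather than the article’s.

# Back-of-the-envelope transformer loading using the figures quoted above.
# Assumes a power factor of 1, so kW can be compared directly against kVA.

TRANSFORMER_KVA = 37.5
HOUSEHOLDS = 15
BASE_LOAD_KW = 2.0      # original per-household design load
HEAT_PUMP_KW = 5.0      # midpoint of the 4-6 kW range
L2_CHARGER_KW = 13.0    # midpoint of the 12-14 kW range

def peak_load_kw(heat_pumps, l2_chargers):
    """Worst-case coincident load if everything draws at once."""
    return HOUSEHOLDS * BASE_LOAD_KW + heat_pumps * HEAT_PUMP_KW + l2_chargers * L2_CHARGER_KW

for chargers in range(3):
    load = peak_load_kw(heat_pumps=0, l2_chargers=chargers)
    status = "OK" if load <= TRANSFORMER_KVA else "OVERLOADED"
    print(f"{chargers} L2 chargers: {load:.0f} kW ({status})")
# The original design load alone is 30 kW; a single uncoordinated L2 charger
# (43 kW) already exceeds the 37.5-kVA rating, and even one 5-kW heat pump
# leaves only a few kilowatts of headroom -- consistent with Marshall's remark.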

Palo Alto’s electrical-distribution system needs a complete upgrade to allow the utility to balance peak loads, manage two-way power flows, install the requisite number of EV charging ports and electric appliances to support the city’s emission-reduction goals, and deliver power in a safe, reliable, sustainable, and cybersecure manner. The system also must be able to cope in a multihour-outage situation, where future electrical appliances and EV charging will commence all at once when power is restored, placing a heavy peak load on the distribution system.

A map of EV charging stations in the Palo Alto, Calif., area, from PlugShare.com.

Palo Alto is considering investing US $150 million to modernize its distribution system, but that will take two to three years of planning and another three to four years or more to perform all the necessary work, and then only if the utility can hire the engineering and management staff, which continues to be in short supply there and at other utilities across the country. Further, like other industries, the energy business has become digitized, meaning the skills needed are different from those previously required.

Marshall conceded that until the utility can modernize its distribution network, it must continue to deal with angry and confused customers who are encouraged by the city to invest in EVs, charging ports, and electric appliances, only to be told that they may not be accommodated anytime soon.

Policy runs up against engineering reality

The situation in Palo Alto is not unique. There are some 465 cities in the United States with populations between 50,000 and 100,000 residents, and another 315 that are larger, many facing similar challenges. How many can really support a rapid influx of thousands of new EVs? Phoenix, for example, wants 280,000 EVs plying its streets by 2030, nearly seven times as many as it has currently. Similar mismatches between climate-policy desires and an energy infrastructure incapable of supporting those policies will play out across not only the United States but elsewhere in one form or another over the next two decades as conversion to EVs and electric appliances moves to scale.

As in Palo Alto, it will likely be blown transformers or constantly flickering lights that signal there is an EV charging-load issue. Professor Deepak Divan, the director of the Center for Distributed Energy at Georgia Tech, says his team found that in residential areas “multiple L2 chargers on one distribution transformer can reduce its life from an expected 30 to 40 years to 3 years.” Given that most of the millions of U.S. transformers are approaching the end of their useful lives, replacing transformers soon could be a major and costly headache for utilities, assuming they can get them.

Supplies of distribution transformers are low, and costs have skyrocketed from $3,000 to $4,000 each to around $20,000. Supporting EVs may require larger, heavier transformers, which means many of the 180 million power poles on which they sit will need to be replaced to bear the additional weight.

Exacerbating the transformer loading problem, Divan says, is that many utilities “have no visibility beyond the substation” into how and when power is being consumed. His team surveyed “twenty-nine utilities for detailed voltage data from their AMI systems, and no one had it.”

This situation is not true universally. Xcel Energy in Minnesota, for example, has already started to upgrade distribution transformers because of potential residential EV electrical-load issues. Xcel president Chris Clark told the Minneapolis Star Tribune that four or five families buying EVs noticeably affects the transformer load in a neighborhood, with a family buying an EV “adding another half of their house.”

Joyce Bodoh, director of energy solutions and clean energy for Virginia’s Rappahannock Electric Cooperative (REC), a utility distributor in central Virginia, says that “REC leadership is really, really supportive of electrification, energy efficiency, and electric transportation.” However, she adds, “all those things are not a magic wand. You can’t make all three things happen at the same time without a lot of forward thinking and planning.”

Total U.S. Energy Consumption


For nearly 50 years, Lawrence Livermore National Laboratory has been publishing a Sankey diagram of estimated U.S. energy consumption from various generation sources, as shown above. In 2021, the United States consumed 97.3 quadrillion British thermal units (quads) of energy, with the transportation sector using 26.9 quads, 90 percent of it from petroleum. As the transportation sector electrifies, electricity generation will need to grow to replace the energy that petroleum once provided to transportation, though by a smaller amount, given the higher energy efficiency of EVs.
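The scale of that shift can be sketched with a few lines of arithmetic. The 2021 figures below come from the paragraph above; the assumption that an EV needs roughly a third of the input energy of a comparable ICE vehicle, and the quad-to-terawatt-hour conversion, are mine and are meant only to show the order of magnitude.

# Rough scale of electrifying U.S. transportation, from the 2021 figures above.
# Assumptions: EVs need ~1/3 the input energy of ICE vehicles; 1 quad ~ 293 TWh.

QUAD_TO_TWH = 293.07
TRANSPORT_QUADS_2021 = 26.9       # transportation-sector energy use
PETROLEUM_SHARE = 0.90            # share of that energy supplied by petroleum
EV_EFFICIENCY_ADVANTAGE = 3.0     # assumed EV-versus-ICE efficiency ratio

petroleum_twh = TRANSPORT_QUADS_2021 * PETROLEUM_SHARE * QUAD_TO_TWH
electricity_needed_twh = petroleum_twh / EV_EFFICIENCY_ADVANTAGE
print(f"Petroleum energy used by transportation: ~{petroleum_twh:,.0f} TWh")
print(f"Added generation if fully electrified:   ~{electricity_needed_twh:,.0f} TWh")
# Roughly 2,400 TWh of new generation -- a large fraction of the roughly
# 4,100 TWh the United States generated in 2021, even with EVs' efficiency edge.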

To achieve the desired reduction in greenhouse gases, renewable generation of electricity will need to replace fossil fuels. The improvements and replacements to the grid’s 8,000 power-generation units, 600,000 circuit miles of AC transmission lines (240,000 circuit miles of them high-voltage), and 70,000 substations needed to support increased renewable energy and battery storage are estimated at more than $2.5 trillion in capital, operations, and maintenance costs by 2035.

In the short term, it is unlikely that EVs will create power shortfalls in the U.S. grid, but the rising number of EVs will test the local grid’s reliability at many of the 3,000 electric-distribution utilities in the United States, which themselves own more than 5.5 million miles of power lines. It is estimated that these utilities need $1 trillion in upgrades by 2035.

As part of this planning effort, Bodoh says that REC has actively been performing “an engineering study that looked at line loss across our systems as well as our transformers, and said, ‘If this transformer got one L2 charger, what would happen? If it got two L2s, what would happen, and so on?’” She adds that REC “is trying to do its due diligence, so we don’t get surprised when a cul-de-sac gets a bunch of L2 chargers and there’s a power outage.”

REC also has hourly energy-use data from which it can infer where L2 chargers may be in use, based on the characteristic load profile of EV charging. However, Bodoh says, REC does not just want to know where the L2 chargers are; it also wants to encourage its EV-owning customers to charge at off-peak hours—that is, 9 p.m. to 5 a.m. and 10 a.m. to 2 p.m. REC has recently set up an EV charging pilot program for 200 EV owners that provides a $7 monthly credit if they charge off peak. Whether REC or other utilities can convince enough EV owners with L2 chargers to consistently charge during off-peak hours remains to be seen.
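REC’s off-peak windows are simple enough to encode directly. The sketch below is hypothetical code for classifying a charging session’s start time, not REC’s actual rate or billing logic.

from datetime import time

# REC's stated off-peak windows: 9 p.m.-5 a.m. and 10 a.m.-2 p.m.
# Illustrative only; not REC's actual billing logic.
OFF_PEAK_WINDOWS = [
    (time(21, 0), time(5, 0)),    # overnight window, wraps past midnight
    (time(10, 0), time(14, 0)),   # midday window
]

def is_off_peak(t):
    for start, end in OFF_PEAK_WINDOWS:
        if start <= end:
            if start <= t < end:
                return True
        elif t >= start or t < end:   # window wraps around midnight
            return True
    return False

assert is_off_peak(time(23, 30))      # overnight charging qualifies
assert is_off_peak(time(11, 0))       # midday charging qualifies
assert not is_off_peak(time(18, 0))   # early-evening charging is on peak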

“Multiple L2 chargers on one distribution transformer can reduce its life from an expected 30 to 40 years to 3 years.” —Deepak Divan

Even if EV owner behavior changes, off-peak charging may not fully solve the peak-load problem once EV ownership really ramps up. “Transformers are passively cooled devices,” specifically designed to be cooled at night, says Divan. “When you change the (power) consumption profile by adding several EVs using L2 chargers at night, that transformer is running hot.” The risk of transformer failure from uncoordinated overnight charging may be especially aggravated during times of summer heat waves, an issue that concerns Palo Alto’s utility managers.

There are technical solutions available to help spread EV charging peak loads, but utilities will have to make the investments in better transformers and smart metering systems, as well as get regulatory permission to change electricity-rate structures to encourage off-peak charging. Vehicle-to-grid (V2G), which allows an EV to serve as a storage device to smooth out grid loads, may be another solution, but for most utilities in the United States, this is a long-term option. Numerous issues need to be addressed, such as the updating of millions of household electrical panels and smart meters to accommodate V2G, the creation of agreed-upon national technical standards for the information exchange needed between EVs and local utilities, the development of V2G regulatory policies, and residential and commercial business models, including fair compensation for utilizing an EV’s stored energy.

As energy expert Chris Nelder noted at a National Academy EV workshop, “vehicle-to-grid is not really a thing, at least not yet. I don’t expect it to be for quite some time until we solve a lot of problems at various utility commissions, state by state, rate by rate.”

In the next article in the series, we will look at the complexities of creating an EV charging infrastructure.


Match ID: 59 Score: 15.00 source: spectrum.ieee.org age: 1 day
qualifiers: 15.00 energy

Home insulation: How can it cut energy bills?
Mon, 28 Nov 2022 12:17:45 GMT
Better home insulation could save some households hundreds of pounds a year on their energy bills.
Match ID: 60 Score: 15.00 source: www.bbc.co.uk age: 1 day
qualifiers: 15.00 energy

UK households have cut energy consumption by 10%, say suppliers
Sun, 27 Nov 2022 15:30:08 GMT

E.ON reports up to 15% drop as Grant Shapps writes to firms saying customers cutting back on energy use should not face direct debit rise

Britons have cut their gas and electricity use by more than 10% since October in the first evidence of the impact of the energy crisis on household habits, according to two of Britain’s biggest suppliers.

E.ON, Britain’s second-largest supplier, and Telecom Plus, which owns Utility Warehouse, have reported “double-digit” declines in recent weeks.

Continue reading...
Match ID: 61 Score: 15.00 source: www.theguardian.com age: 2 days
qualifiers: 15.00 energy

Millions of households will be spending nearly third of income on fuel by spring
Sun, 27 Nov 2022 06:00:13 GMT

Findings come amid mounting evidence that the poorest people in the UK are paying a ‘poverty premium’ for basic services

Millions of households will be paying almost a third of their income in fuel costs this spring, amid warnings that a “black hole in provision” remains for Britain’s poorest families.

The vast majority of households in some vulnerable groups – including some 70% of pensioners – will be spending a tenth or more of their income on fuel from April, when support for energy costs will be reduced.

Continue reading...
Match ID: 62 Score: 15.00 source: www.theguardian.com age: 2 days
qualifiers: 15.00 energy

ISS Daily Summary Report – 11/25/2022
Fri, 25 Nov 2022 16:00:17 +0000
RS Extravehicular Activity (EVA) #56: During preparations for today’s Russian EVA, both the primary and backup cooling pumps on EV1’s Orlan showed unstable behavior. After unsuccessful attempts at troubleshooting, the EVA has been cancelled. The European Robotic Arm (ERA) has been returned to its stowed configuration, and the crew and ground teams are backing out of the …
Match ID: 63 Score: 14.29 source: blogs.nasa.gov age: 4 days
qualifiers: 14.29 russia

Putin’s grip on regional allies loosens again after Armenia snub
Fri, 25 Nov 2022 14:08:46 GMT

Damaging optics of ‘family’ photo at CSTO summit highlights fragility of Russia’s influence in wake of war in Ukraine

Armenia has asked the French president, Emmanuel Macron, to chair peace talks with Azerbaijan in a fresh challenge to Vladimir Putin’s increasingly loose grip on Russia’s regional allies in the wake of the war in Ukraine.

The snub from a traditional ally to Putin, who had hosted an inconsequential meeting of the warring countries’ leaders last month, comes immediately on the back of his disastrous summit with six former Soviet states.

Continue reading...
Match ID: 64 Score: 14.29 source: www.theguardian.com age: 4 days
qualifiers: 14.29 russia

The Data Guy Who Got the Midterms Right
Thu, 24 Nov 2022 03:43:32 +0000

Tom Bonier of TargetSmart on how Republican polls were able to skew media predictions.

The post The Data Guy Who Got the Midterms Right appeared first on The Intercept.


Match ID: 65 Score: 14.29 source: theintercept.com age: 5 days
qualifiers: 14.29 trump

Video Friday: Turkey Sandwich
Fri, 25 Nov 2022 17:13:24 +0000


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

Happy Thanksgiving, for those who celebrate it. Now spend 10 minutes watching a telepresence robot assemble a turkey sandwich.

[ Sanctuary ]

Ayato Kanada, an assistant professor at Kyushu University, in Japan, wrote in to share “the world’s simplest omnidirectional mobile robot.”

We propose a palm-sized omnidirectional mobile robot with two torus wheels. A single torus wheel is made of an elastic elongated coil spring in which the two ends of the coil are connected to each other; it is driven by a piezoelectric actuator (stator) that can generate 2-degrees-of-freedom (axial and angular) motions. The stator converts its thrust force and torque into longitudinal and meridian motions of the torus wheel, respectively, making the torus work as an omnidirectional wheel on a plane.

[ Paper ]

Thanks, Ayato!

This work, entitled “Virtually turning robotic manipulators into worn devices: opening new horizons for wearable assistive robotics,” proposes a novel hybrid system using a virtually worn robotic arm in augmented reality, and a real robotic manipulator servoed on such a virtual representation. We basically aim at creating the illusion of wearing a robotic system while its weight is fully supported. We believe that this approach could offer a solution to the critical challenge of weight and discomfort caused by robotic sensorimotor extensions—such as supernumerary robotic limbs (SRL), prostheses, or handheld tools—and open new horizons for the development of wearable robotics.

[ Paper ]

Thanks, Nathanaël!

Engineers at Georgia Tech are the first to study the mechanics of springtails, which leap in the water to avoid predators. The researchers learned how the tiny hexapods control their jumps, self-right in midair, and land on their feet in the blink of an eye. The team used the findings to build penny-size jumping robots.

[ Georgia Tech ]

Thanks, Jason!

The European Space Agency (ESA) and the European Space Resources Innovation Centre (ESRIC) have asked European space industries and research institutions to develop innovative technologies for the exploration of resources on the moon in the framework of the ESA-ESRIC Space Resources Challenge. As part of the challenge, teams of engineers have developed vehicles capable of prospecting for resources in a test-bed simulating the moon’s shaded polar regions. From 5 to 9 September 2022, the final of the ESA-ESRIC Space Resource Challenge took place at the Rockhal in Esch-sur-Alzette. On this occasion, lunar rover prototypes competed on a 1,800-square-meter “lunar” terrain. The winning team will have the opportunity to have their technology implemented on the moon.

[ ESA ]

Thanks, Arne!

If only cobots were as easy to use as this video from Kuka makes it seem.

The Kuka website doesn’t say how much this thing costs, which means it’s almost certainly not something that you impulse buy.

[ Kuka ]

We present the tensegrity aerial vehicle, a design of collision-resilient rotor robots with icosahedron tensegrity structures. With collision resilience and reorientation ability, the tensegrity aerial vehicles can operate in cluttered environments without complex collision-avoidance strategies. These capabilities are validated by a test of an experimental tensegrity aerial vehicle operating with only onboard inertial sensors in a previously unknown forest.

[ HiPeR Lab ]

The robotics research group Brubotics and the polymer-science and physical-chemistry group FYSC of the University of Brussels have together developed self-healing materials that can be scratched, punctured, or completely cut through and then heal themselves back together, either with applied heat or even at room temperature.

[ Brubotics ]

Apparently, the World Cup needs more drone footage, because this is kinda neat.

[ DJI ]

Researchers at MIT’s Center for Bits and Atoms have made significant progress toward creating robots that could build nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots.

[ MIT ]

The researchers from North Carolina State University have recently developed a fast and efficient soft robotic swimmer whose motions resemble a human’s butterfly-stroke style. It can achieve a high average swimming speed of 3.74 body lengths per second, close to five times as fast as the fastest similar soft swimmers, and also a high-power efficiency with a low energy cost.

[ NC State ]

To facilitate sensing and physical interaction in remote and/or constrained environments, high-extension, lightweight robot manipulators are easier to transport and can reach substantially further than traditional serial-chain manipulators. We propose a novel planar 3-degrees-of-freedom manipulator that achieves low weight and high extension through the use of a pair of spooling bistable tapes, commonly used in self-retracting tape measures, which are pinched together to form a reconfigurable revolute joint.

[ Charm Lab ]

SLURP!

[ River Lab ]

This video may encourage you to buy a drone. Or a snowmobile.

[ Skydio ]

Moxie is getting an update for the holidays!

[ Embodied ]

Robotics professor Henny Admoni answers the Internet’s burning questions about robots! How do you program a personality? Can robots pick up a single M&M? Why do we keep making humanoid robots? What is Elon Musk’s goal for the Tesla Optimus robot? Will robots take over my job writing video descriptions...I mean, um, all our jobs? Henny answers all these questions and much more.

[ CMU ]

This GRASP on Robotics talk is from Julie Adams at Oregon State University, on “Towards Adaptive Human-Robot Teams: Workload Estimation.”

The ability for robots, be it a single robot, multiple robots, or a robot swarm, to adapt to the humans with which they are teamed requires algorithms that allow robots to detect human performance in real time. The multidimensional workload algorithm incorporates physiological metrics to estimate overall workload and its components (cognitive, speech, auditory, visual, and physical). The algorithm is sensitive to changes in a human’s individual workload components and overall workload across domains, human-robot teaming relationships (supervisory, peer-based), and individual differences. The algorithm has also been demonstrated to detect shifts in workload in real time in order to adapt the robot’s interaction with the human and autonomously change task responsibilities when the human’s workload is over- or underloaded. Recently, the algorithm was used to analyze post hoc the resulting workload for a single human deploying a heterogeneous robot swarm in an urban environment. Current efforts are focusing on predicting the human’s future workload, recognizing the human’s current tasks, and estimating workload for previously unseen tasks.

[ UPenn ]


Match ID: 66 Score: 10.71 source: spectrum.ieee.org age: 4 days
qualifiers: 10.71 energy

The U.S.-China Chip Ban, Explained
Mon, 21 Nov 2022 17:28:29 +0000


It has now been over a month since the U.S. Commerce Department issued new rules that clamped down on the export of certain advanced chips—which have military or AI applications—to Chinese customers.

China has yet to respond—but Beijing has multiple options in its arsenal. It’s unlikely, experts say, that the U.S. actions will be the last fighting word in an industry that is becoming more geopolitically sensitive by the day.

This is not the first time that the U.S. government has constrained the flow of chips to its perceived adversaries. Previously, the United States has blocked chip sales to individual Chinese customers. In response to the Russian invasion of Ukraine earlier this year, the United States (along with several other countries, including South Korea and Taiwan) placed Russia under a chip embargo.


But none of these prior U.S. chip bans were as broad as the new rules, issued on 7 October. “This announcement is perhaps the most expansive export control in decades,” says Sujai Shivakumar, an analyst at the Center for Strategic and International Studies, in Washington.

The rules prohibit the sale, to Chinese customers, of advanced chips with both high performance (at least 300 trillion operations per second, or 300 teraops) and fast interconnect speed (generally, at least 600 gigabytes per second). Nvidia’s A100, for comparison, is capable of over 600 teraops and matches the 600 GB/s interconnect speed. Nvidia’s more impressive H100 can reach nearly 4,000 teraops and 900 GB/s. Neither chip, both intended for data centers and AI training, can be sold to Chinese customers under the new rules.
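Because the two thresholds combine with a logical AND, a chip escapes the restriction by falling below either limit, which is the loophole Nvidia’s China-specific A800 (described below) exploits. Here is a minimal sketch of that check; it is a simplification of the actual rule, and the A800 interconnect figure is an assumption for illustration rather than a number from this article.

# Simplified reading of the 7 October thresholds described above: a chip is
# restricted only if it exceeds BOTH limits. The real rule has more conditions.

PERF_LIMIT_TERAOPS = 300          # at least 300 trillion operations per second
INTERCONNECT_LIMIT_GBPS = 600     # at least 600 gigabytes per second

def is_restricted(teraops, interconnect_gbps):
    return teraops >= PERF_LIMIT_TERAOPS and interconnect_gbps >= INTERCONNECT_LIMIT_GBPS

chips = {
    "Nvidia A100": (600, 600),    # figures quoted in the text
    "Nvidia H100": (4000, 900),   # figures quoted in the text
    "Nvidia A800": (600, 400),    # interconnect value assumed for illustration
}
for name, (perf, link) in chips.items():
    print(f"{name}: {'restricted' if is_restricted(perf, link) else 'not restricted'}")
# The A100 and H100 exceed both limits; a chip with slower interconnect does not.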

Additionally, the rules restrict the sale of fabrication equipment if it will knowingly be used to make certain classes of advanced logic or memory chips. This includes logic chips produced at nodes of 16 nanometers or less (which the likes of Intel, Samsung, and TSMC have done since the early 2010s); NAND long-term memory integrated circuits with at least 128 layers (the state of the art today); or DRAM short-term memory integrated circuits produced at 18 nanometers or less (which Samsung began making in 2016).

Chinese chipmakers have barely scratched the surface of those numbers. SMIC switched on 14-nm mass production this year, despite facing existing U.S. sanctions. YMTC started shipping 128-layer NAND chips last year.

The rules restrict not just U.S. companies, but citizens and permanent residents as well. U.S. employees at Chinese semiconductor firms have had to pack up. ASML, a Dutch maker of fabrication equipment, has told U.S. employees to stop servicing Chinese customers.

Speaking of Chinese customers, most—including offices, gamers, designers of smaller chips—probably won’t feel the controls. “Most chip trade and chip production in China is unimpacted,” says Christopher Miller, a historian who studies the semiconductor trade at Tufts University.

The controlled sorts of chips instead go into supercomputers and large data centers, and they’re desirable for training and running large machine-learning models. Most of all, the United States hopes to stop Beijing from using chips to enhance its military—and potentially preempt an invasion of Taiwan, where the vast majority of the world’s semiconductors and microprocessors are produced.

In order to seal off one potential bypass, the controls also apply to non-U.S. firms that rely on U.S.-made equipment or software. For instance, Taiwanese or South Korean chipmakers can’t sell Chinese customers advanced chips that are fabricated with U.S.-made technology.

It’s possible to apply to the U.S. government for an exemption from at least some of the restrictions. Taiwanese fab juggernaut TSMC and South Korean chipmaker SK Hynix, for instance, have already acquired temporary exemptions—for a year. “What happens after that is difficult to say,” says Patrick Schröder, a researcher at Chatham House in London. And the Commerce Department has already stated that such licenses will be the exception, not the rule (although Commerce Department undersecretary Alan Estevez suggested that around two-thirds of licenses get approved).

More export controls may be en route. Estevez indicated that the government is considering placing restrictions on technologies in other sensitive fields—specifically mentioning quantum information science and biotechnology, both of which have seen China-based researchers forge major progress in the past decade.

The Chinese government has so far retorted with harsh words and little action. “We don’t know whether their response will be an immediate reaction or whether they have a longer-term approach to dealing with this,” says Shivakumar. “It’s speculation at this point.”

Beijing could work with foreign companies whose revenue in the lucrative Chinese market is now under threat. “I’m really not aware of a particular company that thinks it’s coming out a winner in this,” says Shivakumar. This week, in the eastern city of Hefei, the Chinese government hosted a chipmakers’ conference whose attendees included U.S. firms AMD, Intel, and Qualcomm.

Nvidia has already responded by introducing a China-specific chip, the A800, which appears to be a modified A100 cut down to meet the requirements. Analysts say that Nvidia’s approach could be a model for other companies to keep up Chinese sales.

There may be other tools the Chinese government can exploit. While China may be dependent on foreign semiconductors, foreign electronics manufacturers are in turn dependent on China for rare-earth metals—and China supplies the supermajority of the world’s rare earths.

There is precedent for China curtailing its rare-earth supply for geopolitical leverage. In 2010, a Chinese fishing boat collided with two Japanese Coast Guard vessels, triggering an international incident when Japanese authorities arrested the boat’s captain. In response, the Chinese government cut off rare-earth exports to Japan for several months.

Certainly, much of the conversation has focused on the U.S. action and the Chinese reaction. But for third parties, the entire dispute delivers constant reminders of just how tense and volatile the chip supply can be. In the European Union, home to less than 10 percent of the world’s microchips market, the debate has bolstered interest in the prospective European Chips Act, a plan to heavily invest in fabrication in Europe. “For Europe in particular, it’s important not to get caught up in this U.S.-China trade issue,” Schröder says.

“The way in which the semiconductor industry has evolved over the past few decades has predicated on a relatively stable geopolitical order,” says Shivakumar. “Obviously, the ground realities have shifted.”


Match ID: 67 Score: 10.71 source: spectrum.ieee.org age: 8 days
qualifiers: 5.00 sanctions, 2.86 russia, 2.86 korea

Sizewell C ‘confirmed’ again – this time it might be the real deal | Nils Pratley
Tue, 29 Nov 2022 18:47:31 GMT

Buying out China’s stake was inevitable but the government still has the onerous task of finding committed investors

Another day, another “confirmation” that the government plans to build the Sizewell C nuclear power plant in Suffolk – surely the “most announced” project in UK infrastructure history. The latest update, though, contained a genuine sign of seriousness: the Chinese are being paid to go away.

China General Nuclear (CGN), a state-backed firm, owned a 20% stake in the fledgling project and had, in effect, a right to subscribe to maintain its holding through the various funding rounds – just as it did at Hinkley Point C in Somerset. In practice, any form of Chinese involvement in Sizewell has been impossible for at least a year.

Continue reading...
Match ID: 68 Score: 10.00 source: www.theguardian.com age: 0 days
qualifiers: 10.00 nuclear

The James Webb Space Telescope was a Career-Defining Project for Janet Barth
Mon, 28 Nov 2022 19:00:01 +0000


Janet Barth spent most of her career at the Goddard Space Flight Center, in Greenbelt, Md.—which put her in the middle of some of NASA’s most exciting projects of the past 40 years.

She joined the center as a co-op student and retired in 2014 as chief of its electrical engineering division. She had a hand in Hubble Space Telescope servicing missions, launching the Lunar Reconnaissance Orbiter and the Magnetospheric Multiscale mission, and developing the James Webb Space Telescope.


About Janet Barth


Employer: Miller Engineering and Research Corp.

Title: Advisory board member

Member grade: Life Fellow

Alma mater: University of Maryland in College Park

Barth, an IEEE Life Fellow, conducted pioneering work in analyzing the effects of cosmic rays and solar radiation on spacecraft observatories. Her tools and techniques are still used today. She also helped develop science requirements for NASA’s Living With a Star program, which studies the sun, magnetospheres, and planetary systems.

For her work, Barth was honored with this year’s IEEE Marie Sklodowska-Curie Award for “leadership of and contributions to the advancement of the design, building, deployment, and operation of capable, robust space systems.”

“I still tear up just thinking about it,” Barth says. “Receiving this award is humbling. Everyone at IEEE and Goddard who I worked with owns a piece of this award.”

From co-op hire to chief of NASA’s EE division

Barth initially attended the University of Michigan in Ann Arbor, to pursue a degree in biology, but she soon realized that it wasn’t a good fit for her. She transferred to the University of Maryland in College Park, and changed her major to applied mathematics.

She was accepted for a co-op position in 1978 at the Goddard center, which is about 9 kilometers from the university. Co-op jobs allow students to work at a company and gain experience while pursuing their degree.

“I was excited about using my analysis and math skills to enable new science at Goddard,” she says. She conducted research on radiation environments and their effects on electronic systems.

Goddard hired her after she graduated as a radiation and hardness assurance engineer. She helped ensure that the electronics and materials in space systems would perform as designed after being exposed to radiation in space.

Because of her expertise in space radiation, George Withbroe, director of the NASA Solar-Terrestrial Physics program (now its Heliophysics Division), asked her in 1999 to help write a funding proposal for a program he wanted to launch—which became Living With a Star. It received US $2 billion from the U.S. Congress and launched in 2001.

During her 12 years with the program, Barth helped write the architecture document, which she says became a seminal publication for the field of heliophysics (the study of the sun and how it influences space). The document outlines the program’s goals and objectives.

In 2001 she was selected to be project manager for a NASA test bed that aimed to understand how spacecraft are affected by their environment. The test bed, which collected data from space to predict how radiation might impact NASA missions, successfully completed its mission in 2020.

Barth reached the next rung on her career ladder in 2002, when she became one of the first female associate branch heads of engineering at Goddard. At the space center’s Flight Data Systems and Radiation Effects Branch, she led a team of engineers who designed flight computers and storage systems. Although it was a steep learning curve for her, she says, she enjoyed it. Three years later, she was heading the branch.

She got another promotion, in 2010, to chief of the electrical engineering division. As the Goddard Engineering Directorate’s first female division chief, she led a team of 270 employees who designed, built, and tested electronics and electrical systems for NASA instruments and spacecraft.

Barth (left) and Moira Stanton at the 1997 RADiation and its Effects on Components and Systems Conference, held in Cannes, France. Barth and Stanton coauthored a poster paper and received the outstanding poster paper award. Photo: Janet Barth

Working on the James Webb Space Telescope

Throughout her career, Barth was involved in the development of the Webb space telescope. Whenever she thought that she was done with the massive project, she says with a laugh, her path would “intersect with Webb again.”

She first encountered the Webb project in the late 1990s, when she was asked to be on the initial study team for the telescope.

She wrote its space-environment specifications. After they were published in 1998, however, the team realized that there were several complex problems to solve with the telescope’s detectors. The Goddard team supported Matt Greenhouse, John C. Mather, and other engineers to work on the tricky issues. Greenhouse is a project scientist for the telescope’s science instrument payload. Mather won the 2006 Nobel Prize in Physics for discoveries supporting the Big Bang model.

The Webb’s detectors absorb photons—light from far-away galaxies, stars, and planets—and convert them into electronic voltages. Barth and her team worked with Greenhouse and Mather to verify that the detectors would work while exposed to the radiation environment at the L2 Lagrangian point, one of the positions in space where human-sent objects tend to stay put.

Years later, when Barth was heading the Flight Data Systems and Radiation Effects branch, she oversaw the development of the telescope’s instrument command and data handling systems. Because of her important role, Barth’s name was written on the telescope’s instrument ICDH flight box.

When she became chief of Goddard’s electrical engineering division, she was assigned to the technical review panel for the telescope.

“At that point,” she says, “we focused on the mechanics of deployment and the risks that came with not being able to fully test it in the environment it would be launched and deployed in.”

She served on that panel until she retired. In 2019, five years after retiring, she joined the Miller Engineering and Research Corp. advisory board. The company, based in Pasadena, Md., manufactures parts for aerospace and aviation organizations.

“I really like the ethics of the company. They service science missions and crewed missions,” Barth says. “I went back to my roots, and that’s been really rewarding.”

The best things about being an IEEE member

Barth and her husband, Douglas, who is also an engineer, joined IEEE in 1989. She says they enjoy belonging to a “unique peer group.” She especially likes attending IEEE conferences, having access to journals, and being able to take continuing education courses and workshops, she says.

“I stay up to date on the advancements in science and engineering,” she says, “and going to conferences keeps me inspired and motivated in what I do.” The networking opportunities are “terrific,” she adds, and she’s been able to meet people from just about all engineering industries.

An active IEEE volunteer for more than 20 years, she is executive chairwoman of the IEEE Nuclear and Plasma Sciences Society’s Radiation Effects Steering Group, and she served as 2013–2014 president of the IEEE Nuclear and Plasma Sciences Society. She also is an associate editor for IEEE Transactions on Nuclear Science.

“IEEE has definitely benefited my career,” she says. “There’s no doubt about that.”


Match ID: 69 Score: 10.00 source: spectrum.ieee.org age: 1 day
qualifiers: 10.00 nuclear

Puzzling Out the Drone War Over Ukraine
Fri, 25 Mar 2022 12:26:23 +0000


In 2014, Ukrainian soldiers fighting in Crimea knew that the sight of Russian drones would soon be followed by a heavy barrage of Russian artillery. During that war, the Russian military integrated drones into tactical missions, using them to hunt for Ukrainian forces, whom they then pounded with artillery and cannon fire. Russian drones weren’t as advanced as those of their Western counterparts, but the Russian military’s integration of drones into its battlefield tactics was second to none.

Eight years later, the Russians are again invading Ukraine. And since the earlier incursion, the Russian military has spent approximately US $9 billion to domestically produce an armada of some 500 drones (a.k.a. unmanned aerial vehicles, or UAVs). But, astonishingly, three weeks into this invasion, the Russians have not had anywhere near their previous level of success with their drones. There are even signs that in the drone war, the Ukrainians have an edge over the Russians.

How could the drone capabilities of these two militaries have experienced such differing fortunes over the same period? The answer lies in a combination of trade embargoes, tech development, and the rising importance of countermeasures.

Since 2014’s invasion of Crimea, Russia’s drone-development efforts have lagged—during a time of dynamic evolution and development across the UAV industry.

First, some background. Military drones come in a wide variety of sizes, purposes, and capabilities, but they can be grouped into a few categories. On one end of the spectrum are relatively tiny flying bombs, small enough to be carried in a rucksack. On the other end are high-altitude drones, with wingspans up to 25 meters and capable of staying aloft for 30 or 40 hours, of being operated from consoles thousands of kilometers from the battlefield, and of firing air-to-surface missiles with deadly precision. In between are a range of intermediate-size drones used primarily for surveillance and reconnaissance.

Russia’s fleet of drones includes models in each of these categories. However, sanctions imposed after the 2014 invasion of Crimea blocked the Russian military from procuring some key technologies necessary to stay on the cutting edge of drone development, particularly in optics, lightweight composites, and electronics. With relatively limited capabilities of its own in these areas, Russia’s drone development efforts became somewhat sluggish during a time of dynamic evolution and development elsewhere.

Current stalwarts in the Russian arsenal include the Zala Kyb, which is a “loitering munition” that can dive into a target and explode. The most common Russian drones are midsize ones used for surveillance and reconnaissance. These include the Eleron-3SV and the Orlan-10 drones, both of which have been used extensively in Syria and Ukraine. In fact, just last week, an Orlan-10 operator was awarded a military medal for locating a site from which Ukrainian soldiers were ambushing Russian tanks, and also a Ukrainian basing area outside Kyiv containing ten artillery pieces, which were subsequently destroyed. Russia’s only large, missile-firing drone is the Kronshtadt Orion, which is similar to the American MQ-1 Predator and can be used for precision strikes as well as reconnaissance. An Orion was credited with an air strike on a command center in Ukraine in early March 2022.

Meanwhile, since the 2014 Crimea war, when they had no drones at all, the Ukrainians have methodically assembled a modest but highly capable set of drones. The backbone of the fleet, with some 300 units fielded, are the A1-SM Fury and the Leleka-100 reconnaissance drones, both designed and manufactured in Ukraine. The A1-SM Fury entered service in April 2020, and the Leleka-100, in May, 2021.

On offense, the Ukrainian and Russian militaries are closely matched in the drone war. The difference is on defense.

The heavy hitter for Ukraine in this war, though, is the Bayraktar TB2 drone, a combat aerial flyer with a wingspan of 12 meters and an armament of four laser-guided bombs. As of the beginning of March, and after losing two TB2s to Russian-backed separatist forces in Lugansk, Ukraine had a complement of 30 of the drones, which were designed and developed in Turkey. These drones are specifically aimed at destroying tanks and as of 24 March had been credited with destroying 26 vehicles, 10 surface-to-air missile systems, and 3 command posts. Various reports have put the cost of a TB2 at anywhere from $1 million to $10 million. It’s much cheaper than the tens of millions fetched for better-known combat drones, such as the MQ-9 Reaper, the backbone of the U.S. Air Force’s fleet of combat drones.

The Ukrainian arsenal also includes the Tu-141 reconnaissance drones, which are large, high-altitude Soviet-era drones that have had little success in the war. At the small end of the Ukraine drone complement are 100 Switchblade drones, which were donated by the United States as part of the $800 million weapons package announced on 16 March. The Switchblades are loitering munitions similar in size and functionality to the Russian Zala Kyb.

The upshot is that on offense, the Ukrainian and Russian militaries are closely matched in the drone war. The difference is on defense: Ukraine has the advantage when it comes to counter-drone technology. A decade ago, counter-drone technology mostly meant using radar to detect drones and surface-to-air missiles to shoot them down. It quickly proved far too costly and ineffective. Drone technology advanced at a brisk pace over the past decade, so counter-drone technology had to move rapidly to keep up. In Russia, it didn’t. Here, again, the Russian military was hampered by technology embargoes and a domestic industrial base that has been somewhat stagnant and lacking in critical capabilities. For contrast, the combined industrial base of the countries supporting Ukraine in this war is massive and has invested heavily in counter-drone technology.

Russia has deployed electronic warfare systems to counter enemy drones and have likely been using the Borisoglebsk 2 MT-LB and R-330Zh Zhitel systems, which use a combination of jamming and spoofing. These systems fill the air with radio-frequency energy, increasing the noise threshold to such a level that the drone cannot distinguish control signals from the remote pilot. Another standard counterdrone technique is sending false signals to the drone, with the most common being fake (“spoofed”) GPS signals, which disorient the flyer. Jamming and spoofing systems are easy to target because they emit radio-frequency waves at fairly high intensities. In fact, open-source images show that Ukrainian forces have already destroyed three of these Russian counterdrone systems.

The exact systems that have been provided to the Ukrainians are not publicly known, but it’s possible to make an educated guess from among the many systems available.

Additionally, some of the newer drones being used by the Ukrainians include features to withstand such electronic attacks. For example, when one of these drones detects a jamming signal, it switches to frequencies that are not being jammed; if it is still unable to reestablish a connection, the drone operates autonomously, flying a series of preset maneuvers until the link can be restored.
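That behavior amounts to a simple fallback loop: hop to a clear frequency if one can be found, otherwise fly preset maneuvers autonomously until the link recovers. The sketch below is a generic illustration of that logic, with invented channel numbers; it is not the firmware of any particular drone.

# Generic sketch of the jamming response described above. Illustrative only;
# the channel list and behavior are invented, not any actual drone's firmware.

CONTROL_CHANNELS_MHZ = [2412, 2437, 2462, 5745, 5785]

def pick_clear_channel(jammed_channels):
    for channel in CONTROL_CHANNELS_MHZ:
        if channel not in jammed_channels:
            return channel
    return None

def handle_link_loss(jammed_channels):
    channel = pick_clear_channel(jammed_channels)
    if channel is not None:
        return f"hop to {channel} MHz and re-establish the control link"
    return "fly preset autonomous maneuvers and retry the link periodically"

print(handle_link_loss({2412, 2437}))              # some channels still clear
print(handle_link_loss(set(CONTROL_CHANNELS_MHZ))) # every channel jammed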

Meanwhile, Ukraine has access to the wide array of NATO counterdrone technologies. The exact systems that have been provided to the Ukrainians is not publicly known, but it’s possible to make an educated guess from among the many systems available. One of the more powerful ones, from Lockheed Martin, repurposes a solid-state, phased-array radar system developed to spot incoming munitions, to detect and identify a drone. The system then tracks the drone and uses high-energy lasers to shoot it down. Raytheon’s counterdrone portfolio includes similar capabilities along with drone-killing drones and systems capable of beaming high-power microwaves that disrupt the drone’s electronics.

While most major Western defense contractors have some sort of counterdrone system, there has also been significant innovation in the commercial sector, given the mass proliferation of commercial drones. While many of these technologies are aimed at smaller drones, some of the technologies, including acoustic sensing and radio-frequency localization, are effective against larger drones as well. Also, a dozen small companies have developed jamming and spoofing systems specifically aimed at countering modern drones.

Although we don’t know specifically which counterdrone systems are being deployed by the Ukrainians, the images of the destroyed drones tell a compelling story. In the drone war, many of the flyers on both sides have been captured or destroyed on the ground, but more than half were disabled while in flight. The destroyed Ukrainian drones often show tremendous damage, including burn marks and other signs that they were shot down by a Russian surface-to-air missile. A logical conclusion is that the Russians’ electronic counterdrone systems were not effective. Meanwhile, the downed Russian drones are typically much more intact, showing relatively minor damage consistent with a precision strike from a laser or electromagnetic pulse. This is exactly what you would expect if the drones had been dispatched by one of the newer Western counterdrone systems.

In the first three weeks of this conflict, Russian drones have failed to achieve the level of success that they did in 2014. The Ukrainians, on the other hand, have logged multiple victories with drone and counterdrone forces assembled in just 8 years. The Russian drones, primarily domestically sourced, have been foiled repeatedly by NATO counterdrone technology. Meanwhile, the Ukrainian drones, such as the TB2s procured from NATO-member Turkey, have had multiple successes against the Russian counterdrone systems.
Match ID: 70 Score: 10.00 source: spectrum.ieee.org age: 249 days
qualifiers: 5.00 sanctions, 2.86 russia, 2.14 energy

Delving for Joules in the Fusion Mines
Tue, 22 Nov 2022 16:00:00 +0000


The Big Picture features technology through the lens of photographers.

Every month, IEEE Spectrum selects the most stunning technology images recently captured by photographers around the world. We choose images that reflect an important advance, or a trend, or that are just mesmerizing to look at. We feature all images on our site, and one also appears on our monthly print edition.

Enjoy the latest images, and if you have suggestions, leave a comment below.

Shot of Nuclear Fusion


A woman using robotic equipment in front of multiple screens.

An old saw regarding the multitude of dashed hopes about fusion energy’s promise goes “Fusion is 30 years away—and it always will be.” After decades of researchers predicting that fusion was just around the corner, a team at the UK Atomic Energy Authority (which hosts the Joint European Torus [JET] plasma physics experiment) did something that suggests scientists are homing in on exactly which corner that is. In February 2022, the JET experimenters induced the single greatest sustained energy pulse ever created by humans. It had twice the energy of the previous record-setting blast, triggered a quarter century earlier. A doubling every 25 years is far behind the pace of the microchip improvements described by Moore’s Law. But that hasn’t dampened enthusiasm over an alternative energy source that could make fossil fuels and their effect on the environment relics of a bygone era. In the foreground of the picture is a trainee learning how to use the systems involved in accomplishing the feat.

Leon Neal/Getty Images


A laser-cut, rice-cake-based drone.

Turning Drones into Scones

What has two wings, can reach a person stranded in a disaster zone, and doubles as a source of precious calories when no other food is available? This drone, designed and built by a team of researchers at the Swiss Federal Institute of Technology Lausanne (EPFL), has wings made entirely of laser-cut rice cakes held together with “glue” made from gelatin. The EPFL group says it plans to keep refining the edible aircraft to improve its aeronautics and enhance its nutritional profile.

EPFL


Green laser light illuminates a metasurface that is a hundred times as thin as paper.

Metasurface Weaves Entangled Photons

Creating the quantum mechanical state of entanglement (in which paired atoms influence each other from across vast distances) has heretofore been reminiscent of the story of Noah’s ark. The tried-and-true method for entangling photons (by shining light through a nonlinear crystal) puts them in this state two by two, the way the animals are said to have boarded the ark. The ambition of quantum researchers has been to expand these connections from pairs to parties. And it seems they’ve figured out how to reliably entangle multiple photons in a complicated web, using half-millimeter-thick metasurfaces covered with forests of microscopic pillars. This, say experts, will not only greatly simplify the setup needed for quantum technology but also help support more-complex quantum applications.

Craig Fritz


A large camera within a lab in Chile.

Colossal Camera Coming to Chile

In a world obsessed with miniaturization, it’s almost shocking when, every now and then, a big deal is made of something, er, big. That is certainly the case with the new camera being built for the Vera C. Rubin Observatory in Chile. When the camera is delivered and set up in May 2023, its 1.57-meter-wide lens will make it the world’s largest device for taking snapshots. The gargantuan point-and-shoot instrument will capture images of a swath of the sky seven times the width of the moon.

Jacqueline Ramseyer Orrell/SLAC National Accelerator Laboratory


A young woman with brown eyes and neck length hair dyed rose gold sits at a white table. In one hand she holds a carbon fiber robotic arm and hand. Her other arm ends near her elbow. Her short sleeve shirt has a pattern on it of illustrated hands.

Bionic Hands Haven’t Fully Grasped Users’ Needs

When we’re carrying out our quotidian activities, most of us rarely stop to think about what marvels of engineering our arms and hands are. But for those who have lost the use of a limb—or, like Britt Young, the woman pictured here, were born without one—there’s hardly ever a day when the challenges of navigating a two-handed world are not in the forefront of their thoughts. In Young’s October 2022 IEEE Spectrum cover story, she discusses these challenges, as well as how the bionic-hand technology intended to come to the rescue falls short of designers’ and users’ expectations.

Gabriela Hasbun. Makeup: Maria Nguyen for Mac Cosmetics; Hair: Joan Laqui for Living Proof


Match ID: 71 Score: 7.14 source: spectrum.ieee.org age: 7 days
qualifiers: 4.29 energy, 2.86 nuclear

In U.S. Military, Sexual Assault Against Men Is Vastly Underreported
Tue, 29 Nov 2022 12:00:17 +0000

An average of 45 male servicemembers are sexually assaulted every day, according to Pentagon statistics. As with women, shame and stigma suppress the truth.

The post In U.S. Military, Sexual Assault Against Men Is Vastly Underreported appeared first on The Intercept.


Match ID: 72 Score: 5.00 source: theintercept.com age: 0 days
qualifiers: 5.00 pompeo

The EV Transition Explained: Battery Challenges
Sat, 19 Nov 2022 19:30:00 +0000


“Energy and information are two basic currencies of organic and social systems,” the economics Nobelist Herb Simon once observed. “A new technology that alters the terms on which one or the other of these is available to a system can work on it the most profound changes.”

Electric vehicles at scale alter the terms of both basic currencies concurrently. Reliable, secure supplies of minerals and software are core elements for EVs, which represent a “shift from a fuel-intensive to a material-intensive energy system,” according to a report by the International Energy Agency (IEA). For example, the mineral requirements for an EV’s batteries and electric motors are six times that of an internal-combustion-engine (ICE) vehicle, which can increase the average weight of an EV by 340 kilograms (750 pounds). For something like the Ford Lightning, the weight can be more than twice that amount.

EVs also create a shift from an electromechanical-intensive to an information-intensive vehicle. EVs offer a virtual clean slate from which to accelerate the design of safe, software-defined vehicles, with computing and supporting electronics being the prime enabler of a vehicle’s features, functions, and value. Software also allows for the decoupling of the internal mechanical connections needed in an ICE vehicle, permitting an EV to be controlled remotely or autonomously. An added benefit is that the loss of the ICE power train not only reduces the components a vehicle requires but also frees up space for increased passenger comfort and storage.

The effects of Simon’s profound changes are readily apparent, forcing a 120-year-old industry to fundamentally reinvent itself. EVs require automakers to design new manufacturing processes and build plants to make both EVs and their batteries. Ramping up the battery supply chain is the automakers’ current “most challenging topic,” according to VW chief financial officer Arno Antlitz.

It can take five or more years to get a lithium mine up and going, but operations can start only after it has secured the required permits, a process that itself can take years.

These plants are also very expensive. Ford and its Korean battery supplier SK Innovation are spending US $5.6 billion to produce F-Series EVs and batteries in Stanton, Tenn., for example, while GM is spending $2 billion to produce its new Cadillac Lyriq EVs in Spring Hill, Tenn. As automakers expand their lines of EVs, tens of billions more will need to be invested in both manufacturing and battery plants. It is little wonder that Tesla CEO Elon Musk calls EV factories “gigantic money furnaces.”

Furthermore, Kristin Dziczek, a policy analyst with the Federal Reserve Bank of Chicago, adds that there are scores of new global EV competitors actively seeking to replace the legacy automakers. The “simplicity” of EVs in comparison with ICE vehicles allows these disruptors to compete virtually from scratch with legacy automakers, not only in the car market itself but for the material and labor inputs as well.

Batteries and the supply-chain challenge

Another critical question is whether all the planned battery-plant output will support expected EV production demands. For instance, the United States will require 8 million EV batteries annually by 2030 if its target to make EVs half of all new-vehicle sales is met, with that number rising each year after. As IEA executive director Fatih Birol observes, “Today, the data shows a looming mismatch between the world’s strengthened climate ambitions and the availability of critical minerals that are essential to realizing those ambitions.”
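As a rough check of the 8-million-battery figure quoted above, the arithmetic below assumes U.S. new-vehicle sales hold near roughly 16 million per year; that sales base is an assumption for illustration, not an IEA or government number.

# Back-of-the-envelope check of the 8-million-battery figure. The sales base
# is an assumption for illustration only.
annual_us_new_vehicle_sales = 16_000_000
ev_share_target = 0.5
print(f"{int(annual_us_new_vehicle_sales * ev_share_target):,} EV battery packs per year")
# -> 8,000,000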

This mismatch worries automakers. GM, Ford, Tesla, and others have moved to secure batteries through 2025, but it could be tricky after that. Rivian Automotive chief executive RJ Scaringe was recently quoted in the Wall Street Journal as saying that “90 to 95 percent of the (battery) supply chain does not exist,” and that the current semiconductor chip shortage is “a small appetizer to what we are about to feel on battery cells over the next two decades.”

The competition for securing raw materials, along with the increased consumer demand, has caused EV prices to spike. Ford has raised the price of the Lightning by $6,000 to $8,500, and CEO Jim Farley bluntly states that, in regard to material shortages in the foreseeable future, “I don’t think we should be confident in any other outcomes than an increase in prices.”

Stiff Competition for Engineering Talent


One critical area of resource competition is over the limited supply of software and systems engineers with the mechatronics and robotics expertise needed for EVs. Major automakers have moved aggressively to bring more software and systems-engineering expertise on board, rather than have it reside at their suppliers, as they have traditionally done. Automakers feel that if they’re not in control of the software, they’re not in control of their product.

Volvo CEO Jim Rowan stated earlier this year that increasing the computing power in EVs will be harder, and will change the automotive industry more, than the switch from ICE vehicles to EVs. This means that EV winners and losers will in great part be separated by their “relative strength in their cyberphysical systems engineering,” states Clemson’s Paredis.

Even for the large auto suppliers, the transition to EVs will not be an easy road. For instance, automakers are demanding that these suppliers absorb more cost cuts because the automakers are finding EVs so expensive to build. Not only do automakers want to bring cutting-edge software expertise in-house, they also want greater in-house expertise in critical EV supply-chain components, especially batteries.

Automakers, including Tesla, are all scrambling for battery talent, with bidding wars reportedly breaking out to acquire top candidates. With automakers planning to spend more than $13 billion to build at least 13 new EV battery plants in North America within the next five to seven years, experienced management and production-line talent will likely be in extremely short supply. Tesla’s Texas Gigafactory needs some 10,000 workers alone, for example. With at least 60 new battery plants planned to be in operation globally by 2030, and scores needed soon afterward, major battery makers are already highlighting their expected skill shortages.


The underlying reason for the worry: Supplying sufficient raw materials to existing and planned battery plants as well as to the manufacturers of other renewable energy sources and military systems—who are competing for the same materials—has several complications to overcome. Among them is the need for more mines to provide the metals required, which have spiked in price as demand has increased. For example, while demand for lithium is growing rapidly, investment in mines has significantly lagged the investment that has been aimed toward EVs and battery plants. It can take five or more years to get a lithium mine up and going, but operations can start only after it has secured the required permits, a process that itself can take years.

Mining the raw materials, of course, assumes that there is sufficient refining capability to process them, which, outside of China, is limited. This is especially true in the United States, which, according to a Biden Administration special supply-chain investigative report, has “limited raw material production capacity and virtually no processing capacity.” Consequently, the report states, the United States “exports the limited raw materials produced today to foreign markets.” For example, output from the only nickel mine in the United States, the Eagle mine in Minnesota, is sent to Canada for smelting.

“Energy and information are two basic currencies of organic and social systems. A new technology that alters the terms on which one or the other of these is available to a system can work on it the most profound changes.” —Herb Simon

One possible solution is to move away from lithium-ion and nickel-metal-hydride batteries to other battery chemistries such as lithium-iron phosphate, lithium-sulfur, lithium-metal, and sodium-ion, among many others, not to mention solid-state batteries, as a way to alleviate some of the material supply and cost problems. Tesla is moving toward the use of lithium-iron phosphate batteries, as is Ford for some of its vehicles. These batteries are cobalt free, which alleviates several sourcing issues.

Another solution may be recycling both EV batteries and the waste and rejects from battery manufacturing, which can run between 5 and 10 percent of production. Effective recycling of EV batteries “has the potential to reduce primary demand compared to total demand in 2040, by approximately 25 percent for lithium, 35 percent for cobalt and nickel, and 55 percent for copper,” according to a report by the University of Sydney’s Institute for Sustainable Futures.



While investments in EV-battery recycling facilities have started, there is a looming question of whether there will be enough battery-factory scrap and other lithium-ion battery waste to keep them operational while they wait for enough end-of-life batteries to make them profitable. Lithium-ion battery-pack recycling is very time-consuming and expensive, often making it cheaper to mine lithium than to recycle it. Recycling the low- or no-cobalt lithium batteries that many automakers are moving toward may also be unprofitable.

An additional concern is that EV batteries, once no longer useful for propelling the EV, have years of life left in them. They can be refurbished, rebuilt, and reused in EVs, or repurposed into storage devices for homes, businesses, or the grid. Whether it will make economic sense to do either at scale versus recycling them remains to be seen.

As Howard Nusbaum, the administrator of the National Salvage Vehicle Reporting Program (NSVRP), succinctly puts it, “There is no recycling, and no EV-recycling industry, if there is no economic basis for one.”

In the next article in the series, we will look at whether the grid can handle tens of millions of EVs.


Match ID: 73 Score: 5.00 source: spectrum.ieee.org age: 10 days
qualifiers: 2.86 korea, 2.14 energy

Could Trump's legal issues derail his 2024 presidential bid? – video explainer
Thu, 17 Nov 2022 16:09:08 GMT

Donald Trump has announced his candidacy for the Republican presidential nomination in 2024, probably sparking another period of tumult in US politics and especially his own political party. His third candidacy comes as he faces intensifying legal troubles, including investigations by the justice department into the removal of hundreds of classified documents from the White House to his Florida estate and into his role in the January 6 attack. But could they derail his bid? The Guardian US politics correspondent Hugo Lowell explains what Trump is facing and whether he still stands a chance

Continue reading...
Match ID: 74 Score: 3.57 source: www.theguardian.com age: 12 days
qualifiers: 3.57 trump

The Twitter Bubble Let Democrats Defy Political Gravity
Thu, 17 Nov 2022 15:15:50 +0000
The midterm elections showed that the far-right's manufactured narrative about trans kids doomed the GOP when they made it policy.
Match ID: 75 Score: 3.57 source: www.wired.com age: 12 days
qualifiers: 3.57 trump

MOXIE Shows How to Make Oxygen on Mars
Thu, 08 Sep 2022 15:27:59 +0000


Planning for the return journey is an integral part of the preparations for a crewed Mars mission. Astronauts will require a total mass of about 50 tonnes of rocket propellant for the ascent vehicle that will lift them off the planet’s surface, including approximately 31 tonnes of oxygen. The less popular option is for crewed missions to carry the required oxygen with them. But scientists are optimistic that it could instead be produced from the carbon dioxide–rich Martian atmosphere itself, using a system called MOXIE.

The Mars Oxygen ISRU (In-Situ Resource Utilization) Experiment is an 18-kilogram unit housed within the Perseverance rover on Mars. The unit is “the size of a toaster,” says Jeffrey Hoffman, professor of aerospace engineering at MIT. Its job is to electrochemically break down carbon dioxide collected from the Martian atmosphere into oxygen and carbon monoxide. It also tests the purity of the oxygen.

Between February 2021, when it arrived on Mars aboard Perseverance, and the end of that year, MOXIE had several successful test runs. According to a review of the system by Hoffman and colleagues, published in Science Advances, it demonstrated its ability to produce oxygen during both night and day, when temperatures can vary by over 100 ºC. The generation and purity rates of oxygen also meet the requirements for producing rocket propellant and for breathing. The authors assert that a scaled-up version of MOXIE could produce the oxygen required for liftoff as well as for the astronauts to breathe.

Next question: How to power any oxygen-producing factories that NASA can land on Mars? Perhaps via NASA’s Kilopower fission reactors?

MOXIE is a first step toward a much larger and more complex system to support the human exploration of Mars. The researchers estimate a required generation rate of 2 to 3 kilograms per hour, compared with the current MOXIE rate of 6 to 8 grams per hour, to produce enough oxygen for lift-off for a crew arriving 26 months later. “So we’re talking about a system that’s a couple of hundred times bigger than MOXIE,” Hoffman says.

They calculate this rate accounting for eight months to get to Mars, followed by some time to set up the system. “We figure you’d probably have maybe 14 months to make all the oxygen.” Further, he says, the produced oxygen would have to be liquefied to be used as a rocket propellant, something the current version of MOXIE doesn’t do.
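A quick back-of-the-envelope calculation, using only the figures quoted above (31 tonnes of oxygen and a roughly 14-month production window), shows where the 2-to-3-kilogram-per-hour target and the “couple of hundred times bigger” comparison come from.

# Rough check of the scale-up numbers quoted above: 31 tonnes of oxygen made
# over roughly 14 months, compared with MOXIE's current 6-8 grams per hour.
oxygen_needed_kg = 31_000
production_window_h = 14 * 30 * 24            # ~14 months of continuous operation

required_rate_kg_per_h = oxygen_needed_kg / production_window_h
current_rate_kg_per_h = 0.007                 # midpoint of 6-8 g/h

print(f"required rate: {required_rate_kg_per_h:.1f} kg/h")                       # ~3.1 kg/h
print(f"scale-up factor: {required_rate_kg_per_h / current_rate_kg_per_h:.0f}x")  # a few hundred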

MOXIE also currently faces several design constraints because, says Hoffman, a former astronaut, “our only ride to Mars was inside the Perseverance rover.” This limited the amount of power available to operate the unit, the amount of heat it could produce, and its volume and mass.

“MOXIE does not work nearly as efficiently as a stand-alone system that was specifically designed would,” says Hoffman. Most of the time, it’s turned off. “Every time we want to make oxygen, we have to heat it up to 800 ºC, so most of the energy goes into heating it up and running the compressor, whereas in a well-designed stand-alone system, most of the energy will go into the actual electrolysis, into actually producing the oxygen.”

However, there are still many kinks to iron out in the scaling-up process. To begin with, any oxygen-producing system will need lots of power. Hoffman thinks nuclear power is the most likely option, maybe NASA’s Kilopower fission reactors. The setup and the cabling would certainly be challenging, he says. “You’re going to have to launch all of these nuclear reactors, and of course, they’re not going to be in exactly the same place as the [other] units,” he says. “So, robotically, you’re going to have to connect the electrical cables to bring power to the oxygen-producing unit.”

Then there are the solid oxide electrolysis units, which Hoffman points out are carefully machined systems. Fortunately, the company that makes them, OxEon, has already designed, built, and tested a full-scale unit, a hundred times bigger than the one on MOXIE. “Several of those units would be required to produce oxygen at the quantities that we need,” Hoffman says.

He also adds that at present, there is no redundancy built into MOXIE. If any part fails, the whole system dies. “If you’re counting on a system to produce oxygen for rocket propellant and for breathing, you need very high reliability, which means you’re going to need quite a few redundant units.”

Moreover, the system has to be pretty much autonomous, Hoffman says. “It has to be able to monitor itself, run itself.” For testing purposes, every time MOXIE is powered up, there is plenty of time to plan. A full-scale MOXIE system, though, would have to run continuously, and for that it has to be able to adjust automatically to changes in the Mars atmosphere, which can vary by a factor of two over a year, and between nighttime and daytime temperature differences.


Match ID: 76 Score: 3.57 source: spectrum.ieee.org age: 82 days
qualifiers: 2.14 energy, 1.43 nuclear

Pentagon Aims to Demo a Nuclear Spacecraft Within 5 Years
Thu, 09 Jun 2022 16:44:41 +0000


In the latest push for nuclear power in space, the Pentagon’s Defense Innovation Unit (DIU) awarded a contract in May to Seattle-based Ultra Safe Nuclear to advance its nuclear power and propulsion concepts. The company is making a soccer ball–size radioisotope battery it calls EmberCore. The DIU’s goal is to launch the technology into space for demonstration in 2027.

Ultra Safe Nuclear’s system is intended to be lightweight, scalable, and usable as both a propulsion source and a power source. It will be specifically designed to give small-to-medium-size military spacecraft the ability to maneuver nimbly in the space between Earth orbit and the moon. The DIU effort is part of the U.S. military’s recently announced plans to develop a surveillance network in cislunar space.

Besides speedy space maneuvers, the DIU wants to power sensors and communication systems without having to worry about solar panels pointing in the right direction or batteries having enough charge to work at night, says Adam Schilffarth, director of strategy at Ultra Safe Nuclear. “Right now, if you are trying to take radar imagery in Ukraine through cloudy skies,” he says, “current platforms can only take a very short image because they draw so much power.”

Radioisotope power sources are well suited for small, uncrewed spacecraft, adds Christopher Morrison, who is leading EmberCore’s development. Such sources rely on the radioactive decay of an element that produces energy, as opposed to nuclear fission, which involves splitting atomic nuclei in a controlled chain reaction to release energy. Heat produced by radioactive decay is converted into electricity using thermoelectric devices.

Radioisotopes have provided heat and electricity for spacecraft since 1961. The Curiosity and Perseverance rovers on Mars, and deep-space missions including Cassini, New Horizons, and Voyager all use radioisotope batteries that rely on the decay of plutonium-238, which is nonfissile—unlike plutonium-239, which is used in weapons and power reactors.

For EmberCore, Ultra Safe Nuclear has instead turned to medical isotopes such as cobalt-60 that are easier and cheaper to produce. The materials start out inert, and have to be charged with neutrons to become radioactive. The company encapsulates the material in a proprietary ceramic for safety.

Cobalt-60 has a half-life of five years (compared to plutonium-238’s 90 years), which is enough for the cislunar missions that the DOD and NASA are looking at, Morrison says. He says that EmberCore should be able to provide 10 times as much power as a plutonium-238 system, providing over 1 million kilowatt-hours of energy using just a few pounds of fuel. “This is a technology that is in many ways commercially viable and potentially more scalable than plutonium-238,” he says.
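The million-kilowatt-hour figure follows from simple decay arithmetic. The sketch below integrates the power of a cobalt-60 source over time; the initial thermal power is an illustrative assumption, not an EmberCore specification.

import math

# Decay arithmetic behind the "over 1 million kilowatt-hours" claim. The
# initial thermal power is assumed for illustration only.
half_life_h = 5 * 365.25 * 24                 # cobalt-60: about 5 years
initial_power_kw = 16.0                        # assumed for illustration
decay_constant = math.log(2) / half_life_h

def energy_kwh(hours):
    """Integral of P(t) = P0 * exp(-lambda * t) from 0 to `hours`."""
    return initial_power_kw / decay_constant * (1 - math.exp(-decay_constant * hours))

# Over many half-lives the total approaches P0 / lambda, about a million kWh here.
print(f"{energy_kwh(10 * half_life_h):,.0f} kWh")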

One downside of the medical isotopes is that they can produce high-energy X-rays in addition to heat. So Ultra Safe Nuclear wraps the fuel with a radiation-absorbing metal shield. But in the future, the EmberCore system could be designed for scientists to use the X-rays for experiments. “They buy this heater and get an X-ray source for free,” says Schilffarth. “We’ve talked with scientists who right now have to haul pieces of lunar or Martian regolith up to their sensor because the X-ray source is so weak. Now we’re talking about a spotlight that could shine down to do science from a distance.”

Ultra Safe Nuclear’s contract is one of two awarded by the DIU—which aims to speed up the deployment of commercial technology through military use—to develop nuclear power and propulsion for spacecraft. The other contract was awarded to Avalanche Energy, which is making a lunchbox-size fusion device it calls an Orbitron. The device will use electrostatic fields to trap high-speed ions in slowly changing orbits around a negatively charged cathode. Collisions between the ions can result in fusion reactions that produce energetic particles.

Both companies will use nuclear energy to power high-efficiency electric propulsion systems. Electric propulsion technologies such as ion thrusters, which use electromagnetic fields to accelerate ions and generate thrust, are more efficient than chemical rockets, which burn fuel. Solar panels typically power the ion thrusters that satellites use today to change their position and orientation. Schilffarth says that the higher power from EmberCore should give a greater velocity change of 10 kilometers per second in orbit than today’s electric propulsion systems.
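Velocity-change claims like this follow from the standard rocket equation, delta-v = Isp × g0 × ln(m_wet / m_dry). The sketch below uses illustrative values for specific impulse and mass ratio, not figures from Ultra Safe Nuclear or the DIU.

import math

# The rocket equation behind delta-v claims. Specific impulse and masses below
# are illustrative assumptions.
g0 = 9.81  # m/s^2

def delta_v_m_per_s(isp_seconds, wet_mass_kg, dry_mass_kg):
    return isp_seconds * g0 * math.log(wet_mass_kg / dry_mass_kg)

# A 2,500-second ion thruster on a spacecraft carrying a third of its mass as propellant:
print(f"{delta_v_m_per_s(2500, 900, 600) / 1000:.1f} km/s")   # ~9.9 km/s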

Ultra Safe Nuclear is also one of three companies developing nuclear fission thermal propulsion systems for NASA and the Department of Energy. Meanwhile, the Defense Advanced Research Projects Agency (DARPA) is seeking companies to develop a fission-based nuclear thermal rocket engine, with demonstrations expected in 2026.

This article appears in the August 2022 print issue as “Spacecraft to Run on Radioactive Decay.”


Match ID: 77 Score: 3.57 source: spectrum.ieee.org age: 173 days
qualifiers: 2.14 energy, 1.43 nuclear

Outside the Box: The Trump administration wants to discourage your 401(k) from including ESG investment options
Tue, 08 Sep 2020 22:11:24 GMT
Two proposed rules imply that investment managers promote social goals over sound investment analysis — but that’s not what actually happens.
Match ID: 78 Score: 3.57 source: www.marketwatch.com age: 812 days
qualifiers: 3.57 trump

How the Graphical User Interface Was Invented
Sun, 20 Nov 2022 20:00:00 +0000


Mice, windows, icons, and menus: these are the ingredients of computer interfaces designed to be easy to grasp, simplicity itself to use, and straightforward to describe. The mouse is a pointer. Windows divide up the screen. Icons symbolize application programs and data. Menus list choices of action.

But the development of today’s graphical user interface was anything but simple. It took some 30 years of effort by engineers and computer scientists in universities, government laboratories, and corporate research groups, piggybacking on each other’s work, trying new ideas, repeating each other’s mistakes.


This article was first published as “Of Mice and menus: designing the user-friendly interface.” It appeared in the September 1989 issue of IEEE Spectrum. A PDF version is available on IEEE Xplore. The photographs and diagrams appeared in the original print version.


Throughout the 1970s and early 1980s, many of the early concepts for windows, menus, icons, and mice were arduously researched at Xerox Corp.’s Palo Alto Research Center (PARC), Palo Alto, Calif. In 1973, PARC developed the prototype Alto, the first of two computers that would prove seminal in this area. More than 1200 Altos were built and tested. From the Alto’s concepts, starting in 1975, Xerox’s System Development Department then developed the Star and introduced it in 1981—the first such user-friendly machine sold to the public.

In 1984, the low-cost Macintosh from Apple Computer Inc., Cupertino, Calif., brought the friendly interface to thousands of personal computer users. During the next five years, the price of RAM chips fell enough to accommodate the huge memory demands of bit-mapped graphics, and the Mac was followed by dozens of similar interfaces for PCs and workstations of all kinds. By now, application programmers are becoming familiar with the idea of manipulating graphic objects.

The Mac’s success during the 1980s spurred Apple Computer to pursue legal action over ownership of many features of the graphical user interface. Suits now being litigated could assign those innovations not to the designers and their companies, but to those who first filed for legal protection on them.

The GUI started with Sketchpad


The grandfather of the graphical user interface was Sketchpad [see photograph]. Massachusetts Institute of Technology student Ivan E. Sutherland built it in 1962 as a Ph.D. thesis at MIT’s Lincoln Laboratory in Lexington, Mass. Sketchpad users could not only draw points, line segments, and circular arcs on a cathode ray tube (CRT) with a light pen—they could also assign constraints to, and relationships among, whatever they drew.

Arcs could have a specified diameter, lines could be horizontal or vertical, and figures could be built up from combinations of elements and shapes. Figures could be moved, copied, shrunk, expanded, and rotated, with their constraints (shown as onscreen icons) dynamically preserved. At a time when a CRT monitor was a novelty in itself, the idea that users could interactively create objects by drawing on a computer was revolutionary.


Man sitting in front of a round cathode ray display with a white square and triangle on a black background

Moreover, to zoom in on objects, Sutherland wrote the first window-drawing program, which required him to come up with the first clipping algorithm. Clipping is a software routine that calculates which part of a graphic object is to be displayed and displays only that part on the screen. The program must calculate where a line is to be drawn, compare that position to the coordinates of the window in use, and prevent the display of any line segment whose coordinates fall outside the window.
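A modern reader can get a feel for what such a routine must do from a short sketch. The version below uses the later, classic Cohen-Sutherland outcode method rather than Sketchpad’s own algorithm, which it only illustrates.

# A sketch of rectangular-window line clipping using Cohen-Sutherland outcodes
# (an illustration of the idea, not Sketchpad's original routine).
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = INSIDE
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def clip_line(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Return the visible part of the segment, or None if it lies entirely outside."""
    c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    while True:
        if not (c0 | c1):          # both endpoints inside the window: accept
            return x0, y0, x1, y1
        if c0 & c1:                # both endpoints share an outside zone: reject
            return None
        c = c0 or c1               # pick an endpoint that is outside
        if c & TOP:
            x, y = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0), ymax
        elif c & BOTTOM:
            x, y = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0), ymin
        elif c & RIGHT:
            x, y = xmax, y0 + (y1 - y0) * (xmax - x0) / (x1 - x0)
        else:                      # LEFT
            x, y = xmin, y0 + (y1 - y0) * (xmin - x0) / (x1 - x0)
        if c == c0:
            x0, y0, c0 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)
        else:
            x1, y1, c1 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)

print(clip_line(-5, 5, 15, 5, 0, 0, 10, 10))   # -> (0, 5.0, 10, 5.0): only the span inside the window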

Though films of Sketchpad in operation were widely shown in the computer research community, Sutherland says today that there was little immediate fallout from the project. Running on MIT’s TX-2 mainframe, it demanded too much computing power to be practical for individual use. Many other engineers, however, see Sketchpad’s design and algorithms as a primary influence on an entire generation of research into user interfaces.

The origin of the computer mouse


The light pens used to select areas of the screen by interactive computer systems of the 1950s and 1960s—including Sketchpad—had drawbacks. To do the pointing, the user’s arm had to be lifted up from the table, and after a while that got tiring. Picking up the pen required fumbling around on the table or, if it had a holder, taking the time after making a selection to put it back.

Sensing an object with a light pen was straightforward: the computer displayed spots of light on the screen and interrogated the pen as to whether it sensed a spot, so the program always knew just what was being displayed. Locating the position of the pen on the screen required more sophisticated techniques—like displaying a cross pattern of nine points on the screen, then moving the cross until it centered on the light pen.

In 1964, Douglas Engelbart, a research project leader at SRI International in Menlo Park, Calif., tested all the commercially available pointing devices, from the still-popular light pen to a joystick and a Graphicon (a curve-tracing device that used a pen mounted on the arm of a potentiometer). But he felt the selection failed to cover the full spectrum of possible pointing devices, and somehow he should fill in the blanks.

Then he remembered a 1940s college class he had taken that covered the use of a planimeter to calculate area. (A planimeter has two arms, with a wheel on each. The wheels can roll only along their axes; when one of them rolls, the other must slide.)

If a potentiometer were attached to each wheel to monitor its rotation, he thought, a planimeter could be used as a pointing device. Engelbart explained his roughly sketched idea to engineer William English, who with the help of the SRI machine shop built what they quickly dubbed “the mouse.”



This first mouse was big because it used single-turn potentiometers: one rotation of the wheels had to be scaled to move a cursor from one side of the screen to the other. But it was simple to interface with the computer: the processor just read frequent samples of the potentiometer positioning signals through analog-to-digital converters.

The cursor moved by the mouse was easy to locate, since readings from the potentiometer determined the position of the cursor on the screen, unlike the light pen. But programmers for later windowing systems found that the software necessary to determine which object the mouse had selected was more complex than that for the light pen: they had to compare the mouse’s position with that of all the objects displayed onscreen.
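In rough terms, that hit-testing looks like the sketch below; the rectangle-based object list is a simplification, since real windowing systems track far richer geometry.

# A sketch of hit-testing: check the cursor against every object on screen,
# topmost (most recently drawn) first. Rectangles are (x, y, width, height).
def hit_test(cursor_x, cursor_y, objects):
    """Return the topmost object containing the cursor, or None."""
    for obj in reversed(objects):                  # last drawn = topmost
        x, y, w, h = obj["rect"]
        if x <= cursor_x < x + w and y <= cursor_y < y + h:
            return obj
    return None

screen_objects = [
    {"name": "document window", "rect": (10, 10, 300, 200)},
    {"name": "trash icon",      "rect": (280, 180, 48, 48)},
]
hit = hit_test(290, 190, screen_objects)
print(hit["name"] if hit else "desktop")           # "trash icon"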

The computer mouse gets redesigned—and redesigned again

Engelbart’s group at SRI ran controlled experiments with mice and other pointing devices, and the mouse won hands down. People adapted to it quickly, it was easy to grab, and it stayed where they put it. Still, Engelbart wanted to tinker with it. After experimenting, his group had concluded that the proper ratio of cursor movement to mouse movement was about 2:1, but he wanted to try varying that ratio—decreasing it at slow speeds and raising it at fast speeds—to improve user control of fine movements and speed up larger movements. Some modern mouse-control software incorporates this idea, including that of the Macintosh.
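A minimal sketch of such speed-dependent gain follows; the thresholds and multipliers are chosen purely for illustration and are not values from Engelbart’s experiments or from any shipping driver.

# A sketch of speed-dependent cursor gain ("pointer acceleration").
def cursor_delta(mouse_dx, mouse_dy):
    speed = (mouse_dx ** 2 + mouse_dy ** 2) ** 0.5    # counts moved this sample
    gain = 1.0 if speed < 4 else 2.0 if speed < 16 else 3.5
    return mouse_dx * gain, mouse_dy * gain

print(cursor_delta(2, 1))    # slow, precise motion maps nearly 1:1 -> (2.0, 1.0)
print(cursor_delta(20, 5))   # a fast sweep is amplified -> (70.0, 17.5)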

The mouse, still experimental at this stage, did not change until 1971. Several members of Engelbart’s group had moved to the newly established PARC, where many other researchers had seen the SRI mouse and the test report. They decided there was no need to repeat the tests; any experimental systems they designed would use mice.

Said English, “This was my second chance to build a mouse; it was obvious that it should be a lot smaller, and that it should be digital.” Chuck Thacker, then a member of the research staff, advised PARC to hire inventor Jack Hawley to build it.

Hawley decided the mouse should use shaft encoders, which measure position by a series of pulses, instead of potentiometers (both were covered in Engelbart’s 1970 patent), to eliminate the expensive analog-to-digital converters. The basic principle, of one wheel rolling while the other slid, was licensed from SRI.

The ball mouse was the “easiest patent I ever got. It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”
—Ron Rider

In 1972, the mouse changed again. Ron Rider, now vice president of systems architecture at PARC but then a new arrival, said he was using the wheel mouse while an engineer made excuses for its asymmetric operation (one wheel dragging while one turned). “I suggested that they turn a trackball upside down, make it small, and use it as a mouse instead,” Rider told IEEE Spectrum. This device came to be known as the ball mouse. “Easiest patent I ever got,” Rider said. “It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”

Defining terms


Bit map

The pixel pattern that makes up the graphic display on a computer screen.

Clicking

The motion of pressing a mouse button to Initiate an action by software; some actions require double-clicking.

Graphical user interface (GUI)

The combination of windowing displays, menus, icons, and a mouse that is increasingly used on personal computers and workstations.

Icon

An onscreen drawing that represents programs or data.

Menu

A list of command options currently available to the computer user; some stay onscreen, while pop-up or pull-down menus are requested by the user.

Mouse

A device whose motion across a desktop or other surface causes an on-screen cursor to move commensurately; today’s mice move on a ball and have one, two, or three buttons.

Raster display

A cathode ray tube on which Images are displayed as patterns of dots, scanned onto the screen sequentially in a predetermined pattern of lines.

Vector display

A cathode ray tube whose gun scans lines, or vectors, onto the screen phosphor.

Window

An area of a computer display, usually one of several, in which a particular program is executing.


In the PARC ball mouse design, the weight of the mouse is transferred to the ball by a swivel device and on one or two casters at the end of the mouse farthest from the wire “tail.” A prototype was built by Xerox’s Electronics Division in El Segundo, Calif., then redesigned by Hawley. The rolling ball turned two perpendicular shafts, with a drum on the end of each that was coated with alternating stripes of conductive and nonconductive material. As the drum turned, the stripes transmitted electrical impulses through metal wipers.

When Apple Computer decided in 1979 to design a mouse for its Lisa computer, the design mutated yet again. Instead of a metal ball held against the substrate by a swivel, Apple used a rubber ball whose traction depended on the friction of the rubber and the weight of the ball itself. Simple pads on the bottom of the case carried the weight, and optical scanners detected the motion of the internal wheels. The device had loose tolerances and few moving parts, so that it cost perhaps a quarter as much to build as previous ball mice.

How the computer mouse gained and lost buttons

The first, wooden, SRI mouse had only one button, to test the concept. The plastic batch of SRI mice had three side-by-side buttons—all there was room for, Engelbart said. The first PARC mouse had a column of three buttons—again, because that best fit the mechanical design. Today, the Apple mouse has one button, while the rest have two or three. The issue is no longer space—a standard 6-by-10-cm mouse could now have dozens of buttons—but human factors, and the experts have strong opinions.

Said English, now director of internationalization at Sun Microsystems Inc., Mountain View, Calif.: “Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.” He sees two buttons as the minimum because two functions are basic to selecting an object: pointing to its start, then extending the motion to the end of the object.

William Verplank, a human factors specialist in the group that tested the graphical interface at Xerox from 1978 into the early 1980s, concurred. He told Spectrum that with three buttons, Alto users forgot which button did what. The group’s tests showed that one button was also confusing, because it required actions such as double-clicking to select and then open a file.

“We have agonizing videos of naive users struggling” with these problems, Verplank said. They concluded that for most users, two buttons (as used on the Star) are optimal, if a button means the same thing in every application. English experimented with one-button mice at PARC before concluding they were a bad idea.


“Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.”
—William English


A computer monitor with a chunky white keyboard sitting on a desk

But many interface designers dislike multiple buttons, saying that double-clicking a single button to select an item is easier than remembering which button points and which extends. Larry Tesler, formerly a computer scientist at PARC, brought the one-button mouse to Apple, where he is now vice president of advanced technology. The company’s rationale is that to attract novices to its computers one button was as simple as it could get.

More than two million one-button Apple mice are now in use. The Xerox and Microsoft two-button mice are less common than either Apple’s ubiquitous one-button model or the three-button mice found on technical workstations. Dozens of companies manufacture mice today; most are slightly smaller than a pack of cigarettes, with minor variations in shape.

How windows first came to the computer screen


In 1962, Sketchpad could split its screen horizontally into two independent sections. One section could, for example, give a close-up view of the object in the other section. Researchers call Sketchpad the first example of tiled windows, which are laid out side by side. They differ from overlapping windows, which can be stacked on top of each other, or overlaid, obscuring all or part of the lower layers.

Windows were an obvious means of adding functionality to a small screen. In 1969, Engelbart equipped NLS (as the On-Line System he invented at SRI during the 1960s was known, to distinguish it from the Off-Line System known as FLS) with windows. They split the screen into multiple parts horizontally or vertically, and introduced cross-window editing with a mouse.

By 1972, led by researcher Alan Kay, the Smalltalk programming language group at Xerox PARC had implemented their version of windows. They were working with far different technology from Sutherland or Engelbart: by deciding that their images had to be displayed as dots on the screen, they led a move from vector to raster displays, to make it simple to map the assigned memory location of each of those spots. This was the bit map invented at PARC, and made viable during the 1980s by continual performance improvements in processor logic and memory speed.

Experimenting with bit-map manipulation, Smalltalk researcher Dan Ingalls developed the bit-block transfer procedure, known as BitBlt. The BitBlt software enabled application programs to mix and manipulate rectangular arrays of pixel values in on-screen or off-screen memory, or between the two, combining the pixel values and storing the result in the appropriate bit-map location.
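In spirit, BitBlt is a loop that combines a rectangular block of source pixels with the destination under a chosen raster operation. The toy version below is a sketch of that idea, not PARC’s implementation.

# A toy BitBlt: combine a rectangular block of source pixels with a destination
# bitmap under a chosen raster operation. Bitmaps here are lists of rows of integers.
RASTER_OPS = {
    "copy": lambda s, d: s,
    "or":   lambda s, d: s | d,
    "xor":  lambda s, d: s ^ d,
}

def bitblt(src, sx, sy, dst, dx, dy, width, height, op="copy"):
    combine = RASTER_OPS[op]
    for row in range(height):
        for col in range(width):
            dst[dy + row][dx + col] = combine(src[sy + row][sx + col],
                                              dst[dy + row][dx + col])

screen = [[0] * 8 for _ in range(8)]
glyph = [[1, 1], [1, 1]]
bitblt(glyph, 0, 0, screen, 3, 3, 2, 2, op="or")   # paint a 2x2 block at (3, 3)
print(screen[3][3], screen[4][4])                  # 1 1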

BitBlt made it much easier to write programs to scroll a window (move an image through it), resize (enlarge or contract) it, and drag windows (move them from one location to another on screen). It led Kay to create overlapping windows. They were soon implemented by the Smalltalk group, but made clipping harder.

Some researchers question whether overlapping windows offer more benefits than tiled on the grounds that screens with overlapping windows become so messy the user gets lost.

In a tiling system, explained researcher Peter Deutsch, who worked with the Smalltalk group, the clipping borders are simply horizontal or vertical lines from one screen border to another, and software just tracks the location of those lines. But overlapping windows may appear anywhere on the screen, randomly obscuring bits and pieces of other windows, so that quite irregular regions must be clipped. Thus application software must constantly track which portions of their windows remain visible.

Some researchers still question whether overlapping windows offer more benefits than tiled, at least above a certain screen size, on the grounds that screens with overlapping windows become so messy the user gets lost. Others argue that overlapping windows more closely match users’ work patterns, since no one arranges the papers on their physical desktop in neat horizontal and vertical rows. Among software engineers, however, overlapping windows seem to have won for the user interface world.

So has the cut-and-paste editing model that Larry Tesler developed, first for the Gypsy text editor he wrote at PARC and later for Apple. Charles Irby—who worked on Xerox’s windows and is now vice president of development at Metaphor Computer Systems Inc., Mountain View, Calif.—noted, however, that cut-and-paste worked better for pure text-editing than for moving graphic objects from one application to another.

The origin of the computer menu bar


Menus—functions continuously listed onscreen that could be called into action with key combinations—were commonly used in defense computing by the 1960s. But it was only with the advent of BitBlt and windows that menus could be made to appear as needed and to disappear after use. Combined with a pointing device to indicate a user’s selection, they are now an integral part of the user-friendly interface: users no longer need to refer to manuals or memorize available options.

Instead, the choices can be called up at a moment’s notice whenever needed. And menu design has evolved. Some new systems use nested hierarchies of menus; others offer different menu versions—one with the most commonly used commands for novices, another with all available commands for the experienced user.

Among the first to test menus on demand was PARC researcher William Newman, in a program called Markup. Hard on his heels, the Smalltalk group built in pop-up menus that appeared on screen at the cursor site when the user pressed one of the mouse buttons.

Implementation was on the whole straightforward, recalled Deutsch. The one exception was determining whether the menu or the application should keep track of the information temporarily obscured by the menu. In the Smalltalk 76 version, the popup menu saved and restored the screen bits it overwrote. But in today’s multitasking systems, that would not work, because an application may change those bits without the menu’s knowledge. Such systems add another layer to the operating system: a display manager that tracks what is written where.
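The Smalltalk-76 behavior can be sketched in a few lines: save the patch of screen memory the pop-up will cover, draw the menu, and copy the saved bits back when the menu is dismissed. This is a simplified illustration, not the original Smalltalk code.

# Save-and-restore for a pop-up menu over a toy framebuffer (list of rows).
def save_rect(framebuffer, x, y, w, h):
    return [row[x:x + w] for row in framebuffer[y:y + h]]

def restore_rect(framebuffer, x, y, saved):
    for i, row in enumerate(saved):
        framebuffer[y + i][x:x + len(row)] = row

fb = [[0] * 10 for _ in range(10)]
saved = save_rect(fb, 2, 2, 4, 3)         # remember what the pop-up will obscure
for row in range(2, 5):                   # "draw" a 4-pixel-wide menu
    fb[row][2:6] = [7, 7, 7, 7]
restore_rect(fb, 2, 2, saved)             # menu dismissed: the screen is back as it was
print(fb[3][3])                           # 0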

The production Xerox Star, in 1981, featured a further advance: a menu bar, essentially a row of words indicating available menus that could be popped up for each window. Human factors engineer Verplank recalled that the bar was at first located at the bottom of its window. But the Star team found users were more likely to associate a bar with the window below it, so it was moved to the top of its window.

Apple simplified things in its Lisa and Macintosh with a single bar placed at the top of the screen. This menu bar relates only to the window in use: the menus could be ‘‘pulled down” from the bar, to appear below it. Designer William D. Atkinson received a patent (assigned to Apple Computer) in August 1984 for this innovation.

One new addition that most user interface pioneers consider an advantage is the tear-off menu, which the user can move to a convenient spot on the screen and “pin” there, always visible for ready access.

Many windowing interfaces now offer command-key or keyboard alternatives for many commands as well. This return to the earliest of user interfaces—key combinations—neatly supplements menus, providing both ease of use for novices and for the less experienced, and speed for those who can type faster than they can point to a menu and click on a selection.

How the computer “icon” got its name


Sketchpad had on-screen graphic objects that represented constraints (for example, a rule that lines be the same length), and the Flex machine built in 1967 at the University of Utah by students Alan Kay and Ed Cheadle had squares that represented programs and data (like today’s computer “folders”). Early work on icons was also done by Bell Northern Research, Ottawa, Canada, stemming from efforts to replace the recently legislated bilingual signs with graphic symbols.

But the concept of the computer “icon” was not formalized until 1975. David Canfield Smith, a computer science graduate student at Stanford University in California, began work on his Ph.D. thesis in 1973. His advisor was PARC’s Kay, who suggested that he look at using the graphics power of the experimental Alto not just to display text, but rather to help people program.

David Canfield Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents.

Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents: a Russian icon of a saint is holy and is to be venerated. Smith’s computer icons contained all the properties of the programs and data represented, and therefore could be linked or acted on as if they were the real thing.

After receiving his Ph.D. in 1975, Smith joined Xerox in 1976 to work on Star development. The first thing he did, he said, was to recast his concept of icons in office terms. “I looked around my office and saw papers, folders, file cabinets, a telephone, and bookshelves, and it was an easy translation to icons,” he said.

Xerox researchers developed, tested, and revised icons for the Star interface for three years before the first version was complete. At first they attempted to make the icons look like a detailed photographic rendering of the object, recalled Irby, who worked on testing and refining the Xerox windows. Trading off label space, legibility, and the number of icons that fit on the screen, they decided to constrain icons to a 1-inch (2.5-centimeter) square of 64 by 64 pixels, or 512 eight-bit bytes.

Then, Verplank recalls, they discovered that because of a background pattern based on two-pixel dots, the right-hand side of the icons appeared jagged. So they increased the width of the icons to 65 pixels, despite an outcry from programmers who liked the neat 16-bit breakdown. But the increase stuck, Verplank said, because they had already decided to store 72 bits per side to allow for white space around each icon.
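The byte counts involved follow directly from one-bit-per-pixel arithmetic for monochrome bitmaps.

print(64 * 64 // 8)   # 512 bytes for a 64-by-64 monochrome icon
print(72 * 72 // 8)   # 648 bytes when 72 bits per side are stored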

After settling on a size for the icons, the Star developers tested four sets developed by two graphic designers and two software engineers. They discovered that, for example, resizing may cause problems. They shrunk the icon for a person—a head and shoulders—in order to use several of them to represent a group, only to hear one test subject say the screen resolution made the reduced icon look like a cross above a tombstone. Computer graphics artist Norm Cox, now of Cox & Hall, Dallas, Texas, was finally hired to redesign the icons.

Icon designers today still wrestle with the need to make icons adaptable to the many different system configurations offered by computer makers. Artist Karen Elliott, who has designed icons for Microsoft, Apple, Hewlett-Packard Co., and others, noted that on different systems an icon may be displayed in different colors, several resolutions, and a variety of gray shades, and it may also be inverted (light and dark areas reversed).

In the past few years, another concern has been added to icon designers’ tasks: internationalization. Icons designed in the United States often lack space for translations into languages other than English. Elliott therefore tries to leave space for both the longer words and the vertical orientation of some languages.


A square white macintosh computer with a white keyboard, in a separate image below, computer icons and the text address book, address, addresses

The main rule is to make icons simple, clean, and easily recognizable. Discarded objects are placed in a trash can on the Macintosh. On the NeXT Computer System, from NeXT Inc., Palo Alto, Calif.—the company formed by Apple cofounder Steven Jobs after he left Apple—they are dumped into a Black Hole. Elliott sees NeXT’s black hole as one of the best icons ever designed: “It is distinct; its roundness stands out from the other, square icons, and this is important on a crowded display. It fits my image of information being sucked away, and it makes it clear that dumping something is serious.”

English disagrees vehemently. The black hole “is fundamentally wrong,” he said. “You can dig paper out of a wastebasket, but you can’t dig it out of a black hole.” Another critic called the black hole familiar only to “computer nerds who read mostly science fiction and comics,” not to general users.

With the introduction of the Xerox Star in June 1981, the graphical user interface, as it is known today, arrived on the market. Though not a commercial triumph, the Star generated great interest among computer users, as the Alto before it had within the universe of computer designers.

Even before the Star was introduced, Jobs, then still at Apple, had visited Xerox PARC in November 1979 and asked the Smalltalk researchers dozens of questions about the Alto’s internal design. He later recruited Larry Tesler from Xerox to design the user interface of the Apple Lisa.

With the Lisa and then the Macintosh, introduced in January 1983 and January 1984 respectively, the graphical user interface reached the low-cost, high-volume computer market.

Buyers deemed the Lisa, at almost $10,000, too expensive for the office market. But aided by prizewinning advertising and its lower price, the Macintosh took the world by storm. Early Macs had only 128K bytes of RAM, which made them slow to respond because it was too little memory for heavy graphic manipulation. Also, the time needed for programmers to learn its Toolbox of graphics routines delayed application packages until well into 1985. But the Mac’s ease of use was indisputable, and it generated interest that spilled over into the MS-DOS world of IBM PCs and clones, as well as Unix-based workstations.

Who owns the graphical user interface?


The widespread acceptance of such interfaces, however, has led to bitter lawsuits to establish exactly who owns what. So far, none of several litigious companies has definitively established that it owns the software that implements windows, icons, or early versions of menus. But the suits continue.

Virtually all the companies that make and sell either wheel or ball mice paid license fees to SRI or to Xerox for their patents. Engelbart recalled that SRI patent attorneys inspected all the early work on the interface, but understood only hardware. After looking at developments like the implementation of windows, they told him that none of it was patentable.

At Xerox, the Star development team proposed 12 patents having to do with the user interface. The company’s patent committee rejected all but two on hardware—one on BitBlt, the other on the Star architecture. At the time, Charles Irby said, it was a good decision. Patenting required full disclosure, and no precedents then existed for winning software patent suits.


A computer screen in blue and white with multiple open windows


Three computer windows with greyscale images on a dark grey background


Computer windows tinted blue on a black background partially obscuring a planet and starfield


The most recent and most publicized suit was filed in March 1988, by Apple, against both Microsoft and Hewlett-Packard Co., Palo Alto, Calif. Apple alleges that HP’s New Wave interface, requiring version 2.03 of Microsoft’s Windows program, embodies the copyrighted “audio visual computer display” of the Macintosh without permission; that the displays of Windows 2.03 are illegal copies of the Mac’s audiovisual works; and that Windows 2.03 also exceeds the rights granted in a November 1985 agreement in which Microsoft acknowledged that the displays in Windows 1.0 were derivatives of those in Apple’s Lisa and Mac.

In March 1989, U.S. District Judge William W. Schwarzer ruled Microsoft had exceeded the bounds of its license in creating Windows 2.03. Then in July 1989 Schwarzer ruled that all but 11 of the 260 items that Apple cited in its suit were, in fact, acceptable under the 1985 agreement. The larger issue—whether Apple’s copyrights are valid, and whether Microsoft and HP infringed on them—will not now be examined until 1990.

Among those 11 are overlapping windows and movable icons. According to Pamela Samuelson, a noted software intellectual property expert and visiting professor at Emory University Law School, Atlanta, Ga., many experts would regard both as functional features of an interface that cannot be copyrighted, rather than “expressions” of an idea protectable by copyright.

But lawyers for Apple—and for other companies that have filed lawsuits to protect the “look and feel” of their screen displays—maintain that if such protection is not granted, companies will lose the economic incentive to market technological innovations. How is Apple to protect its investment in developing the Lisa and Macintosh, they argue, if it cannot license its innovations to companies that want to take advantage of them?

If the Apple-Microsoft case does go to trial on the copyright issues, Samuelson said, the court may have to consider whether Apple can assert copyright protection for overlapping windows-an interface feature on which patents have also been granted. In April 1989, for example, Quarterdeck Office Systems Inc., Santa Monica, Calif., received a patent for a multiple windowing system in its Desq system software, introduced in 1984.

Adding fuel to the legal fire, Xerox said in May 1989 it would ask for license fees from companies that use the graphical user interface. But it is unclear whether Xerox has an adequate claim to either copyright or patent protection for the early graphical interface work done at PARC. Xerox did obtain design patents on later icons, noted human factors engineer Verplank. Meanwhile, both Metaphor and Sun Microsystems have negotiated licenses with Xerox for their own interfaces.

To Probe Further

The September 1989 IEEE Computer contains an article, “The Xerox ‘Star’: A Retrospective,” by Jeff Johnson et al., covering development of the Star. “Designing the Star User Interface,” [PDF] by David C. Smith et al., appeared in the April 1982 issue of Byte.

The Sept. 12, 1989, PC Magazine contains six articles on graphical user interfaces for personal computers and workstations. The July 1989 Byte includes “A Guide to [Graphical User Interfaces],” by Frank Hayes and Nick Baran, which describes 12 current interfaces for workstations and personal computers. “The Interface of Tomorrow, Today,” by Howard Rheingold, in the July 10, 1989, InfoWorld does the same. “The interface that launched a thousand imitations,” by Richard Rawles, in the March 21, 1989, MacWeek covers the Macintosh interface.

The human factors of user interface design are discussed in The Psychology of Everyday Things, by Donald A. Norman (Basic Books Inc., New York, 1988). The January 1989 IEEE Software contains several articles on methods, techniques, and tools for designing and implementing graphical interfaces. The Way Things Work, by David Macaulay (Houghton Mifflin Co., Boston, 1988), contains a detailed drawing of a ball mouse.

The October 1985 IEEE Spectrum covered Xerox PARC’s history in “Research at Xerox PARC: a founder’s assessment,” by George Pake (pp. 54-61) and “Inside the PARC: the ‘information architects,’” by Tekla Perry and Paul Wallich (pp. 62-75).

William Atkinson received patent no. 4,464,652 for the pulldown menu system on Aug. 8, 1984, and assigned it to Apple. Gary Pope received patent no. 4,823,108, for an improved system for displaying images in “windows” on a computer screen, on April 18, 1989, and assigned it to Quarterdeck Office Systems.

The wheel mouse patent, no. 3,541,541, “X-Y position indicator for a display system,” was issued to Douglas Engelbart on Nov. 17, 1970, and assigned to SRI International. The ball mouse patent, no. 3,835,464, was issued to Ronald Rider on Sept. 10, 1974, and assigned to Xerox.

The first selection device tests to include a mouse are covered in “Display-Selection Techniques for Text Manipulation,” by William English, Douglas Engelbart, and Melvyn Berman, in IEEE Transactions on Human Factors in Electronics, March 1967.

Sketchpad: A Man-Machine Graphical Communication System, by Ivan E. Sutherland (Garland Publishing Inc., New York City and London, 1980), reprints his 1963 Ph.D. thesis.










Match ID: 79 Score: 2.86 source: spectrum.ieee.org age: 9 days
qualifiers: 2.86 russia

A Destabilizing Hack-and-Leak Operation Hits Moldova
Sat, 19 Nov 2022 14:00:00 +0000
Plus: Google’s location snooping ends in a $391 million settlement, Russian code sneaks into US government apps, and the World Cup apps set off alarms.
Match ID: 80 Score: 2.86 source: www.wired.com age: 10 days
qualifiers: 2.86 russia

Russian Software Company Pretending to Be American
2022-11-16T11:03:27Z

Computer code developed by a company called Pushwoosh is in about 8,000 Apple and Google smartphone apps. The company pretends to be American when it is actually Russian.

According to company documents publicly filed in Russia and reviewed by Reuters, Pushwoosh is headquartered in the Siberian town of Novosibirsk, where it is registered as a software company that also carries out data processing. It employs around 40 people and reported revenue of 143,270,000 rubles ($2.4 mln) last year. Pushwoosh is registered with the Russian government to pay taxes in Russia...


Match ID: 81 Score: 2.86 source: www.schneier.com age: 13 days
qualifiers: 2.86 russia

‘Dark Ships’ Emerge From the Shadows of the Nord Stream Mystery
Fri, 11 Nov 2022 12:00:00 +0000
Satellite monitors discovered two vessels with their trackers turned off in the area of the pipeline prior to the suspected sabotage in September.
Match ID: 82 Score: 2.86 source: www.wired.com age: 18 days
qualifiers: 2.86 russia

Russia’s Sway Over Criminal Ransomware Gangs Is Coming Into Focus
Thu, 10 Nov 2022 19:40:12 +0000
Questions about the Kremlin’s relationships with these groups remain. But researchers are finally getting some answers.
Match ID: 83 Score: 2.86 source: www.wired.com age: 19 days
qualifiers: 2.86 russia

NASA’s Artemis I Revives the Moonshot
Sun, 28 Aug 2022 13:00:00 +0000



Update 5 Sept.: For now, NASA’s giant Artemis I remains on the ground after two launch attempts scrubbed by a hydrogen leak and a balky engine sensor. Mission managers say Artemis will fly when everything's ready—but haven't yet specified whether that might be in late September or in mid-October.

“When you look at the rocket, it looks almost retro,” said Bill Nelson, the administrator of NASA. “Looks like we’re looking back toward the Saturn V. But it’s a totally different, new, highly sophisticated—more sophisticated—rocket, and spacecraft.”

Artemis, powered by the Space Launch System rocket, is America’s first attempt to send astronauts to the moon since Apollo 17 in 1972, and technology has taken giant leaps since then. On Artemis I, the first test flight, mission managers say they are taking the SLS, with its uncrewed Orion spacecraft up top, and “stressing it beyond what it is designed for”—the better to ensure safe flights when astronauts make their first landings, currently targeted to begin with Artemis III in 2025.

But Nelson is right: The rocket is retro in many ways, borrowing heavily from the space shuttles America flew for 30 years, and from the Apollo-Saturn V.

Much of Artemis’s hardware is refurbished: Its four main engines, and parts of its two strap-on boosters, all flew before on shuttle missions. The rocket’s apricot color comes from spray-on insulation much like the foam on the shuttle’s external tank. And the large maneuvering engine in Orion’s service module is actually 40 years old—used on 19 space shuttle flights between 1984 and 1992.

“I have a name for missions that use too much new technology—failures.”
—John Casani, NASA

Perhaps more important, the project inherits basic engineering from half a century of spaceflight. Just look at Orion’s crew capsule—a truncated cone, somewhat larger than the Apollo Command Module but conceptually very similar.

Old, of course, does not mean bad. NASA says there is no need to reinvent things engineers got right the first time.

“There are certain fundamental aspects of deep-space exploration that are really independent of money,” says Jim Geffre, Orion vehicle-integration manager at the Johnson Space Center in Houston. “The laws of physics haven’t changed since the 1960s. And capsule shapes happen to be really good for coming back into the atmosphere at Mach 32.”

Roger Launius, who served as NASA’s chief historian from 1990 to 2002 and as a curator at the Smithsonian Institution from then until 2017, tells of a conversation he had with John Casani, a veteran NASA engineer who managed the Voyager, Galileo, and Cassini probes to the outer planets.

“I have a name for missions that use too much new technology,” he recalls Casani saying. “Failures.”

The Artemis I flight is slated for about six weeks. (Apollo 11 lasted eight days.) The ship roughly follows Apollo’s path to the moon’s vicinity, but then puts itself in what NASA calls a distant retrograde orbit. It swoops within 110 kilometers of the lunar surface for a gravity assist, then heads 64,000 km out—taking more than a month but using less fuel than it would in closer orbits. Finally, it comes home, reentering the Earth’s atmosphere at 11 km per second, slowing itself with a heatshield and parachutes, and splashing down in the Pacific not far from San Diego.

If all four of the quad-redundant flight computer modules fail, there is a fifth, entirely separate computer onboard, running different code to get the spacecraft home.

“That extra time in space,” says Geffre, “allows us to operate the systems, give more time in deep space, and all those things that stress it, like radiation and micrometeoroids, thermal environments.”

There are, of course, newer technologies on board. Orion is controlled by two vehicle-management computers, each composed of two flight computer modules (FCMs) to handle guidance, navigation, propulsion, communications, and other systems. The flight control system, Geffre points out, is quad-redundant; if at any point one of the four FCMs disagrees with the others, it will take itself offline and, in a 22-second process, reset itself to make sure its outputs are consistent with the others’. If all four FCMs fail, there is a fifth, entirely separate computer running different code to get the spacecraft home.
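To make the voting scheme concrete, here is a minimal illustrative sketch in Python of majority voting among redundant modules. It is a toy, not NASA’s flight software; the module names, sample values, and the vote() helper are hypothetical.

from collections import Counter

RESET_SECONDS = 22  # reported self-reset time for a disagreeing FCM

def vote(outputs):
    """Return the majority output among online FCMs and any dissenting modules."""
    majority, _ = Counter(outputs.values()).most_common(1)[0]
    dissenters = [name for name, value in outputs.items() if value != majority]
    return majority, dissenters

# Example: module "C" glitches, is outvoted, and would take itself offline to reset.
outputs = {"A": 101.3, "B": 101.3, "C": 250.0, "D": 101.3}
command, offline = vote(outputs)
print(command, offline)  # -> 101.3 ['C']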

Guidance and navigation, too, have advanced since the sextant used on Apollo. Orion uses a star tracker to determine its attitude, imaging stars and comparing them to an onboard database. And an optical navigation camera shoots Earth and the moon so that guidance software can determine their distance and position and keep the spacecraft on course. NASA says it’s there as backup, able to get Orion to a safe splashdown even if all communication with Earth has been lost.

But even those systems aren’t entirely new. Geffre points out that the guidance system’s architecture is derived from the Boeing 787. Computing power in deep space is limited by cosmic radiation, which can corrupt the output of microprocessors beyond the protection of Earth’s atmosphere and magnetic field.

Beyond that is the inevitable issue of cost. Artemis is a giant project, years behind schedule, started long before NASA began to buy other launches from companies like SpaceX and Rocket Lab. NASA’s inspector general, Paul Martin, testified to Congress in March that the first four Artemis missions would cost US $4.1 billion each—“a price tag that strikes us as unsustainable.”

Launius, for one, rejects the argument that government is inherently wasteful. “Yes, NASA’s had problems in managing programs in the past. Who hasn’t?” he says. He points out that Blue Origin and SpaceX have had plenty of setbacks of their own—they’re just not obliged to be public about them. “I could go on and on. It’s not a government thing per se and it’s not a NASA thing per se.”

So why return to the moon with—please forgive the pun—such a retro rocket? Partly, say those who watch Artemis closely, because it’s become too big to fail, with so much American money and brainpower invested in it. Partly because it turns NASA’s astronauts outward again, exploring instead of maintaining a space station. Partly because new perspectives could come of it. And partly because China and Russia have ambitions in space that threaten America’s.

“Apollo was a demonstration of technological virtuosity—to the whole world,” says Launius. “And the whole world knew then, as they know today, that the future belongs to the civilization that can master science and technology.”

Update 7 Sept.: Artemis I has been on launchpad 39B, not 39A as previously reported, at Kennedy Space Center.


Match ID: 84 Score: 2.86 source: spectrum.ieee.org age: 93 days
qualifiers: 2.86 russia

DARPA Wants a Better, Badder Caspian Sea Monster
Thu, 19 May 2022 19:31:02 +0000


Arguably, the primary job of any military organization is moving enormous amounts of stuff from one place to another as quickly and efficiently as possible. Some of that stuff is weaponry, but the vast majority are things that support that weaponry—fuel, spare parts, personnel, and so on. At the moment, the U.S. military has two options when it comes to transporting large amounts of payload. Option one is boats (a sealift), which are efficient, but also slow and require ports. Option two is planes (an airlift), which are faster by a couple of orders of magnitude, but also expensive and require runways.

To solve this, the Defense Advanced Research Projects Agency (DARPA) wants to combine traditional sealift and airlift with the Liberty Lifter program, which aims to “design, build, and flight test an affordable, innovative, and disruptive seaplane” that “enables efficient theater-range transport of large payloads at speeds far exceeding existing sea lift platforms.”

[Video credit: DARPA]

DARPA is asking for a design like this to take advantage of ground effect, which occurs when an aircraft’s wing deflects air downward and proximity to the ground generates a cushioning effect due to the compression of air between the bottom of the wing and the ground. This boosts lift and lowers drag to yield a substantial overall improvement in efficiency. Ground effect works on both water and land, but you can take advantage of it for only so long on land before your aircraft runs into something. Which is why oceans are the ideal place for these aircraft—or ships, depending on your perspective.

During the late 1980s, the Soviets (and later the Russians) leveraged ground effect in the design of a handful of awesomely bizarre ships and aircraft. There’s the VVA-14, which was also an airplane, along with the vehicle shown in DARPA’s video above, the Lun-class ekranoplan, which operated until the late 1990s. The video clip really does not do this thing justice, so here’s a better picture, taken a couple of years ago:

[Image: Oblique overhead view of a huge grey seaplane on the water. Credit: Instagram]

The Lun (only one was ever made) had a wingspan of 44 meters and was powered by eight turbojet engines. It flew about 4 meters above the water at speeds of up to 550 kilometers per hour, and could transport almost 100,000 kilograms of cargo for 2,000 km. It was based on an earlier, even larger prototype (the largest aircraft in the world at the time) that the CIA spotted in satellite images in 1967 and which seems to have seriously freaked them out. It was nicknamed the Caspian Sea Monster, and it wasn’t until the 1980s that the West understood what it was and how it worked.

In the mid 1990s, DARPA itself took a serious look at a stupendously large ground-effect vehicle of its own, the Aerocon Dash 1.6 wingship. The concept image below is of a 4.5-million-kg vehicle, 175 meters long with a 100-meter wingspan, powered by 20 (!) jet engines:

[Image: A black-and-white wireframe drawing of a huge streamlined aircraft. Credit: Wikipedia]

With a range of almost 20,000 km at over 700 km/h, the wingship could have carried 3,000 passengers or 1.4 million kg of cargo. By 1994, though, DARPA had decided that the potential billion-dollar project to build a wingship like this was too risky, and canceled the whole thing.

[Image: A concept image of a massive grey seaplane skimming over the ocean]

Less than 10 years later, Boeing’s Phantom Works started exploring an enormous ground-effect aircraft, the Pelican Ultra Large Transport Aircraft. The Pelican would have been even larger than the Aerocon wingship, with a wingspan of 152 meters and a payload of 1.2 million kg—that’s about 178 shipping containers’ worth. Unlike the wingship, the Pelican would take advantage of ground effect to boost efficiency only in transit above water, but would otherwise use runways like a normal aircraft and be able to reach flight altitudes of 7,500 meters. Operating as a traditional aircraft and with an optimal payload, the Pelican would have a range of about 12,000 km. In ground effect, however, the range would have increased to 18,500 km, illustrating the appeal of designs like these. But Boeing dropped the project in 2005 to focus on lower cost, less risky options.

We’d be remiss if we didn’t at least briefly mention two other massive aircraft: the H-4 Hercules, the cargo seaplane built by Hughes Aircraft Co. in the 1940s, and the Stratolaunch carrier aircraft, which features a twin-fuselage configuration that DARPA seems to be favoring in its concept video for some reason.

From the sound of DARPA’s announcement, they’re looking for something a bit more like the Pelican than the Aerocon Dash or the Lun. DARPA wants the Liberty Lifter to be able to sustain flight out of ground effect if necessary, although it’s expected to spend most of its time over water for efficiency. It won’t use runways on land at all, though, and should be able to stay out on the water for 4 to 6 weeks at a time, operating even in rough seas—a significant challenge for ground-effect aircraft.

DARPA is looking for an operational range of 7,500 km, with a maximum payload of at least 90,000 kg, including the ability to launch and recover amphibious vehicles. The hardest thing DARPA is asking for could be that, unlike most other X-planes, the Liberty Lifter should incorporate a “low cost design and construction philosophy” inspired by the mass-produced Liberty ships of World War II.

With US $15 million to be awarded to up to two Liberty Lifter concepts, DARPA is hoping that at least one of those concepts will pass a system-level critical design review in 2025. If everything goes well after that, the first flight of a full-scale prototype vehicle could happen as early as 2027.

This article appears in the September 2022 print issue as “DARPA Reincarnates Soviet-Era Sea Monster.”


Match ID: 85 Score: 2.86 source: spectrum.ieee.org age: 194 days
qualifiers: 2.86 russia

U.N. Kills Any Plans to Use Mercury as a Rocket Propellant
Tue, 19 Apr 2022 18:00:01 +0000


A recent United Nations provision has banned the use of mercury in spacecraft propellant. Although no private company has actually used mercury propellant in a launched spacecraft, the possibility was alarming enough—and the dangers extreme enough—that the ban was enacted just a few years after one U.S.-based startup began toying with the idea. Had the company gone through with its intention to sell mercury propellant thrusters to some of the companies building massive satellite constellations over the coming decade, it would have resulted in Earth’s upper atmosphere being laced with mercury.

Mercury is a neurotoxin. It’s also bio-accumulative, which means it’s absorbed by the body at a faster rate than the body can remove it. The most common way to get mercury poisoning is through eating contaminated seafood. “It’s pretty nasty,” says Michael Bender, the international coordinator of the Zero Mercury Working Group (ZMWG). “Which is why this is one of the very few instances where the governments of the world came together pretty much unanimously and ratified a treaty.”

Bender is referring to the 2013 Minamata Convention on Mercury, a U.N. treaty named for a city in Japan whose residents suffered from mercury poisoning from a nearby chemical factory for decades. Because mercury pollutants easily find their way into the oceans and the atmosphere, it’s virtually impossible for one country to prevent mercury poisoning within its borders. “Mercury—it’s an intercontinental pollutant,” Bender says. “So it required a global treaty.”

Today, the only remaining permitted uses for mercury are in fluorescent lighting and dental amalgams, and even those are being phased out. Mercury is otherwise found as a by-product of other processes, such as the burning of coal. But then a company hit on the idea to use it as a spacecraft propellant.

In 2018, an employee at Apollo Fusion approached the Public Employees for Environmental Responsibility (PEER), a nonprofit that investigates environmental misconduct in the United States. The employee—who has remained anonymous—alleged that the Mountain View, Calif.–based space startup was planning to build and sell thrusters that used mercury propellant to multiple companies building low Earth orbit (LEO) satellite constellations.

Four industry insiders ultimately confirmed that Apollo Fusion was building thrusters that utilized mercury propellant. Apollo Fusion, which was acquired by rocket manufacturing startup Astra in June 2021, insisted that the composition of its propellant mixture should be considered confidential information. The company withdrew its plans for a mercury propellant in April 2021. Astra declined to respond to a request for comment for this story.

Apollo Fusion wasn’t the first to consider using mercury as a propellant. NASA originally tested it in the 1960s and 1970s with two Space Electric Rocket Tests (SERT), one of which was sent into orbit in 1970. Although the tests demonstrated mercury’s effectiveness as a propellant, the same concerns over the element’s toxicity that have seen it banned in many other industries halted its use by the space agency as well.

“I think it just sort of fell off a lot of folks’ radars,” says Kevin Bell, the staff counsel for PEER. “And then somebody just resurrected the research on it and said, ‘Hey, other than the environmental impact, this was a pretty good idea.’ It would give you a competitive advantage in what I imagine is a pretty tight, competitive market.”

That’s presumably why Apollo Fusion was keen on using it in their thrusters. Apollo Fusion as a startup emerged more or less simultaneously with the rise of massive LEO constellations that use hundreds or thousands of satellites in orbits below 2,000 kilometers to provide continual low-latency coverage. Finding a slightly cheaper, more efficient propellant for one large geostationary satellite doesn’t move the needle much. But doing the same for thousands of satellites that need to be replaced every several years? That’s a much more noticeable discount.

Were it not for mercury’s extreme toxicity, it would actually make an extremely attractive propellant. Apollo Fusion wanted to use a type of ion thruster called a Hall-effect thruster. Ion thrusters strip electrons from the atoms that make up a liquid or gaseous propellant, and then an electric field pushes the resultant ions away from the spacecraft, generating a modest thrust in the opposite direction. The physics of rocket engines means that the performance of these engines increases with the mass of the ion that you can accelerate.

Mercury is heavier than either xenon or krypton, the most commonly used propellants, meaning more thrust per expelled ion. It’s also liquid at room temperature, making it efficient to store and use. And it’s cheap—there’s not a lot of competition with anyone looking to buy mercury.
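A back-of-envelope derivation shows why a heavier ion helps (a sketch assuming an idealized electrostatic thruster in which each singly charged ion of mass m falls through the same accelerating potential V):

\[ \tfrac{1}{2} m v^{2} = qV \;\Rightarrow\; v = \sqrt{\tfrac{2qV}{m}}, \qquad p = m v = \sqrt{2qVm}. \]

The momentum imparted per expelled ion grows as the square root of the ion mass, which is why a heavy atom such as mercury looks attractive next to xenon or krypton at a fixed accelerating voltage.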

Bender says that ZMWG, alongside PEER, caught wind of Apollo Fusion marketing its mercury-based thrusters to at least three companies deploying LEO constellations—OneWeb, Planet Labs, and SpaceX. Planet Labs, an Earth-imaging company, has at least 200 CubeSats in low Earth orbit. OneWeb and SpaceX, both wireless-communication providers, have many more. OneWeb plans to have nearly 650 satellites in orbit by the end of 2022. SpaceX already has nearly 1,500 active satellites aloft in its Starlink constellation, with an eye toward deploying as many as 30,000 satellites before its constellation is complete. Other constellations, like Amazon’s Kuiper constellation, are also planning to deploy thousands of satellites.

In 2019, a group of researchers in Italy and the United States estimated how much of the mercury used in spacecraft propellant might find its way back into Earth’s atmosphere. They figured that a hypothetical LEO constellation of 2,000 satellites, each carrying 100 kilograms of propellant, would emit 20 tonnes of mercury every year over the course of a 10-year life span. Three quarters of that mercury, the researchers suggested, would eventually wind up in the oceans.
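The arithmetic behind that figure follows directly from the numbers quoted above:

\[ 2000 \times 100\ \text{kg} = 2\times 10^{5}\ \text{kg} = 200\ \text{tonnes}, \qquad \frac{200\ \text{tonnes}}{10\ \text{years}} = 20\ \text{tonnes per year}. \]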

That amounts to 1 percent of global mercury emissions from a constellation only a fraction of the size of the one planned by SpaceX alone. And if multiple constellations adopted the technology, they would represent a significant percentage of global mercury emissions—especially, the researchers warned, as other uses of mercury are phased out as planned in the years ahead.

Fortunately, it’s unlikely that any mercury propellant thrusters will even get off the ground. Prior to the fourth meeting of the Minamata Convention, Canada, the European Union, and Norway highlighted the dangers of mercury propellant, alongside ZMWG. The provision to ban mercury usage in satellites was passed on 26 March 2022.

The question now is enforcement. “Obviously, there aren’t any U.N. peacekeepers going into space to shoot down” mercury-based satellites, says Bell. But the 137 countries, including the United States, who are party to the convention have pledged to adhere to its provisions—including the propellant ban.

The United States is notable in that list because as Bender explains, it did not ratify the Minamata Convention via the U.S. Senate but instead deposited with the U.N. an instrument of acceptance. In a 7 November 2013 statement (about one month after the original Minamata Convention was adopted), the U.S. State Department said the country would be able to fulfill its obligations “under existing legislative and regulatory authority.”

Bender says the difference is “weedy” but that this appears to mean that the U.S. government has agreed to adhere to the Minamata Convention’s provisions because it already has similar laws on the books. Except there is still no existing U.S. law or regulation banning mercury propellant. For Bender, that creates some uncertainty around compliance when the provision goes into force in 2025.

Still, with a U.S. company being the first startup to toy with mercury propellant, it might be ideal to have a stronger U.S. ratification of the Minamata Convention before another company hits on the same idea. “There will always be market incentives to cut corners and do something more dangerously,” Bell says.

Update 19 April 2022: In an email, a spokesperson for Astra stated that the company's propulsion system, the Astra Spacecraft Engine, does not use mercury. The spokesperson also stated that Astra has no plans to use mercury propellant and that the company does not have anything in orbit that uses mercury.

Updated 20 April 2022 to clarify that Apollo Fusion was building thrusters that used mercury, not that they had actually used them.


Match ID: 86 Score: 2.86 source: spectrum.ieee.org age: 224 days
qualifiers: 2.86 italy

New Records for the Biggest and Smallest AI Computers
Thu, 17 Nov 2022 16:28:08 +0000


The machine-learning consortium MLCommons released the latest set of benchmark results last week, offering a glimpse at the capabilities of chips new and old as they tackled executing lightweight AI on the tiniest systems and training neural networks at both server and supercomputer scales. The benchmark tests saw the debut of new chips from Intel and Nvidia, speed boosts from software improvements, and predictions that further software work will keep accelerating the new chips in the years after their debut.


Training Servers

Training AI has been a problem that’s driven billions of dollars in investment, and it seems to be paying off. “A few years ago we were talking about training these networks in days or weeks, now we’re talking about minutes,” says Dave Salvator, director of product marketing at Nvidia.

There are eight benchmarks in the MLPerf training suite, but here I’m showing results from just two—image classification and natural-language processing—because although they don’t give a complete picture, they’re illustrative of what’s happening. Not every company puts up benchmark results every time; in the past, systems from Baidu, Google, Graphcore, and Qualcomm have made marks, but none of these were on the most recent list. And there are companies whose goal is to train the very biggest neural networks, such as Cerebras and SambaNova, that have never participated.

Another note about the results I’m showing—they are incomplete. To keep eye-glazing to a minimum, I’ve listed only the fastest system of each configuration. There are four categories in the main “closed” contest: cloud (self-evident), on premises (systems you could buy and install in-house right now), preview (systems you can buy soon but not now), and R&D (interesting but odd, so I excluded them). I then listed the fastest training result for each category for each configuration—the number of accelerators in a computer. If you want to see the complete list, it’s at the MLCommons website.

A casual glance shows that machine-learning training is still very much Nvidia’s house. It can bring a supercomputer-scale number of GPUs to the party to smash through training problems in mere seconds. Its A100 GPUs have dominated the MLPerf list for several iterations now, and it powers Microsoft’s Azure cloud AI offerings as well as systems large and small from partners including Dell, HPE, and Fujitsu. But even among the A100 gang there’s real competition, particularly between Dell and HPE.

But perhaps more important was Azure’s standing. On image classification, the cloud systems were essentially a match for the best A100 on-premises computers. The results strengthen Microsoft’s case that renting resources in the cloud is as good as buying your own. And that case might be even stronger soon. This week Nvidia and Microsoft announced a multiyear collaboration that would see the inclusion of Nvidia’s upcoming GPU, the H100, in the Azure cloud.

This was the first peek at training abilities for the H100. And Nvidia’s Dave Salvator emphasized how much progress happens—largely due to software improvements—in the years after a new chip comes out. On a per-chip basis, the A100 delivers 2.5 times the average performance today versus its first run at the MLPerf benchmarks in 2020. Compared to A100’s debut scores, H100 delivered 6.7 times the speed. But compared to A100 with today’s software, the gain is only 2.6-fold.
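Those figures are mutually consistent: dividing the H100’s gain over the A100’s debut score by the A100’s own software-driven improvement leaves the hardware-to-hardware gap,

\[ \frac{6.7}{2.5} \approx 2.7, \]

in line with the roughly 2.6-fold advantage quoted over an A100 running today’s software.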

In a way, H100 seems a bit overpowered for the MLPerf benchmarks, tearing through most of them in minutes using a fraction of the A100 hardware needed to match it. And in truth, it is meant for bigger things. “H100 is our solution for the most advanced models where we get into the millions, even billions of hyperparameters,” says Salvator.

Salvator says a lot of the gain is from the H100’s “transformer engine.” Essentially, it’s the intelligent use of low-precision—efficient but less accurate—computations whenever possible. The scheme is particularly designed for neural networks called transformers, of which the natural language processing benchmark BERT is an example. Transformers are in the works for many other machine learning tasks. “Transformer-based networks have been literally transformative to AI,” says Salvator. “It’s a horrible pun.”

Memory is a bottleneck for all sorts of AI, but it’s particularly limiting in BERT and other transformer models. Such neural networks rely on a quality called “attention.” You can think of it as how many words a language processor is aware of at once. It doesn’t scale up well, largely because it leads to a huge increase in writing to system memory. Earlier this year Hazy Research (the name for Chris Re’s lab at Stanford) deployed an algorithm to an Azure cloud system that shaved 10 percent of the training time off Microsoft’s best effort. For this round, Azure and Hazy Research worked together to demonstrate the algorithm—called Flash Attention.
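As a rough illustration of why attention is memory-hungry, here is a minimal sketch of plain scaled dot-product attention in Python with NumPy. It is not Flash Attention itself, and the shapes are illustrative assumptions.

import numpy as np

def naive_attention(Q, K, V):
    """Plain scaled dot-product attention for one head; Q, K, V are (n, d) arrays."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (n, n) score matrix: the memory hot spot
    scores -= scores.max(axis=-1, keepdims=True)     # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # (n, d) output

# Doubling the sequence length n quadruples the (n, n) score matrix that must be
# written to memory; Flash Attention-style kernels tile the computation so the
# full matrix is never materialized.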

Both the image-classification and natural-language-processing tables show Intel’s competitive position. The company showed results for the Habana Gaudi2, its second generation AI accelerator, and the Sapphire Rapids Xeon CPU, which will be commercially available in the coming months. For the latter, the company was out to prove that you can do a lot of machine-learning training without a GPU.

A setup with 32 CPUs landed well behind a Microsoft Azure cloud-based system with only four GPUs on object recognition, but it still finished in less than an hour and a half, and for natural-language processing, it nearly matched that Azure system. In fact, none of the training took longer than 90 minutes, even on much more modest CPU-only computers.

“This is for customers for whom training is part of the workload, but it’s not the workload,” says Jordan Plawner, an Intel senior director and AI product manager. Intel is reasoning that if a customer is retraining only once a week, whether the work takes 30 minutes or 5 minutes is of too little importance for them to spend on a GPU accelerator they don’t need for the rest of the week.

Habana Gaudi2 is a different story. As the company’s dedicated machine-learning accelerator, the 7-nanometer chip goes up against Nvidia’s A100 (another 7-nm chip) and soon will face the 5-nm H100. In that light, it performed well on certain tests. On image classification, an eight-chip system landed only a couple of minutes behind an eight-chip H100. But the gap was much wider with the H100 at the natural-language-processing task, though it still narrowly bested an equal-size and Hazy-Research-enhanced A100 system.

“We’re not done with Gaudi 2,” says Habana’s Eitan Medina. Like others, Habana is hoping to speed learning by strategically using low-precision computations on certain layers of neural networks. The chip has 8-bit floating-point capabilities, but so far the smallest precision the company has engaged on the chip for MLPerf training purposes is bfloat16.


Training Supercomputers

MLCommons released results for training high-performance computers—supercomputers and other big systems—at the same time as those for training servers. The HPC benchmarks are not as established and have fewer participants, but they still give a snapshot of how machine learning is done in the supercomputing space and what the goals are. There are three benchmarks: CosmoFlow estimates physical quantities from cosmological image data; DeepCAM spots hurricanes and atmospheric rivers in climate simulation data; and OpenCatalyst predicts the energy levels of molecular configurations.

There are two ways to measure systems on these benchmarks. One is to run a number of instances of the same neural network on the supercomputer, and the other is to just throw a bunch of resources at a single instance of the problem and see how long it takes. The table below is the latter and just for CosmoFlow, because it’s much simpler to read. (Again, feel free to view the whole schemozzle at MLCommons.)

The CosmoFlow results show four supercomputers powered by as many different types of CPU architectures and two types of GPU. Three of the four were accelerated by Nvidia GPUs, but Fugaku, the second most powerful computer in the world, used only its own custom-built processor, the Fujitsu A64FX.

The MLPerf HPC benchmarks came out only the week before Supercomputing 2022, in Dallas, one of the two conferences at which new Top500 rankings of supercomputers are announced.

A separate benchmark for supercomputing AI has also been developed. Instead of training particular neural networks, it solves “a system of linear equations using novel, mixed-precision algorithms that exploit modern hardware.” Although results from the two benchmarks don’t line up, there is overlap between the HPL-MxP list and the CosmoFlow results, including Nvidia’s Selene, Riken’s Fugaku, and Germany’s JUWELS.


Tiny ML systems

The latest addition to the MLPerf effort is a suite of benchmarks designed to test the speed and energy efficiency of microcontrollers and other small chips that execute neural networks that do things like spotting keywords and other low-power, always-on tasks. MLPerf Tiny, as it’s called, is too new for real trends to have emerged in the data. But the results released so far show a couple of standouts. The table here shows the fastest “visual wakewords” results for each type of processor, and shows that Syntiant and Greenwave Technologies have an edge over the competition.


Match ID: 87 Score: 2.14 source: spectrum.ieee.org age: 12 days
qualifiers: 2.14 energy

What is COP27 and why is it important?
Mon, 07 Nov 2022 13:25:26 GMT
This year's climate summit comes amid spiralling energy costs and diplomatic tensions.
Match ID: 88 Score: 2.14 source: www.bbc.co.uk age: 22 days
qualifiers: 2.14 energy

How gas is being rebranded as green – video
Thu, 03 Nov 2022 09:05:21 GMT

Is natural gas renewable? Is it a fossil fuel? A casual google search for natural gas gives the impression that these questions are somehow up for debate. And while natural gas has helped reduce carbon emissions as it was widely adopted as a replacement for coal, it is now up against zero-emission energy such as wind and solar. So how did natural gas end up in the same bracket as renewables? Josh Toussaint-Strauss explores the lengths fossil fuel companies have gone to in order to try to convince consumers, voters and lawmakers that natural gas is somehow a clean energy source

Continue reading...
Match ID: 89 Score: 2.14 source: www.theguardian.com age: 26 days
qualifiers: 2.14 energy

How to cut your energy bills
Thu, 20 Oct 2022 11:48:32 GMT
Energy bills have gone up, but there are some easy ways households can reduce costs.
Match ID: 90 Score: 2.14 source: www.bbc.co.uk age: 40 days
qualifiers: 2.14 energy

Dow, S&P 500 and Nasdaq snap 3-session skid as stocks eke out gains
Wed, 24 Aug 2022 16:01:33 -0500
U.S. stocks finished modestly higher Wednesday, with all three major stock benchmarks ending a 3-session skid, as investors picked up shares after a sharp market selloff earlier this week. The Dow Jones Industrial Average rose about 61 points, or 0.2%, ending near 32,970, while the S&P 500 index closed up 0.3% and the Nasdaq Composite Index advanced 0.4%. Stocks booked modest gains as investors remained focused on the Federal Reserve's inflation fight and Fed Chairman Jerome Powell's speech at the Jackson Hole, Wyo., symposium on Friday. Recession worries also were in focus, with pending home sales falling in July, a sign that rate hikes have been helping cool demand, even though shelter costs, specifically rents, have contributed to high U.S. inflation. Benchmark lending rates climbed Wednesday, with the 10-year Treasury yield rising to 3.105%, the highest since June 28, according to Dow Jones Market Data based on 3 p.m. Eastern levels. The S&P 500's energy sector helped lead the way higher, up 1.2%, while financials rose 0.5%, according to FactSet.
Match ID: 91 Score: 2.14 source: www.marketwatch.com age: 97 days
qualifiers: 2.14 energy

Solar-to-Jet-Fuel System Readies for Takeoff
Wed, 03 Aug 2022 17:00:00 +0000


As climate change edges from crisis to emergency, the aviation sector looks set to miss its 2050 goal of net-zero emissions. In the five years preceding the pandemic, the top four U.S. airlines—American, Delta, Southwest, and United—saw a 15 percent increase in the use of jet fuel. Despite continual improvements in engine efficiencies, that number is projected to keep rising.

A glimmer of hope, however, comes from solar fuels. For the first time, scientists and engineers at the Swiss Federal Institute of Technology (ETH) in Zurich have reported a successful demonstration of an integrated fuel-production plant for solar kerosene. Using concentrated solar energy, they were able to produce kerosene from water vapor and carbon dioxide directly from air. Fuel thus produced is a drop-in alternative to fossil-derived fuels and can be used with existing storage and distribution infrastructures, and engines.

Fuels derived from synthesis gas (or syngas)—an intermediate product that is a specific mixture of carbon monoxide and hydrogen—are a known alternative to conventional, fossil-derived fuels. The liquid fuel is produced from syngas by Fischer-Tropsch (FT) synthesis, in which chemical reactions convert the carbon monoxide and hydrogen into hydrocarbons. The team of researchers at ETH found that a solar-driven thermochemical method that splits water and carbon dioxide using a metal-oxide redox cycle can produce renewable syngas. They demonstrated the process in a rooftop solar refinery at the ETH Machine Laboratory in 2019.

[Image: Close-up of a spongy-looking material. Reticulated porous structure made of ceria used in the solar reactor to thermochemically split CO2 and H2O and produce syngas, a specific mixture of H2 and CO. Credit: ETH Zurich]

The current pilot-scale solar tower plant was set up at the IMDEA Energy Institute in Spain. It scales up the solar reactor of the 2019 experiment by a factor of 10, says Aldo Steinfeld, an engineering professor at ETH who led the study. The fuel plant brings together three subsystems—the solar tower concentrating facility, solar reactor, and gas-to-liquid unit.

First, a heliostat field made of mirrors that rotate to follow the sun concentrates solar irradiation into a reactor mounted on top of the tower. The reactor is a cavity receiver lined with reticulated porous ceramic structures made of ceria (or cerium(IV) oxide). Within the reactor, the concentrated sunlight creates a high-temperature environment of about 1,500 °C which is hot enough to split captured carbon dioxide and water from the atmosphere to produce syngas. Finally, the syngas is processed to kerosene in the gas-to-liquid unit. A centralized control room operates the whole system.

Fuel produced using this method closes the fuel carbon cycle as it only produces as much carbon dioxide as has gone into its manufacture. “The present pilot fuel plant is still a demonstration facility for research purposes,” says Steinfeld, “but it is a fully integrated plant and uses a solar-tower configuration at a scale that is relevant for industrial implementation.”

“The solar reactor produced syngas with selectivity, purity, and quality suitable for FT synthesis,” the authors noted in their paper. They also reported good material stability for multiple consecutive cycles. They observed a value of 4.1 percent solar-to-syngas energy efficiency, which Steinfeld says is a record value for thermochemical fuel production, even though better efficiencies are required to make the technology economically competitive.

[Image: Schematic of the solar tower fuel plant. A heliostat field concentrates solar radiation onto a solar reactor mounted on top of the solar tower. The solar reactor co-splits water and carbon dioxide and produces a mixture of molecular hydrogen and carbon monoxide, which in turn is processed to drop-in fuels such as kerosene. Credit: ETH Zurich]

“The measured value of energy conversion efficiency was obtained without any implementation of heat recovery,” he says. The heat rejected during the redox cycle of the reactor accounted for more than 50 percent of the solar-energy input. “This fraction can be partially recovered via thermocline heat storage. Thermodynamic analyses indicate that sensible heat recovery could potentially boost the energy efficiency to values exceeding 20 percent.”

To do so, more work is needed to optimize the ceramic structures lining the reactor, something the ETH team is actively working on, by looking at 3D-printed structures for improved volumetric radiative absorption. “In addition, alternative material compositions, that is, perovskites or aluminates, may yield improved redox capacity, and consequently higher specific fuel output per mass of redox material,” Steinfeld adds.

The next challenge for the researchers, he says, is the scale-up of their technology for higher solar-radiative power inputs, possibly using an array of solar cavity-receiver modules on top of the solar tower.

To bring solar kerosene into the market, Steinfeld envisages a quota-based system. “Airlines and airports would be required to have a minimum share of sustainable aviation fuels in the total volume of jet fuel that they put in their aircraft,” he says. This is possible as solar kerosene can be mixed with fossil-based kerosene. This would start out small, as little as 1 or 2 percent, which would raise the total fuel costs at first, though minimally—adding “only a few euros to the cost of a typical flight,” as Steinfeld puts it.

Meanwhile, rising quotas would lead to investment, and to falling costs, eventually replacing fossil-derived kerosene with solar kerosene. “By the time solar jet fuel reaches 10 to 15 percent of the total jet-fuel volume, we ought to see the costs for solar kerosene nearing those of fossil-derived kerosene,” he adds.
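A tiny sketch shows how such a quota feeds through to blended fuel cost; the prices below are purely hypothetical placeholders for illustration, not figures from the article or from Steinfeld.

def blended_fuel_cost(quota, fossil_price, solar_price):
    """Per-litre cost of a blend containing a fraction `quota` (0..1) of solar kerosene."""
    return (1 - quota) * fossil_price + quota * solar_price

fossil = 0.80  # hypothetical price of fossil-derived kerosene, EUR per litre
solar = 4.00   # hypothetical early price of solar kerosene, EUR per litre
for q in (0.01, 0.02, 0.10, 0.15):
    print(f"{q:.0%} quota -> {blended_fuel_cost(q, fossil, solar):.3f} EUR/L")

At a 1 or 2 percent quota the blended price barely moves, matching the point about early quotas adding only a few euros to a flight; the penalty becomes material only as the quota, and with it the incentive to scale production, rises.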

However, we may not have to wait too long for flights to operate solely on solar fuel. A commercial spin-off of Steinfeld’s laboratory, Synhelion, is working on commissioning the first industrial-scale solar fuel plant in 2023. The company has also collaborated with the airline SWISS to conduct a flight solely using its solar kerosene.


Match ID: 92 Score: 2.14 source: spectrum.ieee.org age: 118 days
qualifiers: 2.14 energy

Inside the Universe Machine: The Webb Space Telescope’s Chilly Sun Shield
Thu, 07 Jul 2022 13:48:57 +0000


For a deep dive into the engineering behind the James Webb Space Telescope, see our collection of posts here.

When Apollo astronauts sent back the first pictures of Earth as a disk in space, the poet Archibald MacLeish wrote of it as “that bright loveliness in the eternal cold.” He was not far off. Deep space has a temperature of 2.7 kelvins—just 2.7 degrees above absolute zero.

If the James Webb Space Telescope is to work—looking so far out and therefore so far back in time that it can see the first galaxies forming after the big bang—it will have to image objects so faint that they barely stand out from the cold around them. The world will begin finding out how well the observatory works as soon as next week, when JWST is expected to release its first trove of scientific images and spectroscopic data.

So, for argument’s sake, let’s assume all indications so far do in fact point to a successful kickoff of the (hopefully long and storied) scientific data-gathering phase of Webb’s mission. How then did the engineers and designers of this massive telescope ever make it possible to cool the telescope down enough—all at a remove of nearly four times the distance from Earth to the moon—to possibly do its job?

After more than 25 years’ work and technological hurdles beyond counting, the Webb team has launched and stationed its mammoth observatory in solar orbit—and brought its instruments below 40 kelvins (-233 °C), cold enough to see the early universe more than 13.5 billion years ago. Remarkably, most of the cooling has been done passively, by shielding the telescope from the sun and letting physics take care of the rest.

“Webb is not just the product of a group of people. It’s not the product of some smart astronomers—Webb is truly the product of our entire world’s capability,” says Keith Parrish, a leader on the Webb team at NASA’s Goddard Space Flight Center in Maryland. “Taken as a whole, Webb is truly the result of our entire know-how of how to build complex machines.”

Parrish joined the project in 1997, ultimately becoming its commissioning manager through the years of design, assembly, testing, delay and, finally, launch on 25 December 2021. He says almost everything about it—its shape and location, the materials from which it’s made—was dictated by the need to have an observatory that would survive for years at supercold temperatures.

[Photo: The five-layered JWST sunshield being unfurled and inspected in a clean room. The layers of coated Kapton E never touch, minimizing the transmission of heat from one layer to the next. Credit: Alex Evers/Northrop Grumman]

The Webb is an infrared observatory for many reasons, not the least of which is that as the universe expands, the wavelength of light from distant objects is lengthened, causing dramatic redshift. Infrared is also good for seeing through cosmic dust and gas, and for imaging cold things such as comets, Kuiper Belt objects, and perhaps planets orbiting other stars.

But infrared radiation is often best measured as heat, which is why it’s important for the Webb to be so cold. If, like the Hubble Telescope, it were in low Earth orbit, and it had no shielding from the sun, most of its targets would be drowned out by the sun and ground, and by heat in the telescope itself.

“If my signal is heat—and infrared is heat—then what I can’t have is other heat sources that are noise in the system,” says Jim Flynn, the sunshield manager at Northrop Grumman, the prime contractor for the Webb.

So the Webb has been sent to circle a spot in space called L2, 1.5 million kilometers away, opposite the sun, one of the locations known as Lagrange points. These “L” points are where the gravity of Earth and the sun conspires to keep a spacecraft in a stable, relatively “fixed” position with respect to Earth as both make their way around the sun on the same 365.256-day orbit. It’s a good compromise: Earth is distant enough that it doesn’t interfere with observations, but close enough that communication with the spacecraft can be relatively fast. And since the ship isn’t flying from day to night and back on every orbit, its temperature is relatively stable. All it needs is a really, really good sunshade.

“Four [layers of sunshield] would have probably done the job. Five gave us a little bit of an insurance policy. I’d like to say it was way more sophisticated than that, but that’s really not what it was at all.”
—Keith Parrish, NASA Goddard Space Flight Center

“The engineering was pushed above and beyond to meet the scientific goals,” says Alexandra Lockwood, a project scientist at the Space Telescope Science Institute, which operates the Webb. “It is specifically designed the way that it is because they wanted to do intensive infrared science.”

It makes for an ungainly-looking ship in many renderings, with the telescope assembly, intentionally open to space to prevent heat buildup, attached to its silvery sunshield, about 14 meters wide and 21 meters long, with five layers of insulating film to keep the telescope in almost total darkness.

From its sunlit side the sunshield roughly resembles a kite. The elongated shape, engineers found, would be the most efficient way to keep the Webb’s optics out of the sun. They considered a square or octagon, but the final version covers more area without much more mass.

“It’s no larger than it needs to be to meet the science field-of-view requirements, and that unique kite shape is the result,” says Parrish. “Any larger than it is now, it just makes everything more complex.”

The shield’s five layers are made of Kapton E, a plastic film first developed by DuPont in the 1960s and used for spacecraft insulation and printed circuits. The layers are coated in aluminum and silicon. Each is thinner than a human hair. But engineers say they are, together, very effective in blocking the sun’s heat. The first layer reduces its strength by about an order of magnitude (or 90 percent), the second layer removes another order of magnitude, and so on. The layers never touch, and they’re slightly flared as one gets away from the center of the shield, so that heat will escape out the sides.

The result: Temperatures on the sunny side of the shield approach 360 K (87 °C), but on the dark side they’re below that all-important 40 K (-233 °C). Or, put another way: More than 200 kilowatts of solar energy fall on the first layer, but only 23 milliwatts make it all the way through the fifth.
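Taking the two quoted endpoint figures at face value, the cumulative attenuation works out to

\[ \frac{23\ \text{mW}}{200\ \text{kW}} = \frac{2.3\times 10^{-2}\ \text{W}}{2\times 10^{5}\ \text{W}} \approx 1.2\times 10^{-7}, \]

roughly seven orders of magnitude across the five layers, somewhat better than a strict factor of ten per layer.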

The Two Sides of the Webb Telescope


[Illustration: features of JWST’s sunshield]

Why five layers? There was a lot of computer modeling, but it was hard to simulate the shield’s thermal behavior before flight. “Four would have probably done the job. Five gave us a little bit of an insurance policy,” says Parrish. “I’d like to say it was way more sophisticated than that, but that’s really not what it was at all.”

The ability to cool the telescope naturally, first calculated in the 1980s to be possible, was a major advance. It meant the Webb would not have to rely on a heavy, complex cryogenic apparatus, with refrigerants that could leak and shorten the mission. Of its four main scientific instruments, only one, a midinfrared detector called MIRI, needs to be cooled to 6.7 K. It’s chilled by a multistage cryocooler, which pumps cold helium gas through pulse tubes to draw heat away from the instrument’s sensor. It uses the Joule-Thomson effect, reducing the temperature of the helium by making it expand after it’s forced through a 1-millimeter valve. Pressure comes from two pistons—the cryocooler system’s only moving parts—facing opposite directions so their movements will cancel each other out and not disturb observations.
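For readers who want the thermodynamic shorthand behind that valve trick: throttling cools a gas according to the Joule-Thomson coefficient (a standard relation stated here as a sketch, not a MIRI design figure),

\[ \mu_{\mathrm{JT}} = \left( \frac{\partial T}{\partial P} \right)_{H}, \]

and expansion at constant enthalpy lowers the temperature only where \( \mu_{\mathrm{JT}} > 0 \), which for helium is true only well below room temperature; this is why the cooler circulates already-cold helium before the final expansion through the 1-millimeter valve.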

Building the telescope proved immensely complicated; it fell years behind while its budget ballooned toward US $10 billion. The sunshield needed lengthy redesign after testing, when Kapton tore and fasteners came loose.

“We just bit off way more than we could chew,” Parrish says now. “That’s exactly what NASA should be doing. It should be pushing the envelope. The problem is that eventually Webb got too big to fail.”

But it’s finally deployed, sending data, and surprising engineers who expected at least some failures as it began to operate. Keith Parrish, his work done, is moving on to other projects at Goddard.

“I think Webb,” he says, “is just a great product of what it means to be an advanced civilization.”


Update: 26 July 2022: The story was updated to clarify that the gravity at Lagrange point L2 does not "cancel" (as the story had previously stated) but in fact adds to keep an object at L2 orbiting at the precise same orbital period as, in this case, the Earth—i.e. at 365.256 days.


Match ID: 93 Score: 2.14 source: spectrum.ieee.org age: 145 days
qualifiers: 2.14 energy

Inside the Universe Machine: The Webb Space Telescope’s Staggering Vision
Wed, 06 Jul 2022 13:00:00 +0000


For a deep dive into the engineering behind the James Webb Space Telescope, see our collection of posts here.

“Build something that will absolutely, positively work.” This was the mandate from NASA for designing and building the James Webb Space Telescope—at 6.5 meters wide the largest space telescope in history. Last December, JWST launched famously and successfully to its observing station out beyond the moon. And now according to NASA, as soon as next week, the JWST will at long last begin releasing scientific images and data.

Mark Kahan, on JWST’s product integrity team, recalls NASA’s engineering challenge as a call to arms for a worldwide team of thousands that set out to create one of the most ambitious scientific instruments in human history. Kahan—chief electro-optical systems engineer at Mountain View, Calif.–based Synopsys—and many others in JWST’s “pit crew” (as he calls the team) drew hard lessons from three decades ago, having helped repair another world-class space telescope with a debilitating case of flawed optics. Of course, the Hubble Space Telescope is in low Earth orbit, and so a special space-shuttle mission to install corrective optics (as happened in 1993) was entirely possible.

Not so with the JWST.

The meticulous care NASA demanded of JWST’s designers is all the more a necessity because Webb is well out of reach of repair crews. Its mission is to study the infrared universe, and that requires shielding the telescope and its sensors from both the heat of sunlight and the infrared glow of Earth. A good place to do that without getting too far from Earth is an empty patch of interplanetary space 1.5 million kilometers away (well beyond the moon’s orbit) near a spot physicists call the second Lagrange point, or L2.

The pit crew’s job was “down at the detail level, error checking every critical aspect of the optical design,” says Kahan. Having learned the hard way from Hubble, the crew insisted that every measurement on Webb’s optics be made in at least two different ways that could be checked and cross-checked. Diagnostics were built into the process, Kahan says, so that “you could look at them to see what to kick” to resolve any discrepancies. Their work had to be done on the ground, but their tests had to assess how the telescope would work in deep space at cryogenic temperatures.

Three New Technologies for the Main Mirror

Superficially, Webb follows the design of all large reflecting telescopes. A big mirror collects light from stars, galaxies, nebulae, planets, comets, and other astronomical objects—and then focuses those photons onto a smaller secondary mirror that sends it to a third mirror that then ultimately directs the light to instruments that record images and spectra.

Webb’s 6.5-meter primary mirror is the first segmented mirror to be launched into space. All the optics had to be made on the ground at room temperature but were deployed in space and operated at 30 to 55 degrees above absolute zero. “We had to develop three new technologies” to make it work, says Lee D. Feinberg of the NASA Goddard Space Flight Center, the optical telescope element manager for Webb for the past 20 years.

The longest wavelengths that Hubble has to contend with were 2.5 micrometers, whereas Webb is built to observe infrared light that stretches to 28 μm in wavelength. Compared with Hubble, whose primary mirror is a circle of an area 4.5 square meters, “[Webb’s primary mirror] had to be 25 square meters,” says Feinberg. Webb also “needed segmented mirrors that were lightweight, and its mass was a huge consideration,” he adds. No single-component mirror that could provide the required resolution would have fit on the Ariane 5 rocket that launched JWST. That meant the mirror would have to be made in pieces, assembled, folded, secured to withstand the stress of launch, then unfolded and deployed in space to create a surface that was within tens of nanometers of the shape specified by the designers.

[Image: The James Webb Space Telescope (left) and the Hubble Space Telescope side by side—with Hubble’s 2.4-meter-diameter mirror versus Webb’s array of hexagonal mirrors making a 6.5-meter-diameter light-collecting area. Credit: NASA Goddard Space Flight Center]

NASA and the U.S. Air Force, which has its own interests in large lightweight space mirrors for surveillance and focusing laser energy, teamed up to develop the technology. The two agencies narrowed eight submitted proposals down to two approaches for building JWST’s mirrors: one based on low-expansion glass made of a mixture of silicon and titanium dioxides similar to that used in Hubble and the other the light but highly toxic metal beryllium. The most crucial issue came down to how well the materials could withstand temperature changes from room temperature on the ground to around 50 K in space. Beryllium won because it could fully release stress after cooling without changing its shape, and it’s not vulnerable to the cracking that can occur in glass. The final beryllium mirror was a 6.5-meter array of 18 hexagonal beryllium mirrors, each weighing about 20 kilograms. The weight per unit area of JWST’s mirror was only 10 percent of that in Hubble. A 100-nanometer layer of pure gold makes the surface reflect 98 percent of incident light from JWST’s main observing band of 0.6 to 28.5 μm. “Pure silver has slightly higher reflectivity than pure gold, but gold is more robust,” says Feinberg. A thin layer of amorphous silica protects the metal film from surface damage.

In addition, a wavefront-sensing control system keeps mirror segment surfaces aligned to within tens of nanometers. Built on the ground, the system is expected to keep mirror alignment stabilized throughout the telescope’s operational life. A backplane kept at a temperature of 35 K holds all 2.4 tonnes of the telescope and instruments rock-steady to within 32 nm while maintaining them at cryogenic temperatures during observations.

[Photo: The JWST backplane, the “spine” that supports the entire hexagonal mirror structure and carries more than 2,400 kg of hardware, is readied for assembly to the rest of the telescope. Credit: NASA/Chris Gunn]

Hubble’s amazing, long-exposure images of distant galaxies are possible through the use of gyroscopes and reaction wheels. The gyroscopes are used to sense unwanted rotations, and reaction wheels are used to counteract them.

But the gyroscopes used on Hubble have had a bad track record and have had to be replaced repeatedly. Only three of Hubble’s six gyros remain operational today, and NASA has devised plans for operating with one or two gyros at reduced capability. Hubble also includes reaction wheels and magnetic torquers, used to maintain its orientation when needed or to point at different parts of the sky.

Webb uses reaction wheels similarly to turn across the sky, but instead of using mechanical gyros to sense direction, it uses hemispherical resonator gyroscopes, which have no moving parts. Webb also has a small fine-steering mirror in the optical path, which can tilt over an angle of just 5 arc seconds. Those very fine adjustments of the light path into the instruments keep the telescope on target. “It’s a really wonderful way to go,” says Feinberg, adding that it compensates for small amounts of jitter without having to move the whole 6-tonne observatory.

Instruments

Other optics distribute light from the fine-steering mirror among four instruments, two of which can observe simultaneously. Three instruments have sensors that observe wavelengths of 0.6 to 5 μm, which astronomers call the near-infrared. The fourth, called the Mid-InfraRed Instrument (MIRI), observes what astronomers call the mid-infrared spectrum, from 5 to 28.5 μm. Different instruments are needed because sensors and optics have limited wavelength ranges. (Optical engineers may blanch slightly at astronomers’ definitions of what constitutes the near- and mid-infrared wavelength ranges. These two groups simply have differing conventions for labeling the various regimes of the infrared spectrum.)

Mid-infrared wavelengths are crucial for observing young stars and planetary systems and the earliest galaxies, but they also pose some of the biggest engineering challenges. Namely, everything on Earth and planets out to Jupiter glow in the mid-infrared. So for JWST to observe distant astronomical objects, it must avoid recording extraneous mid-infrared noise from all the various sources inside the solar system. “I have spent my whole career building instruments for wavelengths of 5 μm and longer,” says MIRI instrument scientist Alistair Glasse of the Royal Observatory, in Edinburgh. “We’re always struggling against thermal background.”

Mountaintop telescopes can see the near-infrared, but observing the mid-infrared sky requires telescopes in space. However, the thermal radiation from Earth and its atmosphere can cloud their view, and so can the telescopes themselves unless they are cooled far below room temperature. An ample supply of liquid helium and an orbit far from Earth allowed the Spitzer Space Telescope’s primary observing mission to last for five years, but once the last of the cryogenic fluid evaporated in 2009, its observations were limited to wavelengths shorter than 5 μm.

Webb has an elaborate solar shield to block sunlight, and an orbit 1.5 million km from Earth that can keep the telescope below 55 K, but that’s not good enough for low-noise observations at wavelengths longer than 5 μm. The near-infrared instruments operate at 40 K to minimize thermal noise. But for observations out to 28.5 μm, MIRI uses a specially developed closed-cycle helium cryocooler to keep the instrument below 7 K. “We want to have sensitivity limited by the shot noise of astronomical sources,” says Glasse. (Shot noise occurs when optical or electrical signals are so feeble that each photon or electron constitutes a detectable peak.) That will make MIRI 1,000 times as sensitive in the mid-infrared as Spitzer.

Another challenge is the limited transparency of optical materials in the mid-infrared. “We use reflective optics wherever possible,” says Glasse, but they also pose problems, he adds. “Thermal contraction is a big deal,” he says, because the instrument was made at room temperature but is used at 7 K. To keep thermal changes uniform throughout MIRI, they made the whole structure of gold-coated aluminum lest other metals cause warping.
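To get a feel for why contraction matters, a rough back-of-the-envelope estimate helps (this is an illustration, not a MIRI figure): aluminum shrinks by roughly 0.4 percent when cooled from room temperature to liquid-helium temperatures, so a meter-scale structure contracts by several millimeters, vastly more than the micrometer-scale tolerances of an optical bench. A minimal sketch, assuming a hypothetical 1-meter bench:

```python
# Rough, illustrative estimate of cryogenic shrinkage for an aluminum structure.
# The ~0.4 percent integrated contraction of aluminum from ~293 K down to
# liquid-helium temperatures is a standard material property; the 1-meter
# length is a hypothetical bench dimension, not a MIRI measurement.
integrated_contraction = 0.00415   # fractional shrinkage, room temperature -> cryogenic
bench_length_m = 1.0               # assumed structure size (hypothetical)

shrinkage_mm = integrated_contraction * bench_length_m * 1000
print(f"A {bench_length_m:.0f} m aluminum structure shrinks by about {shrinkage_mm:.1f} mm when cooled")
# Millimeters of shrinkage dwarf the micrometer-scale alignment tolerances of the
# optics, which is why mixing metals with different contraction rates would warp them.
```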

Detectors are another problem. Webb’s near-infrared sensors use mercury cadmium telluride photodetectors with a resolution of 2,048 x 2,048 pixels. This resolution is widely used at wavelengths below 5 μm, but sensing at MIRI’s longer wavelengths required exotic detectors that offer only 1,024 x 1,024 pixels.

Glasse says commissioning “has gone incredibly well.” Although some stray light has been detected, he says, “we are fully expecting to meet all our science goals.”

NIRCam Aligns the Whole Telescope

The near-infrared detectors and optical materials used for observing at wavelengths shorter than 5 μm are much more mature than those for the mid-infrared, so the Near-Infrared Camera (NIRCam) does double duty by both recording images and aligning all the optics in the whole telescope. That alignment was the trickiest part of building the instrument, says NIRCam principal investigator Marcia Rieke of the University of Arizona.

Alignment means getting all the light collected by the primary mirror to the right place in the final image. That’s crucial for Webb, because its 18 separate segments have to overlay their images perfectly, and because all those segments were built on the ground at room temperature but operate at cryogenic temperatures in space, in zero gravity. When NASA recorded a test image of a single star after Webb first opened its primary mirror, it showed 18 separate bright spots, one from each segment. When alignment was completed on 11 March, the image from NIRCam showed a single star with six spikes caused by diffraction.

Image of a star with six-pointed spikes caused by diffraction Even when performing instrumental calibration tasks, JWST couldn’t help but showcase its stunning sensitivity to the infrared sky. The central star is what telescope technicians used to align JWST’s mirrors. But notice the distant galaxies and stars that photobombed the image too! NASA/STScI

Building a separate alignment system would have added to both the weight and cost of Webb, Rieke realized, and in the original 1995 plan for the telescope she proposed designing NIRCam so it could align the telescope optics once it was up in space as well as record images. “The only real compromise was that it required NIRCam to have exquisite image quality,” says Rieke wryly. From a scientific point of view, she adds, using the instrument to align the telescope optics “is great because you know you’re going to have good image quality and it’s going to be aligned with you.” Alignment might be just a tiny bit off for other instruments. In the end, it took a team at Lockheed Martin to develop the computational tools to account for all the elements of thermal expansion.
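To make the image-stacking step concrete, here is a toy sketch of the idea: measure where each segment’s spot lands, then command each segment to move its spot onto a common target. The numbers, the simple averaging, and the proportional correction are all illustrative assumptions; JWST’s actual wavefront-sensing and control pipeline is far more sophisticated.

```python
import numpy as np

# Toy illustration of "image stacking": measure where each mirror segment's
# spot falls, then compute the correction that moves every spot onto a common
# target. Spot positions here are random numbers, and the simple proportional
# correction ignores real actuator geometry and wavefront sensing.
rng = np.random.default_rng(0)
spot_positions = rng.normal(0.0, 50.0, size=(18, 2))   # (x, y) offsets, arbitrary units

target = spot_positions.mean(axis=0)     # stack all spots at their common centroid
corrections = target - spot_positions    # per-segment tip/tilt command (illustrative)

for segment, (dx, dy) in enumerate(corrections, start=1):
    print(f"segment {segment:2d}: move spot by ({dx:+7.2f}, {dy:+7.2f})")
```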

Escalating costs and delays had troubled Webb for years. But for Feinberg, “commissioning has been a magical five months.” It began with the sight of sunlight hitting the mirrors. The segmented mirror deployed smoothly, and after the near-infrared cameras cooled, the mirrors focused one star into 18 spots, then aligned them to put the spots on top of each other. “Everything had to work to get it to [focus] that well,” he says. It’s been an intense time, but for Feinberg, a veteran of the Hubble repair mission, commissioning Webb was “a piece of cake.”

NASA announced that between May 23rd and 25th, one segment of the primary mirror had been dinged by a micrometeorite bigger than the agency had expected when it analyzed the potential results of such impacts. “Things do degrade over time,” Feinberg said. But he added that Webb had been engineered to minimize damage, and NASA said the event had not affected Webb’s operation schedule.

Corrections 26-28 July 2022: The story was updated a) to reflect the fact that the Lagrange point L2 where Webb now orbits is not that of the "Earth-moon system" (as the story had originally reported) but rather the Earth-sun system
and b) to correct misstatements in the original posting about Webb's hardware for controlling its orientation.

Corrections 12 Aug. 2022: Alistair Glasse's name was incorrectly spelled in a previous version of this story, as was NIRCam (which we'd spelled as NIRcam); Webb's tertiary mirror (we'd originally reported only its primary and secondary mirrors) was also called out in this version.

This article appears in the September 2022 print issue as “Inside the Universe Machine.”


Match ID: 94 Score: 2.14 source: spectrum.ieee.org age: 146 days
qualifiers: 2.14 energy

Tell us: how are you affected by the cost of living crisis?
Mon, 04 Jul 2022 15:00:44 GMT

We need your help to find out more about the impact of the cost of living emergency in the UK

With energy bills soaring and grocery price inflation reaching 11.6% last month, households across the country are having to adjust their budgets for food, electricity, gas, rent or mortgages, and spending on non-essentials.

Your experiences can help inform our reporting on the ongoing crisis – so if you’re struggling with food or energy costs yourself or your job gives you an insight into the different ways people are affected, we’d like to hear from you.

Continue reading...
Match ID: 95 Score: 2.14 source: www.theguardian.com age: 148 days
qualifiers: 2.14 energy

Andrew Ng: Unbiggen AI
Wed, 09 Feb 2022 15:31:12 +0000


Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.


The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?

Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.


It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI

I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.

Ng: I think so, yes.

Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”


How do you define data-centric AI, and why do you consider it a movement?

Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
—Andrew Ng

For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
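As a concrete, purely illustrative sketch of that kind of tooling, the snippet below flags images whose annotators disagree so they can be reviewed and relabeled first. The records and label names are made up; this is not Landing AI’s LandingLens.

```python
from collections import defaultdict

# Toy label-consistency check: flag images whose annotators disagree,
# so they can be reviewed and relabeled before retraining. Illustrative only;
# the records and defect classes below are hypothetical.
records = [
    ("img_001", "annotator_a", "scratch"),
    ("img_001", "annotator_b", "scratch"),
    ("img_002", "annotator_a", "pit_mark"),
    ("img_002", "annotator_b", "dent"),      # disagreement
    ("img_003", "annotator_a", "scratch"),
    ("img_003", "annotator_b", "blemish"),   # disagreement
]

labels_by_image = defaultdict(set)
for image_id, _, label in records:
    labels_by_image[image_id].add(label)

inconsistent = [img for img, labels in labels_by_image.items() if len(labels) > 1]
print("Review these first:", inconsistent)   # ['img_002', 'img_003']
```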

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
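One simple way to make that kind of targeted error analysis systematic is to slice the evaluation set by metadata tags and rank the slices by error rate. The sketch below is a minimal illustration with made-up data and tags, not a description of any particular tool.

```python
from collections import Counter

# Toy slice-based error analysis: rank evaluation slices (e.g., background
# conditions) by error rate to decide where collecting more data would help
# most. The examples and the "car_noise" tag are hypothetical.
eval_examples = [  # (slice_tag, was_prediction_correct)
    ("quiet",      True), ("quiet",      True), ("quiet",      True), ("quiet", False),
    ("car_noise",  False), ("car_noise", False), ("car_noise",  True),
    ("cafe_noise", True), ("cafe_noise", True), ("cafe_noise", False),
]

totals, errors = Counter(), Counter()
for tag, correct in eval_examples:
    totals[tag] += 1
    if not correct:
        errors[tag] += 1

for tag in sorted(totals, key=lambda t: errors[t] / totals[t], reverse=True):
    rate = errors[tag] / totals[tag]
    print(f"{tag:>10}: error rate {rate:.0%}  ({errors[tag]}/{totals[tag]})")
# The worst slice (here, car_noise) is where targeted data collection pays off.
```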


What about using synthetic data, is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
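A minimal sketch of that targeted approach, with hypothetical file names and a stand-in augmentation function (a real pipeline might call a simulator or a generative model instead):

```python
import random

# Toy targeted augmentation: generate extra training examples only for the
# category that error analysis flagged as weak ("pit_mark" here). Illustrative
# only -- file names are made up and augment() is a stand-in for real
# synthetic-data generation.
def augment(image):
    """Stand-in for a real augmentation or synthetic-generation step."""
    return f"{image}_aug{random.randint(0, 999):03d}"

dataset = [("casing_01.png", "scratch"), ("casing_02.png", "pit_mark"),
           ("casing_03.png", "dent"),    ("casing_04.png", "pit_mark")]

weak_class = "pit_mark"   # identified via error analysis
extra = [(augment(img), lbl)
         for img, lbl in dataset if lbl == weak_class
         for _ in range(5)]

print(f"Added {len(extra)} synthetic/augmented examples for '{weak_class}'")
```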

“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
—Andrew Ng

Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.


To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
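One common way to build such a drift flag, sketched below with made-up numbers, is to compare the distribution of a simple input statistic between a reference window and recent production data. The statistic, test, and threshold here are illustrative assumptions, not Landing AI’s method.

```python
import numpy as np
from scipy.stats import ks_2samp

# Toy data-drift flag: compare the distribution of a simple image statistic
# (mean brightness) at deployment time against recent production data, using
# a two-sample Kolmogorov-Smirnov test. Data and threshold are illustrative.
rng = np.random.default_rng(42)
reference_brightness = rng.normal(120, 10, size=2000)   # captured at deployment
recent_brightness = rng.normal(135, 10, size=500)       # e.g., factory lighting changed

stat, p_value = ks_2samp(reference_brightness, recent_brightness)
if p_value < 0.01:
    print(f"Possible data drift (KS statistic {stat:.2f}); review data and consider retraining")
```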

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”


Match ID: 96 Score: 2.14 source: spectrum.ieee.org age: 293 days
qualifiers: 2.14 energy

How AI Will Change Chip Design
Tue, 08 Feb 2022 14:00:01 +0000


The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

Portrait of a woman with blonde-red hair smiling at the camera Heather GorrMathWorks

Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
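As a minimal sketch of the surrogate-model workflow Gorr describes, with a stand-in “simulation” and a simple polynomial fit chosen purely for illustration (Gaussian processes and neural networks are common alternatives):

```python
import numpy as np

# Minimal surrogate-model sketch: fit a cheap regression model to a handful of
# runs of an expensive physics simulation, then use the surrogate for wide
# parameter sweeps. expensive_simulation() is a stand-in, not a real solver.
def expensive_simulation(x):
    """Stand-in for a slow physics-based model (e.g., hours per evaluation)."""
    return np.sin(3 * x) + 0.3 * x**2

# A few expensive runs...
x_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_simulation(x_train)

# ...fit a cheap surrogate (here, a degree-5 polynomial)...
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

# ...then sweep thousands of candidate designs at negligible cost.
x_sweep = np.linspace(0.0, 2.0, 10_000)
best = x_sweep[np.argmin(surrogate(x_sweep))]
print(f"Surrogate suggests a minimum near x = {best:.3f}")
```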

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

We’ve talked about the benefits. How about the drawbacks?

Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it's not going to be as accurate as that precise model that we’ve developed over the years.

Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It's a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

How can engineers use AI to better prepare and extract insights from hardware or sensor data?

Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.

One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
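As a small, generic illustration of the preprocessing Gorr mentions (made-up signals and rates, written in Python rather than MATLAB): resample two sensor streams onto a common timeline and inspect one of them in the frequency domain.

```python
import numpy as np
from scipy.signal import resample, welch

# Toy sensor preprocessing: align a slow stream with a fast one by resampling,
# then look at the fast stream's frequency content. Signals are synthetic.
fs_fast, fs_slow, duration = 1000, 400, 2.0            # Hz, Hz, seconds
t_fast = np.arange(0, duration, 1 / fs_fast)
t_slow = np.arange(0, duration, 1 / fs_slow)

vibration = np.sin(2 * np.pi * 60 * t_fast) + 0.1 * np.random.randn(t_fast.size)
temperature = 25 + 0.5 * np.sin(2 * np.pi * 0.2 * t_slow)

# Bring the slower stream up to the fast stream's length so samples align.
temperature_resampled = resample(temperature, t_fast.size)

# Inspect the vibration signal in the frequency domain.
freqs, power = welch(vibration, fs=fs_fast)
print(f"Dominant vibration frequency: ~{freqs[np.argmax(power)]:.0f} Hz")
```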

What should engineers and designers consider when using AI for chip design?

Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

How do you think AI will affect chip designers’ jobs?

Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

How do you envision the future of AI and chip design?

Gorr: It's very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.


Match ID: 97 Score: 2.14 source: spectrum.ieee.org age: 294 days
qualifiers: 2.14 energy

Outside the Box: After underperforming the stock market for years, alternative energy is red hot
Fri, 25 Sep 2020 14:39:17 GMT
Is clean tech finally a viable investment?
Match ID: 98 Score: 2.14 source: www.marketwatch.com age: 795 days
qualifiers: 2.14 energy

Stocks to Watch: Wisconsin Energy, Oracle, GE are stocks to watch
Mon, 23 Jun 2014 12:42:35 GMT
Wisconsin Energy Corp., Oracle Corp., and General Electric Co. may all see active trading after deals, or talk of deals.
Match ID: 99 Score: 2.14 source: www.marketwatch.com age: 3081 days
qualifiers: 2.14 energy

Industrial Functional Safety Training from UL Solutions
Wed, 16 Nov 2022 20:41:08 +0000


This is a sponsored article brought to you by UL Solutions.

Invest in building your team’s excellence with functional safety training and certification services from UL Solutions, a global safety science leader.

Our UL Certified Functional Safety Certification programs provide your team opportunities to learn about — or deepen their existing knowledge and skills in — functional safety to achieve professional credentials in this space.

We offer personnel certification at both the professional and expert levels in automotive, autonomous vehicles, electronics and semiconductors, machinery, industrial automation, and cybersecurity.

You can now register for any of the offerings listed below. All our instructor-led, virtual courses provide a deep dive into key functional safety standards.


IEC 61511

UL Certified Functional Safety Professional in IEC 61511 Class with Exam - Virtual

This three-day course provides a comprehensive overview of the IEC 61511:2016 and ANSI/ISA 61511:2018 standards for the process industry. Participants who complete all three days of training can take a two-hour certification exam on the morning of the fourth day. Those who pass the exam earn individual certification as a UL Certified Functional Safety Professional in IEC 61511 or UL-CFSP.

Purchase training→

IEC 61508

Functional Safety Overview and Designing Safety-Related Electronic Control Systems in Accordance with IEC 61508 Standard Class with Exam - Virtual (English)

This three-day course helps engineers, developers and managers successfully apply IEC 61508 to their safety-related electrical systems. IEC 61508 serves as the base functional safety standard for various industries, including process, nuclear and machinery, among others. This course includes a one-hour follow-up Q&A session (scheduled at a later date) with one of UL Solutions’ functional safety experts.

Purchase training→

UL 4600

UL Certified Autonomy Safety Professional Training in UL 4600 2nd Edition Class with Exam - Virtual (English)

This 2.5-day course highlights modern-day autonomous robotics, industrial automation, sensors and semi-automated technologies and how they can apply to safety. The course focuses on UL 4600, the Standard for Evaluation of Autonomous Products, and includes information on related safety standards.

Purchase training→

Functional Safety Training for Earth-Moving Machinery in Agricultural Tractor and Construction Control Systems Per ISO 25119, ISO 13849 and ISO 19014

UL Certified Functional Safety Professional Training in Agriculture and Construction Machinery Class with Exam - Virtual (English)

This 2.5-day course will cover functional safety standards and concepts related to agricultural and construction earth-moving machinery. Applicable standards covered in this training include the EU Machinery Directive; ISO 19014:2018, Earth-Moving Machinery — Functional Safety — Part 1: Methodology to Determine Safety-Related Parts of the Control System and Performance Requirements; and ISO 25119:2018, Tractors and Machinery for Agriculture and Forestry — Safety-Related Parts of Control Systems. UL Solutions’ experts will cover topics such as hazard identification and risk assessment per ISO 12100:2010, Safety of Machinery — General Principles for Design — Risk Assessment and Risk Reduction. Case studies on a range of topics, including motor drives and safety product life cycles, will also help provide examples of how the requirements and concepts of the standards apply.

Purchase training→

ISO 13849, IEC 62061, IEC 61800-5-2, 25119, and the EU Machinery Directive

UL Certified Functional Safety Professional Training in Machinery Class with Exam - Virtual (English)

This 2.5-day course is for engineers working on programmable machinery and control systems. The training course will cover functional safety standards and concepts related to the EU Machinery Directive, including ISO 13849, Safety of Machinery - Safety-Related Parts of Control Systems; IEC 61800-5-2, Adjustable Speed Electrical Power Drive Systems - Part 5-2: Safety Requirements - Functional; and IEC 62061, Safety of Machinery - Functional Safety of Safety-Related Electrical, Electronic and Programmable Electronic Control Systems.

Purchase training→


Match ID: 100 Score: 1.43 source: spectrum.ieee.org age: 13 days
qualifiers: 1.43 nuclear

NASA’s DART Mission Aims to Save the World
Fri, 23 Sep 2022 15:52:53 +0000


Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon.

Armageddon is big and noisy and stupid and shameless, and it’s going to be huge at the box office,” wrote Jay Carr of the Boston Globe.

Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live.

DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kilograms, hitting at 22,000 km/h, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs.

“Maybe once a century or so, there’ll be an asteroid sizeable enough that we’d like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of planetary defense officer at NASA.

“If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.”

So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone.

The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—near-Earth objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium.

An infographic showing the orientation of Didymos,  Dimorphos, DART, and LICIACube. The DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIACube cubesat will fly in formation to take images of the impact.Johns Hopkins APL/NASA

But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test.

NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’s speed by perhaps a few centimeters per second.
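For a sense of scale, a back-of-the-envelope momentum-conservation estimate is easy to write down. The spacecraft mass and impact speed come from the article; Dimorphos’s mass and the ejecta momentum-enhancement factor are assumptions, and the real change depends strongly on both:

```python
# Back-of-the-envelope momentum-transfer estimate (not a NASA figure).
# Spacecraft mass and speed are from the article; the asteroid mass and the
# ejecta "beta" factor are assumptions for illustration.
m_spacecraft_kg = 600.0
v_impact_ms = 22_000 / 3.6       # 22,000 km/h -> roughly 6,100 m/s
m_dimorphos_kg = 5e9             # assumed; order-of-magnitude guess
beta = 3.0                       # assumed ejecta enhancement (1 = no ejecta boost)

delta_v = beta * m_spacecraft_kg * v_impact_ms / m_dimorphos_kg
print(f"Estimated speed change: ~{delta_v * 1000:.1f} mm/s")
# A millimeter-per-second-scale nudge is tiny, but applied years in advance it
# shifts an asteroid's position enormously -- the point Lindley Johnson makes above.
```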

The DART spacecraft, a 1-meter cube with two long solar panels, is elegantly simple, equipped with a telescope called DRACO, hydrazine maneuvering thrusters, a xenon-fueled ion engine and a navigation system called SMART Nav. It was launched by a SpaceX rocket in November. About 4 hours and 90,000 km before the hoped-for impact, SMART Nav will take over control of the spacecraft, using optical images from the telescope. Didymos, the larger object, should be a point of light by then; Dimorphos, the intended target, will probably not appear as more than one pixel until about 50 minutes before impact. DART will send one image per second back to Earth, but the spacecraft is autonomous; signals from the ground, 38 light-seconds away, would be useless for steering as the ship races in.

A golden cubesat with a bright light and lines The DART spacecraft separated from its SpaceX Falcon 9 launch vehicle, 55 minutes after liftoff from Vandenberg Space Force Base, in California, 24 November 2021. In this image from the rocket, the spacecraft had not yet unfurled its solar panels.NASA

What’s more, nobody knows the shape or consistency of little Dimorphos. Is it a solid boulder or a loose cluster of rubble? Is it smooth or craggy, round or elongated? “We’re trying to hit the center,” says Evan Smith, the deputy mission systems engineer at the Johns Hopkins Applied Physics Laboratory, which is running DART. “We don’t want to overcorrect for some mountain or crater on one side that’s throwing an odd shadow or something.”

So on final approach, DART will cover 800 km without any steering. Thruster firings could blur the last images of Dimorphos’s surface, which scientists want to study. Impact should be imaged from about 50 km away by an Italian-made minisatellite, called LICIACube, which DART released two weeks ago.

“In the minutes following impact, I know everybody is going to be high-fiving on the engineering side,” said Tom Statler, DART’s program scientist at NASA, “but I’m going to be imagining all the cool stuff that is actually going on on the asteroid, with a crater being dug and ejecta being blasted off.”

There is, of course, a possibility that DART will miss, in which case there should be enough fuel on board to allow engineers to go after a backup target. But an advantage of the Didymos-Dimorphos pair is that it should help in calculating how much effect the impact had. Telescopes on Earth (plus the Hubble and Webb space telescopes) may struggle to measure infinitesimal changes in the orbit of Dimorphos around the sun; it should be easier to see how much its orbit around Didymos is affected. The simplest measurement may be of the changing brightness of the double asteroid, as Dimorphos moves in front of or behind its partner, perhaps more quickly or slowly than it did before impact.
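To see how a tiny speed change becomes a measurable timing shift, here is a rough small-perturbation estimate. The pre-impact period (about 11.9 hours) and separation (about 1.2 kilometers) are published approximations; the assumed speed change is illustrative:

```python
import math

# Rough relation between a small along-track speed change and the change in
# Dimorphos's orbital period around Didymos -- the quantity telescopes can time
# via brightness dips. This is a small-perturbation estimate, not mission analysis.
period_s = 11.9 * 3600          # pre-impact orbital period (approx.)
radius_m = 1190.0               # approximate orbital separation
v_orbit = 2 * math.pi * radius_m / period_s   # ~0.17 m/s circular speed

delta_v = 2.0e-3                # assumed 2 mm/s slowdown from the impact
# For a near-circular orbit, dT/T is about 3 * dv/v (Kepler's third law, small changes).
delta_T_min = 3 * period_s * (delta_v / v_orbit) / 60
print(f"A {delta_v*1000:.0f} mm/s change shifts the orbital period by ~{delta_T_min:.0f} minutes")
```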

“We are moving an asteroid,” said Statler. “We are changing the motion of a natural celestial body in space. Humanity’s never done that before.”


Match ID: 101 Score: 1.43 source: spectrum.ieee.org age: 67 days
qualifiers: 1.43 nuclear

Filter efficiency 86.990 (102 matches/784 results)


********** SPACE **********
return to top



The James Webb Space Telescope was a Career-Defining Project for Janet Barth
Mon, 28 Nov 2022 19:00:01 +0000


Janet Barth spent most of her career at the Goddard Space Flight Center, in Greenbelt, Md.—which put her in the middle of some of NASA’s most exciting projects of the past 40 years.

She joined the center as a co-op student and retired in 2014 as chief of its electrical engineering division. She had a hand in Hubble Space Telescope servicing missions, launching the Lunar Reconnaissance Orbiter and the Magnetospheric Multiscale mission, and developing the James Webb Space Telescope.


About Janet Barth


Employer: Miller Engineering and Research Corp.

Title: Advisory board member

Member grade: Life Fellow

Alma mater: University of Maryland in College Park

Barth, an IEEE Life Fellow, conducted pioneering work in analyzing the effects of cosmic rays and solar radiation on spacecraft observatories. Her tools and techniques are still used today. She also helped develop science requirements for NASA’s Living With a Star program, which studies the sun, magnetospheres, and planetary systems.

For her work, Barth was honored with this year’s IEEE Marie Sklodowska-Curie Award for “leadership of and contributions to the advancement of the design, building, deployment, and operation of capable, robust space systems.”

“I still tear up just thinking about it,” Barth says. “Receiving this award is humbling. Everyone at IEEE and Goddard who I worked with owns a piece of this award.”

From co-op hire to chief of NASA’s EE division

Barth initially attended the University of Michigan in Ann Arbor, to pursue a degree in biology, but she soon realized that it wasn’t a good fit for her. She transferred to the University of Maryland in College Park, and changed her major to applied mathematics.

She was accepted for a co-op position in 1978 at the Goddard center, which is about 9 kilometers from the university. Co-op jobs allow students to work at a company and gain experience while pursuing their degree.

“I was excited about using my analysis and math skills to enable new science at Goddard,” she says. She conducted research on radiation environments and their effects on electronic systems.

Goddard hired her after she graduated as a radiation and hardness assurance engineer. She helped ensure that the electronics and materials in space systems would perform as designed after being exposed to radiation in space.

Because of her expertise in space radiation, George Withbroe, director of the NASA Solar-Terrestrial Physics program (now its Heliophysics Division), asked her in 1999 to help write a funding proposal for a program he wanted to launch—which became Living With a Star. It received US $2 billion from the U.S. Congress and launched in 2001.

During her 12 years with the program, Barth helped write the architecture document, which she says became a seminal publication for the field of heliophysics (the study of the sun and how it influences space). The document outlines the program’s goals and objectives.

In 2001 she was selected to be project manager for a NASA test bed that aimed to understand how spacecraft are affected by their environment. The test bed, which collected data from space to predict how radiation might impact NASA missions, successfully completed its mission in 2020.

Barth reached the next rung on her career ladder in 2002, when she became one of the first female associate branch heads of engineering at Goddard. At the space center’s Flight Data Systems and Radiation Effects Branch, she led a team of engineers who designed flight computers and storage systems. Although it was a steep learning curve for her, she says, she enjoyed it. Three years later, she was heading the branch.

She got another promotion, in 2010, to chief of the electrical engineering division. As the Goddard Engineering Directorate’s first female division chief, she led a team of 270 employees who designed, built, and tested electronics and electrical systems for NASA instruments and spacecraft.

vintage photograph of woman smiling in group of 3 people Barth (left) and Moira Stanton at the 1997 RADiation and its Effects on Components and Systems Conference, held in Cannes, France. Barth and Stanton coauthored a poster paper and received the outstanding poster paper award.Janet Barth

Working on the James Webb Space Telescope

Throughout her career, Barth was involved in the development of the Webb space telescope. Whenever she thought that she was done with the massive project, she says with a laugh, her path would “intersect with Webb again.”

She first encountered the Webb project in the late 1990s, when she was asked to be on the initial study team for the telescope.

She wrote its space-environment specifications. After they were published in 1998, however, the team realized that there were several complex problems to solve with the telescope’s detectors. The Goddard team supported Matt Greenhouse, John C. Mather, and other engineers to work on the tricky issues. Greenhouse is a project scientist for the telescope’s science instrument payload. Mather won the 2006 Nobel Prize in Physics for discoveries supporting the Big Bang model.

Webb’s detectors absorb photons—light from far-away galaxies, stars, and planets—and convert them into electronic voltages. Barth and her team worked with Greenhouse and Mather to verify that the detectors would work while exposed to the radiation environment at the L2 Lagrangian point, one of the positions in space where human-sent objects tend to stay put.

Years later, when Barth was heading the Flight Data Systems and Radiation Effects branch, she oversaw the development of the telescope’s instrument command and data handling systems. Because of her important role, Barth’s name was written on the telescope’s instrument ICDH flight box.

When she became chief of Goddard’s electrical engineering division, she was assigned to the technical review panel for the telescope.

“At that point,” she says, “we focused on the mechanics of deployment and the risks that came with not being able to fully test it in the environment it would be launched and deployed in.”

She served on that panel until she retired. In 2019, five years after retiring, she joined the Miller Engineering and Research Corp. advisory board. The company, based in Pasadena, Md., manufactures parts for aerospace and aviation organizations.

“I really like the ethics of the company. They service science missions and crewed missions,” Barth says. “I went back to my roots, and that’s been really rewarding.”

The best things about being an IEEE member

Barth and her husband, Douglas, who is also an engineer, joined IEEE in 1989. She says they enjoy belonging to a “unique peer group.” She especially likes attending IEEE conferences, having access to journals, and being able to take continuing education courses and workshops, she says.

“I stay up to date on the advancements in science and engineering,” she says, “and going to conferences keeps me inspired and motivated in what I do.” The networking opportunities are “terrific,” she adds, and she’s been able to meet people from just about all engineering industries.

An active IEEE volunteer for more than 20 years, she is executive chairwoman of the IEEE Nuclear and Plasma Sciences Society’s Radiation Effects Steering Group, and she served as 2013–2014 president of the IEEE Nuclear and Plasma Sciences Society. She also is an associate editor for IEEE Transactions on Nuclear Science.

“IEEE has definitely benefited my career,” she says. “There’s no doubt about that.”


Match ID: 0 Score: 85.00 source: spectrum.ieee.org age: 1 day
qualifiers: 65.00 nasa, 20.00 planets

SpaceX fires up 11 engines as it prepares massive rocket for orbital test
Tue, 29 Nov 2022 22:24:01 +0000
Nothing blew up, which is always a win.
Match ID: 1 Score: 65.00 source: arstechnica.com age: 0 days
qualifiers: 65.00 nasa

NASA to Cancel GeoCarb Mission, Expands Greenhouse Gas Portfolio
Tue, 29 Nov 2022 11:34 EST
NASA announced Monday it intends to cancel development of its GeoCarb mission, and instead implement a plan for pursuing alternate options to measure and observe greenhouse gases.
Match ID: 2 Score: 65.00 source: www.nasa.gov age: 0 days
qualifiers: 65.00 nasa

NASA, ICON Advance Lunar Construction Technology for Moon Missions
Tue, 29 Nov 2022 08:45 EST
As NASA plans for long-term human exploration of the Moon under Artemis, new technologies are required to meet the unique challenges of living and working on another world.
Match ID: 3 Score: 65.00 source: www.nasa.gov age: 0 days
qualifiers: 65.00 nasa

Orion flies far beyond the Moon, returns an instantly iconic photo
Tue, 29 Nov 2022 13:42:24 +0000
"It’s really hard to articulate what the feeling is."
Match ID: 4 Score: 65.00 source: arstechnica.com age: 0 days
qualifiers: 65.00 nasa

Artemis: Nasa's Orion capsule breaks distance record
Mon, 28 Nov 2022 22:14:47 GMT
The spacecraft travels further from Earth than any previous vehicle built for astronauts.
Match ID: 5 Score: 65.00 source: www.bbc.co.uk age: 1 day
qualifiers: 65.00 nasa

ISS Daily Summary Report – 11/28/2022
Mon, 28 Nov 2022 16:00:08 +0000
SpaceX-26 (SpX-26) Launch and Dock: The SpX-26 Dragon spacecraft launched aboard a Falcon 9 rocket at 1:20 PM CT on Saturday, November 26th, and docked to the ISS the following morning at 6:39 AM CT. The spacecraft is carrying more than 7,700 pounds of research, supplies, and hardware to the ISS. Payloads: Faraday Research Facility-2: …
Match ID: 6 Score: 65.00 source: blogs.nasa.gov age: 1 day
qualifiers: 65.00 nasa

Moon Microscope, Solar Arrays Launch on NASA’s SpaceX Cargo Ship
Sat, 26 Nov 2022 14:15 EST
SpaceX’s 26th commercial resupply mission for NASA is on its way to the International Space Station.
Match ID: 7 Score: 55.71 source: www.nasa.gov age: 3 days
qualifiers: 55.71 nasa

The State of the Transistor in 3 Charts
Sat, 26 Nov 2022 16:00:01 +0000


The most obvious change in transistor technology in the last 75 years has been just how many we can make. Reducing the size of the device has been a titanic effort and a fantastically successful one, as these charts show. But size isn’t the only feature engineers have been improving.


In 1947, there was only one transistor. According to TechInsights’ forecast, the semiconductor industry is on track to produce almost 2 billion trillion (10²¹) devices this year. That’s more transistors than were cumulatively made in all the years prior to 2017. Behind that barely conceivable number is the continued reduction in the price of a transistor, as engineers have learned to integrate more and more of them into the same area of silicon.


Scaling down transistors in the 2D space of the plane of the silicon has been a smashing success: Transistor density in logic circuits has increased more than 600,000-fold since 1971. Reducing transistor size requires using shorter wavelengths of light, such as extreme ultraviolet, and other lithography tricks to shrink the space between transistor gates and between metal interconnects. Going forward, it’s the third dimension, where transistors will be built atop one another, that counts. This trend is more than a decade old in flash memory, but it’s still in the future for logic (see “Taking Moore’s Law to New Heights.”)


Perhaps the crowning achievement of all this effort is the ability to integrate millions, even billions, of transistors into some of the most complex systems on the planet: CPUs. Here’s a look at some of the high points along the way.

What Transistors Have Become


Besides making them tiny and numerous, engineers have devoted their efforts to enhancing the device’s other qualities. Here is a small sampling of what transistors have become in the last 75 years:


Icon of a series of circles.

Ephemeral:

Researchers in Illinois developed circuits that dissolve in the body using a combination of ultrathin silicon membranes, magnesium conductors, and magnesium oxide insulators. Five minutes in water was enough to turn the first generation to mush. But recently researchers used a more durable version to make temporary cardiac pacemakers that release an anti-inflammatory drug as they disappear.


An icon of lightning bolt over a circle.

Fast:

The first transistor was made for radio frequencies, but there are now devices that operate at about a billion times those frequencies. Engineers in South Korea and Japan reported the invention of an indium gallium arsenide high-electron mobility transistor, or HEMT, that reached a maximum frequency of 738 gigahertz. Seeking raw speed, engineers at Northrop Grumman made a HEMT that passed 1 terahertz.



An icon of an iron with a line underneath.

Flat:

Today’s (and yesterday’s) transistors depend on the semiconducting properties of bulk (3D) materials. Tomorrow’s devices might rely on 2D semiconductors, such as molybdenum disulfide and tungsten disulfide. These transistors might be built in the interconnect layers above a processor’s silicon, researchers say. So 2D semiconductors could help lead to 3D processors.


An icon of a circle with a series of lines on it

Flexible:

The world is not flat, and neither are the places transistors need to operate. Using indium gallium arsenide, engineers in South Korea recently made high-performance logic transistors on plastic that hardly suffered when bent around a radius of just 4 millimeters. And engineers in Illinois and England have made microcontrollers that are both affordable and bendable.



Icon of a eye with a question mark in the center.

Invisible:

When you need to hide your computing in plain sight, turn to transparent transistors. Researchers in Fuzhou, China, recently made a see-through analogue of flash memory using organic semiconductor thin-film transistors. And researchers in Japan and Malaysia produced transparent diamond devices capable of handling more than 1,000 volts.


Icon of a brain made out of square icons

Mnemonic:

NAND flash memory cells can store multiple bits in a single device. Those on the market today store either 3 or 4 bits each. Researchers at Kioxia Corp. built a modified NAND flash cell and dunked it in 77-kelvin liquid nitrogen. A single superchilled transistor could store up to 7 bits of data, or 128 different values.



Icon of a circle with a star inside.

Talented:

In 2018, engineers in Canada used an algorithm to generate all the possible unique and functional elementary circuits that can be made using just two metal-oxide field-effect transistors. The number of circuits totaled an astounding 582. Increasing the scope to three transistors netted 56,280 circuits, including several amplifiers previously unknown to engineering.



Tough:

Some transistors can take otherworldly punishment. NASA Glenn Research Center built 200-transistor silicon carbide ICs and operated them for 60 days in a chamber that simulates the environment on the surface of Venus—460 °C heat, a planetary-probe-crushing 9.3 megapascals of pressure, and the hellish planet’s corrosive atmosphere.

This article appears in the December 2022 print issue as “The State of the Transistor.”


Match ID: 8 Score: 55.71 source: spectrum.ieee.org age: 3 days
qualifiers: 55.71 nasa

NASA to Share Artemis I Update with Orion at Farthest Point from Earth
Fri, 25 Nov 2022 17:08 EST
NASA will host a news conference at 5 p.m. EST Monday, Nov. 28, from the agency’s Johnson Space Center in Houston to discuss the status of the uncrewed Artemis I flight test as the Orion spacecraft reaches the mid-point of its Moon mission and its farthest distance from Earth at nearly 270,000 miles away.
Match ID: 9 Score: 46.43 source: www.nasa.gov age: 4 days
qualifiers: 46.43 nasa

ISS Daily Summary Report – 11/25/2022
Fri, 25 Nov 2022 16:00:17 +0000
RS Extravehicular Activity (EVA) #56: During preparations for today’s Russian EVA, both the primary and backup cooling pumps on EV1’s Orlan showed unstable behavior. After unsuccessful attempts at troubleshooting, the EVA has been cancelled. The European Robotic Arm (ERA) has been returned to its stowed configuration, and the crew and ground teams are backing out of the …
Match ID: 10 Score: 46.43 source: blogs.nasa.gov age: 4 days
qualifiers: 46.43 nasa

ISS Daily Summary Report – 11/23/2022
Wed, 23 Nov 2022 16:00:24 +0000
Payloads: ISS Ham Pass: The crew participated in an ISS Ham pass with Five Bridges Junior High School, Stillwater Lake, Nova Scotia, Canada. Some of the questions asked by the students included if the ISS crew works in different shifts, what will replace the ISS in low Earth orbit if it is retired in 2030, …
Match ID: 11 Score: 27.86 source: blogs.nasa.gov age: 6 days
qualifiers: 27.86 nasa

NASA Awards Launch Services Task Order for TROPICS CubeSats Mission
Wed, 23 Nov 2022 08:35 EST
NASA has selected Rocket Lab USA Inc. of Long Beach, California, to provide the launch service for the agency’s Time-Resolved Observations of Precipitation Structure and Storm Intensity with a Constellation of Smallsats (TROPICS) mission, as part of the agency's Venture-class Acquisition of Dedicated and Rideshare (VADR) launch services contract.
Match ID: 12 Score: 27.86 source: www.nasa.gov age: 6 days
qualifiers: 27.86 nasa

US judge orders Amazon to ‘cease and desist’ anti-union retaliation
2022-11-30T02:28:43+00:00
US judge orders Amazon to ‘cease and desist’ anti-union retaliation submitted by /u/nacorom
[link] [comments]

Match ID: 13 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Body of 19-year-old man found in NSW four days after he vanished in flood waters
Wed, 30 Nov 2022 02:12:21 GMT

Death is third this month linked to NSW flooding crisis after Ljubisa ‘Les’ Vugec, 85, and Dianne Smith, 60, died in Eugowra

The body of a young man who went missing while swimming in flood waters in southern New South Wales has been found.

The discovery was made in Balranald about 8.30am on Wednesday as police began a fifth day of searching for signs of the 19-year-old.

Sign up for Guardian Australia’s free morning and afternoon email newsletters for your daily news roundup

Continue reading...
Match ID: 14 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Modern Slavery Is a Global Problem in All Renewable Energy Supply Chains: New Report
2022-11-30T01:47:19+00:00
Modern Slavery Is a Global Problem in All Renewable Energy Supply Chains: New Report submitted by /u/chrisdh79
[link] [comments]

Match ID: 15 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Taiwan 2023 semiconductor output value forecast to grow 6.1% year-on-year
2022-11-30T01:45:19+00:00
Taiwan 2023 semiconductor output value forecast to grow 6.1% year-on-year submitted by /u/Saltedline
[link] [comments]

Match ID: 16 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Eufy Security Cameras Have Been Uploading Unencrypted Footage Without Owners Knowing | Eufy apparently stores thumbnails on the cloud, even if you don't have a cloud account.
2022-11-30T01:43:39+00:00
Eufy Security Cameras Have Been Uploading Unencrypted Footage Without Owners Knowing | Eufy apparently stores thumbnails on the cloud, even if you don't have a cloud account. submitted by /u/chrisdh79
[link] [comments]

Match ID: 17 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

U.S. victory over Iran in Qatar closes the loop on 1998 defeat
Tue, 29 Nov 2022 20:38:20 EST
For two nations that have no diplomatic relations and no shortage of political enmity, there were plenty of tensions that had nothing to do with qualification.
Match ID: 18 Score: 25.00 source: www.washingtonpost.com age: 0 days
qualifiers: 25.00 mit

Jailed Belarus opposition figure Maria Kolesnikova ‘in intensive care’
Wed, 30 Nov 2022 01:03:03 GMT

‘Our dear Masha, we all hope that you will be alright!’ said Sviatlana Tsikhanouskaya, leader of trio of women who led 2020 protests

The jailed Belarusian opposition leader Maria Kolesnikova has been admitted to intensive care and undergone surgery, her allies have said.

“Maria is in the emergency hospital in Gomel, in the intensive care unit,” said the press service of Viktor Babaryko, another opposition politician.

Continue reading...
Match ID: 19 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Google’s AI Could Check For Breast Cancer
2022-11-30T00:20:08+00:00
Google’s AI Could Check For Breast Cancer submitted by /u/rchaudhary
[link] [comments]

Match ID: 20 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Online shoppers 'held out' for Cyber Week this year, Salesforce says
Tue, 29 Nov 2022 23:59:42 GMT

Online sales during this year's Cyber Week rose 2% year-over-year to a record $281 billion worldwide, Salesforce said on Tuesday, as customers zeroed in on discounts amid decades-high inflation during the holiday shopping period, which runs from Thanksgiving through Cyber Monday. The company, which analyzed shopping data from 1.5 billion consumers, said that after "lackluster" deals earlier in the season, retailers poured on the markdowns more aggressively as Cyber Week began. "The data showed a departure from Cyber Weeks in 2020 and 2021, when online shopping was increasingly spread throughout the month of November," Salesforce said in a release. "In comparison, 2022 data show both shoppers and retailers held out for Cyber Week as retailers worked to limit discounts that increased margin pressures and consumers searched for better deals - contributing to the largest spike in U.S. digital sales growth in the last six weeks." While Salesforce noted weakness in the UK and Europe, which are feeling the pain of an energy crisis, Cyber Week sales in the U.S. rose 9% from 2021 to $68 billion.

Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.


Match ID: 21 Score: 25.00 source: www.marketwatch.com age: 0 days
qualifiers: 25.00 mit

Republican leaders rebuke Trump over dinner with white supremacist
Tue, 29 Nov 2022 23:28:44 GMT

Mitch McConnell and Kevin McCarthy break silence over meeting and say no room in party for antisemitism or white supremacy

The top two Republicans in Congress have broken their silence about Donald Trump’s dinner last week with the rightwing extremist Nick Fuentes, saying the Republican party has no place for antisemitism or white supremacy.

The Senate Republican leader, Mitch McConnell, and Kevin McCarthy, who may become House speaker in January, had not commented previously on the 22 November meeting.

Continue reading...
Match ID: 22 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

GM Dealers Have Been Quietly Repairing Teslas
2022-11-29T22:54:40+00:00
GM Dealers Have Been Quietly Repairing Teslas submitted by /u/LunacyNow
[link] [comments]

Match ID: 23 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Apple Restricted AirDrop Capabilities in China Ahead of Protests
2022-11-29T22:52:26+00:00
Apple Restricted AirDrop Capabilities in China Ahead of Protests submitted by /u/je97
[link] [comments]

Match ID: 24 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

French regulator called on to withdraw licence allowing CGTN to broadcast from London
Tue, 29 Nov 2022 22:33:58 GMT

Chinese state broadcaster transmits from Chiswick studio despite Ofcom revoking UK licence last year

France’s media regulator is under pressure to withdraw a licence that allows the Chinese state broadcaster to beam its programmes across Europe from a studio in west London.

Ofcom revoked the organisation’s licence to transmit in the UK last year but the China Global Television Network (CGTN) was able to continue broadcasting following authorisation from the French authority.

Continue reading...
Match ID: 25 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Why Queer Communities Are Welcoming Armed Anti-Fascist Protection
Tue, 29 Nov 2022 22:31:38 +0000

After the Club Q massacre, there’s no comparing far-right violence and community self-defense.

The post Why Queer Communities Are Welcoming Armed Anti-Fascist Protection appeared first on The Intercept.


Match ID: 26 Score: 25.00 source: theintercept.com age: 0 days
qualifiers: 25.00 mit

McConnell: Anyone meeting with antisemites is ‘unlikely to ever be elected president’
Tue, 29 Nov 2022 17:24:38 EST
The top two Republicans in Congress condemned former president Donald Trump's dinner with Ye and Nick Fuentes, who have both espoused antisemitic views.
Match ID: 27 Score: 25.00 source: www.washingtonpost.com age: 0 days
qualifiers: 25.00 mit

Duckett and Livingstone step up as England bring Bazball to Pakistan | Ali Martin
Tue, 29 Nov 2022 22:00:01 GMT

First Test series hosted by Pakistan against England for 17 years has generated goodwill over tourists’ bold approach

Ben Stokes is an England captain keen to do things differently. It includes showing his hand early and in the case of the historic series opener against Pakistan, starting on Thursday in bustling Rawalpindi, this means a debut for Liam Livingstone and what may well feel similar for Ben Duckett after a six-year absence.

But then everything feels very new for England right now as they shuttle between hotel and ground in a convoy of armoured vehicles. The 17-year absence from Pakistan means even Jimmy Anderson is light on knowledge. The 40‑year‑old father of the house admitted he was “racking his brains” for any memories of being 12th man in 2005 that could help England to win only a third series in these parts.

Continue reading...
Match ID: 28 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

US to spend $250m on cleanup at California’s toxic Salton Sea
Tue, 29 Nov 2022 21:49:11 GMT

Move could help restore drying lake, a former resort destination that has deteriorated into an environmental crisis amid drought

The US government said on Monday it will spend up to $250m over four years to help mitigate an environmental health disaster that has been brewing in California’s Salton Sea for nearly two decades.

The inland lake, which is fed by agricultural runoff and wastewater, has slowly been shrinking, exposing a powdery shoreline laced with arsenic, selenium and DDT. Dust from the drying lake has wafted into surrounding communities, exacerbating pollution and consequently respiratory conditions in one of California’s poorest and most environmentally burdened regions.

Continue reading...
Match ID: 29 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Twitter analytics staff decimated by layoffs and resignations | Departures raise questions about whether platform can track audiences and brand safety
2022-11-29T21:30:44+00:00
submitted by /u/MortWellian
[link] [comments]
Match ID: 30 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

San Francisco Cops Propose Using Killer Robots to Fight Crime
2022-11-29T21:08:35+00:00
submitted by /u/4wtfever
[link] [comments]
Match ID: 31 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Market Extra: Most deeply inverted Treasury curve in more than 4 decades has one upbeat takeaway for investors
Tue, 29 Nov 2022 20:35:00 GMT

One of the bond market’s most reliable indicators of impending U.S. recessions is pointed in a pretty pessimistic direction right now, but contains at least one optimistic message: The Federal Reserve will remain committed to its battle on inflation and, some analysts say, should ultimately win it.

The spread between 2-year and 10-year Treasury yields is stuck at one of its most negative levels since 1981-1982 after shrinking to as little as minus 78.5 basis points on Tuesday. Over the past week, it’s even approached minus 80 basis points. The more deeply negative the spread becomes, the more worrisome of a signal it’s emitting about the severity of the next economic downturn.

Read: Bond-market recession gauge hits 41-year milestone as global growth fears mount

But there’s more than one way to read this measure: The spread also reflects the degree to which the bond market still has confidence that policy makers will do what’s needed to bring down inflation running near its highest levels of the past four decades. The policy-sensitive 2-year Treasury yield finished the New York session at 4.47% on Tuesday, and is up by 370.7 basis points since January, as traders factor in further Fed interest rate hikes. Meanwhile, the 10-year yield was at 3.75% — roughly 72 basis points below the 2-year yield, resulting in a deeply negative spread — and at a level that indicates traders aren’t factoring in a whole lot of additional premium based on the possibility of higher, long-term inflation.

Higher and stickier yields at the front end of the curve are “a sign of Fed credibility,” with the central bank seen committed to keeping monetary policy restrictive for longer to rein in inflation, said Subadra Rajappa, head of U.S. rates strategy for Société Générale. “Unfortunately, tighter policy will lead to demand destruction and lower growth, which is keeping long-end yields depressed.”

In theory, lower economic growth equates to lower inflation, which helps the Fed do its job of controlling prices. The million-dollar question in financial markets, though, is just how quickly inflation will come down to more normal levels closer to 2%. History shows that Fed rate hikes don’t have their maximum impact on inflation for about 1.5 to 2 years, according to famed economist Milton Friedman, who was cited in an August blog by Atlanta Fed researchers.

“The yield curve will likely remain inverted until there is a clear sign of a policy pivot from the Fed,” Rajappa wrote in an email to MarketWatch on Tuesday. Asked whether the deeply inverted curve indicates central bankers will ultimately be successful in curbing inflation, she said, “It is not a question of if, but when. While inflation should steadily decline over the upcoming year, strong employment and sticky services inflation might delay the outcome.”

Ordinarily, the Treasury yield curve slopes upward, not downward, when the bond market sees brighter growth prospects ahead. In addition, investors demand more compensation to hold a note or bond for a longer period of time, which also leads to an upward sloping Treasury curve. That’s part of the reason why inversions grab so much attention. And at the moment, multiple parts of the bond market, not just the 2s/10s spread, are inverted.

For Ben Jeffery, a rates strategist at BMO Capital Markets, a deeply inverted curve “shows that the Fed has moved aggressively and will keep rates on hold in restrictive territory despite a quickly dimming economic outlook.”

The 2s/10s spread hasn’t been this far below zero since the early years of Ronald Reagan’s presidency. In October 1981, when the spread shrank to as little as minus 96.8 basis points, the annual headline inflation rate from the consumer-price index was above 10%, the fed-funds rate was around 19% under then-Federal Reserve Chairman Paul Volcker, and the U.S. economy was in the midst of one of its worst downturns since the Great Depression. Volcker’s bold moves paid off, though, with the annual headline CPI rate dropping below 10% the following month and continuing to fall more steeply in the months and years that followed. Inflation didn’t rear its head again until last year and again this year, when the annual headline CPI rate went above 8% for seven straight months before dipping to 7.7% in October.

On Tuesday, Treasury yields were little changed to higher as traders assessed more hawkish rhetoric from Fed policy makers such as St. Louis Fed President James Bullard, who said on Monday that the central bank will likely need to keep its benchmark interest rate above 5% for most of next year and into 2024 to cool inflation.

Right now, “a deeply inverted yield curve signals the Fed is somewhat overtightening, but the impact on inflation may take some time to come through,” said Ben Emons, a senior portfolio manager and the head of fixed income/macro strategy at NewEdge Wealth in New York.
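As a quick arithmetic check of the figures quoted above (the yields are the article’s Tuesday closes, hard-coded here rather than pulled from live data), the 2s/10s spread in basis points is simply the 10-year yield minus the 2-year yield, times 100:

# Minimal check of the 2s/10s spread quoted above; yields are taken from the article.
two_year_yield = 4.47    # percent
ten_year_yield = 3.75    # percent

spread_bps = (ten_year_yield - two_year_yield) * 100   # 1 percentage point = 100 bps
print(f"2s/10s spread: {spread_bps:.0f} bps")           # about -72 bps, i.e. inverted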

Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.


Match ID: 32 Score: 25.00 source: www.marketwatch.com age: 0 days
qualifiers: 25.00 mit

Thurrock council admits disastrous investments caused £500m deficit
Tue, 29 Nov 2022 20:18:58 GMT

Tory-led Essex authority is on brink of bankruptcy and has appealed to government for emergency bailout

A Tory-led council has admitted a series of disastrous investments in risky commercial projects caused it to run up an unprecedented deficit of nearly £500m and brought it to the brink of bankruptcy.

The staggering scale of the catastrophe at Thurrock council in Essex – one of the biggest ever financial disasters in local government – is contained in an internal report made to the council’s cabinet, which reveals it has lost £275m on investments it made in solar energy and other businesses, and has set aside a further £130m this year to pay back investment debts.

Continue reading...
Match ID: 33 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

In a Wisconsin Trump County, and Across the U.S., Progressive Health Care Initiatives Coasted Through
Tue, 29 Nov 2022 20:01:25 +0000

Initiatives to expand health care access and reduce costs won big on election night, rattling for-profit industry lobbyists.

The post In a Wisconsin Trump County, and Across the U.S., Progressive Health Care Initiatives Coasted Through appeared first on The Intercept.


Match ID: 34 Score: 25.00 source: theintercept.com age: 0 days
qualifiers: 25.00 mit

Charities call for Windrush-style inquiry into Manston asylum failings
Tue, 29 Nov 2022 19:41:31 GMT

Letter from 44 charities urges independent investigation into ‘appalling’ treatment of people at Kent processing centre

Suella Braverman, the home secretary, is being urged by 44 leading charities to launch a Windrush-style inquiry into the crisis that engulfed Manston processing centre.

Organisations including the Refugee Council, Save the Children and the International Rescue Committee have written a letter to the Guardian seeking an independent investigation into how people seeking refuge in the UK were forced to live in cramped and insanitary conditions.

Continue reading...
Match ID: 35 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Google has to pay $9.4 million because it paid people to say they liked the Pixel 4
2022-11-29T19:28:39+00:00
Google has to pay $9.4 million because it paid people to say they liked the Pixel 4 submitted by /u/RunOrDieTrying
[link] [comments]

Match ID: 36 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Military Sim Developer Tired of Its Game Being Used to Fake War Footage
2022-11-29T19:26:41+00:00
Military Sim Developer Tired of Its Game Being Used to Fake War Footage submitted by /u/Sorin61
[link] [comments]

Match ID: 37 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Try being trustworthy! Martin Lewis’s advice to MPs seeking to restore trust in politics
Tue, 29 Nov 2022 19:03:22 GMT

The money-saving expert and most trusted man in Britain has hard – and obvious – lessons for culture select committee

Here’s a thought. One that should keep some MPs awake at night. Are politicians intrinsically untrustworthy? Are only people who are predisposed to being economical with the truth attracted to a career in public life?

Or are they a misunderstood bunch? The good guys. Just ordinary men and women who want to make the world a better place. It’s just the nature of the job that forces them into some uneasy compromises with the truth.

Continue reading...
Match ID: 38 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Security researcher says Eufy/Anker has a big security problem
2022-11-29T18:59:38+00:00
Security researcher says Eufy/Anker has a big security problem submitted by /u/Rkeriem
[link] [comments]

Match ID: 39 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Germany agrees 15-year liquid gas supply deal with Qatar
Tue, 29 Nov 2022 18:55:39 GMT

Racing to wean itself off Russian gas supplies, Germany is set to buy 2m tonnes of liquid gas from Gulf state

German firms have signed a 15-year deal to buy 2m tonnes of liquid gas from Qatar, sending out mixed signals over the priority Germany places on human rights in the Gulf and its commitment to a carbon neutral energy supply.

The deal was announced by state-owned Qatar Energy and deliveries will start from 2026. The gas will be sold by Qatar to the US company ConocoPhillips, which will then deliver it to the LNG terminal in Brunsbüttel, Qatar’s energy minister said in the capital, Doha.

Continue reading...
Match ID: 40 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Influencers were paid by Google to promote a Pixel phone they never used | The FTC says Google paid radio DJs to say they loved the Pixel 4.
2022-11-29T18:52:24+00:00
Influencers were paid by Google to promote a Pixel phone they never used | The FTC says Google paid radio DJs to say they loved the Pixel 4. submitted by /u/chrisdh79
[link] [comments]

Match ID: 41 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Sizewell C ‘confirmed’ again – this time it might be the real deal | Nils Pratley
Tue, 29 Nov 2022 18:47:31 GMT

Buying out China’s stake was inevitable but the government still has the onerous task of finding committed investors

Another day, another “confirmation” that the government plans to build the Sizewell C nuclear power plant in Suffolk – surely the “most announced” project in UK infrastructure history. The latest update, though, contained a genuine sign of seriousness: the Chinese are being paid to go away.

China General Nuclear (CGN), a state-backed firm, owned a 20% stake in the fledgling project and had, in effect, a right to subscribe to maintain its holding through the various funding rounds – just as it did at Hinkley Point C in Somerset. In practice, any form of Chinese involvement in Sizewell has been impossible for at least a year.

Continue reading...
Match ID: 42 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

UK faces ‘big, big shortages’ of free-range Christmas poultry
Tue, 29 Nov 2022 18:12:40 GMT

Half of free-range turkeys and geese grown for festive period have died or been culled due to bird flu, MPs told

Half of the free-range poultry grown for Christmas in the UK have died or been culled because of the bird flu epidemic, an industry leader has told MPs.

The British Poultry Council chief executive, Richard Griffiths, told the environment, food and rural affairs committee on Tuesday that free-range poultry had been hit “very, very hard”.

Continue reading...
Match ID: 43 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

‘Not decided yet’: David Miliband hints at political comeback
Tue, 29 Nov 2022 18:04:44 GMT

Former Labour foreign secretary does not rule out return to UK politics before next election and urges greater EU cooperation

The former foreign secretary David Miliband has fuelled speculation that he is preparing a political comeback in Britain after he said nothing had yet been decided on his return and delivered a set piece foreign policy speech urging the UK to make greater cooperation with the EU.

Miliband lost the Labour leadership to his brother Ed in 2010 and resigned from the shadow cabinet. He stood down as an MP and moved to New York in late 2013 with his family to act as the chief executive of the International Rescue Committee.

Continue reading...
Match ID: 44 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Bank of England ‘blindsided’ by Kwasi Kwarteng’s mini-budget, says governor
Tue, 29 Nov 2022 18:00:58 GMT

Andrew Bailey tells Lords committee of ‘extraordinary process’ with ‘no formal communication’ between Treasury and Bank

The governor of the Bank of England has indicated it was left blindsided by Kwasi Kwarteng’s disastrous mini-budget, describing an “extraordinary process” in which there was “no formal communication” before the chancellor unveiled his measures.

In candid evidence to the Lords economic affairs committee, Andrew Bailey said Kwarteng had broken with tradition by failing to brief the central bank, suggesting that even Treasury officials were not fully aware of his plans a day before the event.

Continue reading...
Match ID: 45 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

The Future of the Transistor Is Our Future
Tue, 29 Nov 2022 17:45:32 +0000


This is a guest post in recognition of the 75th anniversary of the invention of the transistor. It is adapted from an essay in the July 2022 IEEE Electron Device Society Newsletter. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

On the 75th anniversary of the invention of the transistor, a device to which I have devoted my entire career, I’d like to answer two questions: Does the world need better transistors? And if so, what will they be like?

I would argue that, yes, we are going to need new transistors, and I think we have some hints today of what they will be like. Whether we’ll have the will and economic ability to make them is the question.

I believe the transistor is and will remain key to grappling with the impacts of global warming. With its potential for societal, economic, and personal upheaval, climate change calls for tools that give us humans orders-of-magnitude more capability.

Semiconductors can raise the abilities of humanity like no other technology. Almost by definition, all technologies increase human abilities. But for most of them, natural resources and energy constraints make orders-of-magnitude improvements questionable. Transistor-enabled technology is a unique exception for the following reasons.

  1. As transistors improve, they enable new abilities such as computing and high-speed communication, the Internet, smartphones, memory and storage, robotics, artificial intelligence, and other things no one has thought of yet.
  2. These abilities have wide applications, and they transform all technologies, industries, and sciences.
    a. Semiconductor technology is not nearly as limited in growth by its material and energy usage as other technologies. ICs use relatively small amounts of materials, and they keep being made smaller: the less material they use, the faster, more energy efficient, and more capable they become.
  3. Theoretically, the energy required for information processing can still be reduced to less than one-thousandth of what is required today (a rough estimate follows this list). Although we do not yet know exactly how to approach such theoretical efficiency, we know that increasing energy efficiency a thousandfold would not violate physical laws. In contrast, the energy efficiencies of most other technologies, such as motors and lighting, are already at 30 to 80 percent of their theoretical limits.
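A rough sense of that headroom (a back-of-the-envelope estimate; it assumes the Landauer bound as the theoretical floor and roughly one femtojoule per switching event as a stand-in for today’s logic, neither of which is stated in the essay):

$$E_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693) \approx 2.9\times10^{-21}\,\mathrm{J},$$

which sits several orders of magnitude below a ~1-fJ switching energy, so a further thousandfold reduction would not bump into physical limits.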

Transistors: past, present, and future

How we’ll continue to improve transistor technology is relatively clear in the short term, but it gets murkier the farther out you go from today. In the near term, you can glimpse the transistor’s future by looking at its recent past.

The basic planar (2D) MOSFET structure remained unchanged from 1960 until around 2010, when it became impossible to further increase transistor density and decrease the device’s power consumption. My lab at the University of California, Berkeley, saw that point coming more than a decade earlier. We reported the invention of the FinFET, the planar transistor’s successor, in 1999. FinFET, the first 3D MOSFET, changed the flat and wide transistor structure to a tall and narrow one. The benefit is better performance in a smaller footprint, much like the benefit of multistory buildings over single-story ones in a crowded city.

The FinFET is also what’s called a thin-body MOSFET, a concept that continues to guide the development of new devices. It arose from the insight that current will not leak through a transistor within several nanometers of the silicon surface because the surface potential there is well controlled by the gate voltage. FinFETs take this thin-body concept to heart. The device’s body is the vertical silicon fin, which is covered by oxide insulator and gate metal, leaving no silicon outside the range of strong gate control. FinFETs reduced leakage current by orders of magnitude and lowered transistor operating voltage. They also pointed toward the path for further improvement: reducing the body thickness even more.

The fin of the FinFET has become thinner and taller with each new technology node. But this progress has now become too difficult to maintain. So industry is adopting a new 3D thin-body CMOS structure, called gate-all-around (GAA). Here, a stack of semiconductor ribbons makes up the thin body.

Each evolution of the MOSFET structure has been aimed at producing better control over charge in the silicon by the gate [pink]. Dielectric [yellow] prevents charge from moving from the gate into the silicon body [blue].

The 3D thin-body trend will continue from these 3D transistors to 3D-stacked transistors, 3D monolithic circuits, and multichip packaging. In some cases, this 3D trend has already reached great heights. For instance, the regularity of the charge-trap memory-transistor array allowed NAND flash memory to be the first IC to transition from 2D circuits to 3D circuits. Since the first report of 3D NAND by Toshiba in 2007, the number of stacked layers has grown from 4 to beyond 200.

Monolithic 3D logic ICs will likely start modestly, with stacking the two transistors of a CMOS inverter to reduce all logic gates’ footprints [see “3D-Stacked CMOS Takes Moore’s Law to New Heights”]. But the number of stacks may grow. Other paths to 3D ICs may employ the transfer or deposition of additional layers of semiconductor films, such as silicon, silicon germanium, or indium gallium arsenide onto a silicon wafer.

The thin-body trend might meet its ultimate endpoint in 2D semiconductors, whose thickness is measured in atoms. Molybdenum disulfide molecules, for example, are both naturally thin and relatively large, forming a 2D semiconductor that may be no more than three atoms thick yet have very good semiconductor properties. In 2016, engineers in California and Texas used a film of the 2D-semiconductor molecule molybdenum disulfide and a carbon nanotube to demonstrate a MOSFET with a critical dimension: a gate length just 1 nanometer across. Even with a gate as short as 1 nm, the transistor leakage current was only 10 nanoamperes per millimeter, comparable with today’s best production transistor.

“The progress of transistor technology has not been even or smooth.”

One can imagine that in the distant future, the entire transistor may be prefabricated as a single molecule. These prefabricated building blocks might be brought to their precise locations in an IC through a process called directed-self-assembly (DSA). To understand DSA, it may be helpful to recall that a COVID virus uses its spikes to find and chemically dock itself onto an exact spot at the surface of particular human cells. In DSA, the docking spots, the “spikes,” and the transistor cargo are all carefully designed and manufactured. The initial docking spots may be created with lithography on a substrate, but additional docking spots may be brought in as cargo in subsequent steps. Some of the cargo may be removed by heat or other means if they are needed only during the fabrication process but not in the final product.

Besides making transistors smaller, we’ll have to keep reducing their power consumption. Here we could see an order-of-magnitude reduction through the use of what are called negative-capacitance field-effect transistors (NCFET). These require the insertion of a nanometer-thin layer of ferroelectric material, such as hafnium zirconium oxide, in the MOSFET’s gate stack. Because the ferroelectric contains its own internal electric field, it takes less energy to switch the device on or off. An additional advantage of the thin ferroelectric is the possible use of the ferroelectric’s capacity to store a bit as the state of its electric field, thereby integrating memory and computing in the same device.

The author [left] received the U.S. National Medal of Technology and Innovation from President Barack Obama [right] in 2016. Kevin Dietsch/UPI/Alamy

To some degree the devices I’ve described arose out of existing trends. But future transistors may have very different materials, structures, and operating mechanisms from those of today’s transistor. For example, the nanoelectromechanical switch is a return to the mechanical relays of decades past rather than an extension of the transistor. Rather than relying on the physics of semiconductors, it uses only metals, dielectrics, and the force between closely spaced conductors with different voltages applied to them.

All these examples were demonstrated experimentally years ago. However, bringing them to production will require much more time and effort than previous breakthroughs in semiconductor technology.

Getting to the future

Will we be able to achieve these feats? Some lessons from the past indicate that we could.

The first lesson is that the progress of transistor technology has not been even or smooth. Around 1980, the rising power consumption per chip reached a painful level. The adoption of CMOS, replacing NMOS and bipolar technologies—and later, the gradual reduction of operation voltage from 5 volts to 1—gave the industry 30 years of more or less straightforward progress. But again, power became an issue. Between 2000 and 2010, the heat generated per square centimeter of IC was projected by thoughtful researchers to soon reach that of the nuclear-reactor core. The adoption of 3D thin-body FinFET and multicore processor architectures averted the crisis and ushered in another period of relatively smooth progress.

The history of transistor technology may be described as climbing one mountain after another. Only when we got to the top of one were we able to see the vista beyond and map a route to climb the next taller and steeper mountain.

The second lesson is that the core strength of the semiconductor industry—nanofabrication—is formidable. History proves that, given sufficient time and economic incentives, the industry has been able to turn any idea into reality, as long as that idea does not violate scientific laws.

But will the industry have sufficient time and economic incentives to continue climbing taller and steeper mountains and keep raising humanity’s abilities?

It’s a fair question. Even as the fab industry’s resources grow, the mountains of technology development grow even faster. A time may come when no one fab company can reach the top of the mountain to see the path ahead. What happens then?

The revenue of all semiconductor fabs (both independent and those, like Intel, that are integrated companies) is about one-third of the semiconductor industry revenue. But fabs make up just 2 percent of the combined revenues of the IT, telecommunications, and consumer-electronics industries that semiconductor technology enables. Yet the fab industry bears most of the growing burden of discovering, producing, and marketing new transistors and nanofabrication technologies. That needs to change.

For the industry to survive, the relatively meager resources of the fab industry must be prioritized in favor of fab building and shareholder needs over scientific exploration. While the fab industry is lengthening its research time horizon, it needs others to take on the burden too. Humanity’s long-term problem-solving abilities deserve targeted public support. The industry needs the help of very-long-term exploratory research, publicly funded, in a Bell Labs–like setting or by university researchers with career-long timelines and wider and deeper knowledge in physics, chemistry, biology, and algorithms than corporate research currently allows. This way, humanity will continue to find new transistors and gain the abilities it will need to face the challenges in the centuries ahead.


Match ID: 46 Score: 25.00 source: spectrum.ieee.org age: 0 days
qualifiers: 25.00 mit

Twitter is no longer enforcing its Covid misinformation policy
2022-11-29T17:42:09+00:00
Twitter is no longer enforcing its Covid misinformation policy submitted by /u/mcmeaningoflife42
[link] [comments]

Match ID: 47 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Schumer says he and McConnell agree on getting bill preventing rail strike done ASAP
Tue, 29 Nov 2022 17:25:04 GMT

Senate Majority Leader Chuck Schumer, a New York Democrat, told reporters Tuesday that he and Senate Minority Leader Mitch McConnell, a Kentucky Republican, have agreed to try to pass legislation that would prevent a U.S. railroad strike as soon as possible. Schumer's comment adds to the optimistic talk about averting a strike, as President Joe Biden said earlier Tuesday that he's confident that it can be avoided, and an analyst said there appears to be bipartisan support in Congress to act. House Speaker Nancy Pelosi, a California Democrat, said a strike must be avoided and her chamber would pass the necessary legislation on Wednesday.

Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.


Match ID: 48 Score: 25.00 source: www.marketwatch.com age: 0 days
qualifiers: 25.00 mit

Left-Wing Voices Are Silenced on Twitter as Far-Right Trolls Advise Elon Musk
Tue, 29 Nov 2022 17:20:17 +0000

Elon Musk appears to have out-sourced decisions about who to ban from Twitter to the platform's right-wing extremists.

The post Left-Wing Voices Are Silenced on Twitter as Far-Right Trolls Advise Elon Musk appeared first on The Intercept.


Match ID: 49 Score: 25.00 source: theintercept.com age: 0 days
qualifiers: 25.00 mit

Balenciaga apologises for ads featuring bondage bears and child abuse papers
Tue, 29 Nov 2022 17:00:49 GMT

Brand admits ‘grievous errors’ after backlash to campaigns that included images of children holding BDSM teddies

The luxury fashion house Balenciaga has apologised over adverts that showed children holding teddy bears in bondage gear and others featuring documents about child sexual abuse law as a prop.

The brand admitted “a series of grievous errors for which Balenciaga takes responsibility” over the now withdrawn campaigns, which have prompted furious criticism online and a $25m (£21m) lawsuit against the team behind one of them.

Continue reading...
Match ID: 50 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Forced Uyghur labor is being used in China's solar panel supply chain, researchers say
2022-11-29T16:53:49+00:00
Forced Uyghur labor is being used in China's solar panel supply chain, researchers say submitted by /u/chrisdh79
[link] [comments]

Match ID: 51 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Facebook (Meta) Fined $275 Million in Data Leak Case
2022-11-29T16:24:54+00:00
Facebook (Meta) Fined $275 Million in Data Leak Case submitted by /u/jms1225
[link] [comments]

Match ID: 52 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Alibaba founder Jack Ma hiding out in Tokyo, reports say
Tue, 29 Nov 2022 16:15:25 GMT

Billionaire rarely seen in public since criticising attitude of China’s regulators towards tech firms in 2020

The billionaire Jack Ma has reportedly been hiding out in Tokyo with his family during Beijing’s crackdown on the country’s star tech firms and its most powerful and wealthy business people.

Ma, the founder of the e-commerce giant Alibaba who until the tech clampdown was China’s richest person, has rarely been seen in public since criticising the attitude of Chinese regulators towards tech companies at a summit in Shanghai two years ago.

Continue reading...
Match ID: 53 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Philly DA Larry Krasner: In Midterms, Democrats Went “Republican-Lite” on Crime
Tue, 29 Nov 2022 16:10:49 +0000

The embattled progressive district attorney said election victories show how Democrats can win big — by leaning into criminal justice reforms.

The post Philly DA Larry Krasner: In Midterms, Democrats Went “Republican-Lite” on Crime appeared first on The Intercept.


Match ID: 54 Score: 25.00 source: theintercept.com age: 0 days
qualifiers: 25.00 mit

Google and YouTube are investing to fight misinformation
2022-11-29T14:49:35+00:00
Google and YouTube are investing to fight misinformation submitted by /u/Wagamaga
[link] [comments]

Match ID: 55 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Orion flies far beyond the Moon, returns an instantly iconic photo — "It’s really hard to articulate what the feeling is."
2022-11-29T14:47:31+00:00
Orion flies far beyond the Moon, returns an instantly iconic photo — "It’s really hard to articulate what the feeling is." submitted by /u/marketrent
[link] [comments]

Match ID: 56 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Riocentro 1981 is here and now: Jair Bolsonaro’s legacy is the return of terrorist attacks
Tue, 29 Nov 2022 14:07:10 +0000

A far right with intricate connections to the state public-security apparatus is once again staging attacks of a kind not seen in almost 40 years.

The post Riocentro 1981 is here and now: Jair Bolsonaro’s legacy is the return of terrorist attacks appeared first on The Intercept.


Match ID: 57 Score: 25.00 source: theintercept.com age: 0 days
qualifiers: 25.00 mit

It’s not your imagination: Shopping on Amazon has gotten worse - Everything on Amazon is becoming an ad
2022-11-29T13:51:17+00:00
It’s not your imagination: Shopping on Amazon has gotten worse - Everything on Amazon is becoming an ad submitted by /u/speckz
[link] [comments]

Match ID: 58 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Virtu Financial launches FOIA lawsuit against SEC for failing to provide data on rulemaking process
Tue, 29 Nov 2022 13:47:30 GMT

Market maker Virtu Financial Inc. said Tuesday it has launched a Freedom of Information Act (FOIA) lawsuit against its own regulator, the Securities and Exchange Commission, over the latter's failure to provide information about its rulemaking process and interactions of the chair with interested parties. Virtu said the suit was filed in the U.S. District Court of the Southern District of New York and that it's being represented by Paul, Weiss, Rifkind, Wharton & Garrison LLP. Virtu said it submitted a FOIA request in June seeking to find out if the SEC's rulemaking process "included the legally required evaluation of potential investor harm and market risks, whether the SEC has solicited input from sufficiently broad sources and whether it had considered objective data before the Chair instructed the SEC staff to prepare new rule proposals for retail stock order handling and execution," the company said in a statement. After almost six months, it has not yet received a single responsive document from the SEC. T"he SEC's continued delay raises a significant concern and rather than speculating as to why basic information is being withheld, Virtu was compelled to initiate this action," said the statement. Specifically, the company is seeking to find out who the SEC has met with, the nature of the meetings and how they influenced the direction of the chair's proposals. The SEC is currently seeking to revamp how small retail orders are handled. Virtu shares were up 0.9% premarket, but have fallen 23% in the year to date, while the S&P 500 has fallen 17%.

Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.


Match ID: 59 Score: 25.00 source: www.marketwatch.com age: 0 days
qualifiers: 25.00 mit

Qatar official says ‘400-500’ migrant workers died on World Cup projects
Tue, 29 Nov 2022 13:07:00 GMT
  • Hassan al-Thawadi makes comment in TV interview
  • Hosts accused of ‘inexcusable lack of transparency’

The Qatari official responsible for delivery of the 2022 World Cup has said the number of migrant workers who have died on World Cup-related projects is “between 400 and 500”.

Hassan al-Thawadi, the secretary general of the Supreme Committee for delivery and legacy, made the admission in an interview but said a precise figure for the number of fatalities was still “being discussed”.

Continue reading...
Match ID: 60 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Big polluters given almost €100bn in free carbon permits by EU
Tue, 29 Nov 2022 13:00:48 GMT

Free allowances ‘in direct contradiction with the polluter pays principle’, WWF report says

Big polluting industries have been given almost €100bn (£86bn) in free carbon permits by the EU in the last nine years, according to an analysis by the WWF. The free allowances are “in direct contradiction with the polluter pays principle”, the group said.

Free pollution permits worth €98.5bn were given to energy-intensive sectors including steel, cement, chemicals and aviation from 2013-21. This is more than the €88.5bn that the EU’s emissions trading scheme (ETS) charged polluters, mostly coal and gas power stations, for their CO2 emissions.

Continue reading...
Match ID: 61 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Rolls-Royce successfully tests hydrogen-powered jet engine | Britain's Rolls-Royce said it has successfully run an aircraft engine on hydrogen, a world aviation first that marks a major step towards proving the gas could be key to decarbonising air travel.
2022-11-29T12:42:25+00:00
Rolls-Royce successfully tests hydrogen-powered jet engine | Britain's Rolls-Royce said it has successfully run an aircraft engine on hydrogen, a world aviation first that marks a major step towards proving the gas could be key to decarbonising air travel. submitted by /u/yourSAS
[link] [comments]

Match ID: 62 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

In U.S. Military, Sexual Assault Against Men Is Vastly Underreported
Tue, 29 Nov 2022 12:00:17 +0000

An average of 45 male servicemembers are sexually assaulted every day, according to Pentagon statistics. As with women, shame and stigma suppress the truth.

The post In U.S. Military, Sexual Assault Against Men Is Vastly Underreported appeared first on The Intercept.


Match ID: 63 Score: 25.00 source: theintercept.com age: 0 days
qualifiers: 25.00 mit

Social media firms face big UK fines if they fail to stop sexist and racist content
2022-11-29T11:28:23+00:00
Social media firms face big UK fines if they fail to stop sexist and racist content submitted by /u/diacewrb
[link] [comments]

Match ID: 64 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Philly’s Reform Prosecutor Reacts to His Impeachment
Tue, 29 Nov 2022 11:00:22 +0000

Pennsylvania lawmakers accused Philadelphia District Attorney Larry Krasner of causing a crime “crisis.”

The post Philly’s Reform Prosecutor Reacts to His Impeachment appeared first on The Intercept.


Match ID: 65 Score: 25.00 source: theintercept.com age: 0 days
qualifiers: 25.00 mit

Canada won’t compromise values in relations with China, says foreign minister
Tue, 29 Nov 2022 10:55:45 GMT

Exclusive: as the two nations prepare to co-host Cop15, Mélanie Joly discusses Canada’s long-awaited Indo-Pacific strategy

Canada will work with China when needed – but challenge it when necessary, the country’s foreign minister said, as the two nations prepare to co-host a major environmental summit despite years of diplomatic tensions.

Speaking to the Guardian after her government released its long-awaited “Indo-Pacific Strategy”, Mélanie Joly said that Canada will “promote and defend” its national interests in a region where nations are jockeying for influence and power.

Continue reading...
Match ID: 66 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

UK waters down internet rules plan after free speech outcry
2022-11-29T10:30:39+00:00
UK waters down internet rules plan after free speech outcry submitted by /u/Sorin61
[link] [comments]

Match ID: 67 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Google provided investigators with location data for more than 5,000 devices as part of the federal investigation into the attack on the US Capitol.
2022-11-29T09:07:42+00:00
Google provided investigators with location data for more than 5,000 devices as part of the federal investigation into the attack on the US Capitol. submitted by /u/TacticalBeerCozy
[link] [comments]

Match ID: 68 Score: 25.00 source: www.reddit.com age: 0 days
qualifiers: 25.00 mit

Army misleads coup supporters in São Paulo
Tue, 29 Nov 2022 09:04:26 +0000

A community-aid drive for homeless people frustrates Bolsonaro supporters convinced that the armed forces are fully aligned with the push for a coup.

The post Army misleads coup supporters in São Paulo appeared first on The Intercept.


Match ID: 69 Score: 25.00 source: theintercept.com age: 0 days
qualifiers: 25.00 mit

Onset of winter in Ukraine – a photo essay
Tue, 29 Nov 2022 08:00:22 GMT

After Russian attacks on energy infrastructure, this is expected to be the hardest winter in Ukraine since the second world war

The defining image of the current state of the war in Ukraine – and at the same time a metaphor for the entire country – is the snow-covered rubble of a residential building split in two by a Russian bomb in the town of Borodianka.

Some people who lived in that building have died as a result of the conflict. Others have managed to escape, joining the great exodus of refugees toward Europe. Some are now living in a dormitory on the outskirts of the town, struggling, like millions of other Ukrainians, with the nightmare of what is expected to be the hardest winter in the country’s history since the second world war.

Top left: residents of Borodianka disembark from a bus in a street heavily bombed by Russia. Top right: the rubble of a house in Borodianka. Above: an elderly woman walks by the snow-covered rubble of a residential building in Borodianka

Continue reading...
Match ID: 70 Score: 25.00 source: www.theguardian.com age: 0 days
qualifiers: 25.00 mit

Pence, other Republicans issue rare rebuke of Trump over dinner with Fuentes and Ye
Mon, 28 Nov 2022 21:56:12 EST
Trump's former vice president is the highest-profile Republican to criticize the meal with someone known to espouse antisemitic views.
Match ID: 71 Score: 25.00 source: www.washingtonpost.com age: 1 day
qualifiers: 25.00 mit

Hans Magnus Enzensberger, German poet and intellectual, dies at 93
Mon, 28 Nov 2022 20:49:27 EST
His unorthodox poems and essays made him one of postwar Germany’s leading authors. He also found a global audience with his children’s book “The Number Devil.”
Match ID: 72 Score: 25.00 source: www.washingtonpost.com age: 1 day
qualifiers: 25.00 mit

Comcast’s sneaky Broadcast TV fee hits $27, making a mockery of advertised rates
2022-11-28T23:18:36+00:00
Comcast’s sneaky Broadcast TV fee hits $27, making a mockery of advertised rates submitted by /u/mepper
[link] [comments]

Match ID: 73 Score: 25.00 source: www.reddit.com age: 1 day
qualifiers: 25.00 mit

Patent detects in-game “collusion” by tracking “external connections”
Mon, 28 Nov 2022 23:01:14 +0000
Algorithm also analyzes in-game data to find opponents secretly working together.
Match ID: 74 Score: 25.00 source: arstechnica.com age: 1 day
qualifiers: 25.00 mit

Neuberger wins clearance to manage assets in China for Chinese residents
Mon, 28 Nov 2022 12:39:44 -0500
Neuberger Berman said Monday it became the second global institution to receive final approval from the China Securities Regulatory Commission (CSRC) to launch a wholly owned, newly established mutual fund business in China. Neuberger Berman will now be allowed to manage local assets for local clients, which has not been allowed previously. BlackRock Inc. was the first to receive approval. Patrick Liu, CEO of Neuberger Berman Fund Management (China) (FMC), said the country's commitment to opening up to high-quality financial services "will bring significant opportunities for local investors." Michelle Wei will become chief investment officer - equities of the FMC.
Match ID: 75 Score: 25.00 source: www.marketwatch.com age: 1 day
qualifiers: 25.00 mit

Human rights, LGBTQ+ organizations oppose Kids Online Safety Act
2022-11-28T17:31:43+00:00
Human rights, LGBTQ+ organizations oppose Kids Online Safety Act submitted by /u/EmbarrassedHelp
[link] [comments]

Match ID: 76 Score: 25.00 source: www.reddit.com age: 1 day
qualifiers: 25.00 mit

Lobbyist for Saudi Alfalfa Company Desiccating Arizona Was Elected to Maricopa County Board of Supervisors
Mon, 28 Nov 2022 17:24:10 +0000

Thomas Galvin lobbied on behalf of a Saudi company soaking up Arizona’s groundwater. He is now mediating an ongoing water dispute in neighboring Maricopa County.

The post Lobbyist for Saudi Alfalfa Company Desiccating Arizona Was Elected to Maricopa County Board of Supervisors appeared first on The Intercept.


Match ID: 77 Score: 25.00 source: theintercept.com age: 1 day
qualifiers: 25.00 mit

OneLove armband sends ‘very divisive message’, says Qatar official
Mon, 28 Nov 2022 15:28:54 GMT

Head of World Cup organising committee says he sees rainbow armband as protest against Islamic values

The head of Qatar’s World Cup organising committee has accused teams who wanted to wear the OneLove armband at the World Cup of sending a “very divisive message” to the Islamic and Arab world.

Hassan al-Thawadi’s comments came as the UK sports minister Stuart Andrew said he would wear the rainbow-coloured armband at the England v Wales match on Tuesday.

Continue reading...
Match ID: 78 Score: 25.00 source: www.theguardian.com age: 1 day
qualifiers: 25.00 mit

67 Best Cyber Monday Deals Under $50 (2022): Electric Toothbrushes, Games, and More
Mon, 28 Nov 2022 11:56:00 +0000
Limited budget? These affordable gizmos are WIRED favorites, and now they're even cheaper.
Match ID: 79 Score: 25.00 source: www.wired.com age: 1 day
qualifiers: 25.00 mit

“The Frantic Adding Machine,” by Simon Armitage
Mon, 28 Nov 2022 11:00:00 +0000
Poetry by Simon Armitage: “The waspman cometh // in T-shirt and shorts.”
Match ID: 80 Score: 25.00 source: www.newyorker.com age: 1 day
qualifiers: 25.00 mit

Yam Karkai’s Illustrations Made Her an N.F.T. Sensation. Now What?
Mon, 28 Nov 2022 11:00:00 +0000
World of Women confronts the limits of selling cartoon avatars on the blockchain after the crypto bubble burst.
Match ID: 81 Score: 25.00 source: www.newyorker.com age: 1 day
qualifiers: 25.00 mit

Cartoon Caption Contest
Mon, 28 Nov 2022 05:05:00 +0000
Submit your caption.
Match ID: 82 Score: 25.00 source: www.newyorker.com age: 1 day
qualifiers: 25.00 mit

UK households have cut energy consumption by 10%, say suppliers
Sun, 27 Nov 2022 15:30:08 GMT

E.ON reports up to 15% drop as Grant Shapps writes to firms saying customers cutting back on energy use should not face direct debit rise

Britons have cut their gas and electricity use by more than 10% since October in the first evidence of the impact of the energy crisis on household habits, according to two of Britain’s biggest suppliers.

E.ON, Britain’s second-largest supplier, and Telecom Plus, which owns Utility Warehouse, have reported “double-digit” declines in recent weeks.

Continue reading...
Match ID: 83 Score: 25.00 source: www.theguardian.com age: 2 days
qualifiers: 25.00 mit

A Special Prosecutor Found Kevin Johnson’s Case Was Tainted by Racism. Missouri Is About to Kill Him Anyway.
Sun, 27 Nov 2022 15:15:12 +0000

Kevin Johnson is facing execution for killing a cop when he was 19. A special prosecutor says his sentence should be vacated.

The post A Special Prosecutor Found Kevin Johnson’s Case Was Tainted by Racism. Missouri Is About to Kill Him Anyway. appeared first on The Intercept.


Match ID: 84 Score: 25.00 source: theintercept.com age: 2 days
qualifiers: 25.00 mit

Barbados plans to make Tory MP pay reparations for family’s slave past
Sat, 26 Nov 2022 17:16:51 GMT

Richard Drax reported to have visited Caribbean island for meeting on next steps, including plans for former sugar plantation

The government of Barbados is considering plans to make a wealthy Conservative MP the first individual to pay reparations for his ancestor’s pivotal role in slavery.

The Observer understands that Richard Drax, MP for South Dorset, recently travelled to the Caribbean island for a private meeting with the country’s prime minister, Mia Mottley. A report is now before Mottley’s cabinet laying out the next steps, which include legal action in the event that no agreement is reached with Drax.

Continue reading...
Match ID: 85 Score: 21.43 source: www.theguardian.com age: 3 days
qualifiers: 21.43 mit

Egypt's Climate Summit Was a "Rehearsal" for COP28 in Dubai
Sat, 26 Nov 2022 16:00:12 +0000

Even in a police state, climate justice and human rights advocates found ways to come together and make their voices heard, but some worry about what comes next.

The post Egypt’s Climate Summit Was a “Rehearsal” for COP28 in Dubai appeared first on The Intercept.


Match ID: 86 Score: 21.43 source: theintercept.com age: 3 days
qualifiers: 21.43 mit

A Criminal Ratted Out His Friend to the FBI. Now He's Trying to Make Amends.
Sat, 26 Nov 2022 12:00:23 +0000

The FBI paid a convicted sex offender $90,000 to set up his friend and his friend’s mentally ill buddy in a terrorism sting.

The post A Criminal Ratted Out His Friend to the FBI. Now He’s Trying to Make Amends. appeared first on The Intercept.


Match ID: 87 Score: 21.43 source: theintercept.com age: 3 days
qualifiers: 21.43 mit

We have to stop treating it as a joke and start seeing it as terrorism
Sat, 26 Nov 2022 09:03:29 +0000

Coup-mongering actions on the highways and in front of army barracks are escalating the violence and raising the stakes to defy the judiciary.

The post Temos que deixar de olhar como piada e enxergar como terrorismo appeared first on The Intercept.


Match ID: 88 Score: 21.43 source: theintercept.com age: 3 days
qualifiers: 21.43 mit

NASA Updates Coverage for Next SpaceX Resupply Launch to Space Station
Tue, 22 Nov 2022 17:24 EST
NASA and SpaceX now are targeting 2:20 p.m. EST Saturday, Nov. 26, to launch the company’s 26th commercial resupply mission to the International Space Station.
Match ID: 89 Score: 18.57 source: www.nasa.gov age: 7 days
qualifiers: 18.57 nasa

NASA Invites Public Comment on Plans for Mars Sample Return Campaign
Tue, 22 Nov 2022 11:31 EST
NASA is seeking public comments on a draft environmental impact statement for the agency’s Mars Sample Return (MSR) campaign. Comments are due by Monday, Dec. 19.
Match ID: 90 Score: 18.57 source: www.nasa.gov age: 7 days
qualifiers: 18.57 nasa

Video Friday: Turkey Sandwich
Fri, 25 Nov 2022 17:13:24 +0000


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

Happy Thanksgiving, for those who celebrate it. Now spend 10 minutes watching a telepresence robot assemble a turkey sandwich.

[ Sanctuary ]

Ayato Kanada, an assistant professor at Kyushu University, in Japan, wrote in to share “the world’s simplest omnidirectional mobile robot.”

We propose a palm-sized omnidirectional mobile robot with two torus wheels. A single torus wheel is made of an elastic elongated coil spring in which the two ends of the coil are connected to each other and is driven by a piezoelectric actuator (stator) that can generate 2-degrees-of-freedom (axial and angular) motions. The stator converts its thrust force and torque into longitudinal and meridian motions of the torus wheel, respectively, making the torus work as an omnidirectional wheel on a plane.

[ Paper ]

Thanks, Ayato!
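If it helps to picture how two such wheels give planar omnidirectional motion, here is a minimal, hypothetical kinematic sketch of the torus-wheel robot described above: each wheel's axial stator drive is assumed to produce a rolling (longitudinal) speed and its angular drive a sideways (meridian) speed, which are then combined into body motion. The gains, wheel offset, and mapping are illustrative guesses, not values from the paper.

```python
import numpy as np

# Minimal, illustrative model of a two-torus-wheel omnidirectional base.
# Assumption: each wheel contributes a longitudinal velocity (from the
# stator's axial motion) and a lateral velocity (from its angular motion).
# The wheel spacing and gain values below are hypothetical.

WHEEL_OFFSET = 0.02   # m, distance of each wheel from the body center (assumed)
K_AXIAL = 1e-3        # m/s per unit axial drive (assumed gain)
K_ANGULAR = 1e-3      # m/s per unit angular drive (assumed gain)

def body_velocity(axial_cmds, angular_cmds):
    """Map per-wheel (axial, angular) stator commands to body (vx, vy, wz)."""
    # Wheel 0 sits at +y, wheel 1 at -y; both roll along the body x axis.
    v_long = K_AXIAL * np.asarray(axial_cmds)      # forward speed of each wheel
    v_lat = K_ANGULAR * np.asarray(angular_cmds)   # sideways speed of each wheel
    vx = v_long.mean()                             # shared forward motion
    vy = v_lat.mean()                              # shared sideways motion
    wz = (v_long[0] - v_long[1]) / (2 * WHEEL_OFFSET)  # differential rolling -> yaw
    return vx, vy, wz

print(body_velocity(axial_cmds=[1.0, 1.0], angular_cmds=[0.5, 0.5]))
```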

This work, entitled “Virtually turning robotic manipulators into worn devices: opening new horizons for wearable assistive robotics,” proposes a novel hybrid system using a virtually worn robotic arm in augmented reality, and a real robotic manipulator servoed on such a virtual representation. We basically aim at creating the illusion of wearing a robotic system while its weight is fully supported. We believe that this approach could offer a solution to the critical challenge of weight and discomfort caused by robotic sensorimotor extensions—such as supernumerary robotic limbs (SRL), prostheses, or handheld tools—and open new horizons for the development of wearable robotics.

[ Paper ]

Thanks, Nathanaël!

Engineers at Georgia Tech are the first to study the mechanics of springtails, which leap in the water to avoid predators. The researchers learned how the tiny hexapods control their jumps, self-right in midair, and land on their feet in the blink of an eye. The team used the findings to build penny-size jumping robots.

[ Georgia Tech ]

Thanks, Jason!

The European Space Agency (ESA) and the European Space Resources Innovation Centre (ESRIC) have asked European space industries and research institutions to develop innovative technologies for the exploration of resources on the moon in the framework of the ESA-ESRIC Space Resources Challenge. As part of the challenge, teams of engineers have developed vehicles capable of prospecting for resources in a test-bed simulating the moon’s shaded polar regions. From 5 to 9 September 2022, the final of the ESA-ESRIC Space Resources Challenge took place at the Rockhal in Esch-sur-Alzette. On this occasion, lunar rover prototypes competed on a 1,800-square-meter “lunar” terrain. The winning team will have the opportunity to have their technology implemented on the moon.

[ ESA ]

Thanks, Arne!

If only cobots were as easy to use as this video from Kuka makes it seem.

The Kuka website doesn’t say how much this thing costs, which means it’s almost certainly not something that you impulse buy.

[ Kuka ]

We present the tensegrity aerial vehicle, a design of collision-resilient rotor robots with icosahedron tensegrity structures. With collision resilience and reorientation ability, the tensegrity aerial vehicles can operate in cluttered environments without complex collision-avoidance strategies. These capabilities are validated by a test of an experimental tensegrity aerial vehicle operating with only onboard inertial sensors in a previously unknown forest.

[ HiPeR Lab ]

The robotics research group Brubotics and the polymer-science and physical-chemistry group FYSC of the University of Brussels have together developed self-healing materials that can be scratched, punctured, or even completely cut through and then heal themselves, either with applied heat or at room temperature.

[ Brubotics ]

Apparently, the World Cup needs more drone footage, because this is kinda neat.

[ DJI ]

Researchers at MIT’s Center for Bits and Atoms have made significant progress toward creating robots that could build nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots.

[ MIT ]

Researchers from North Carolina State University have recently developed a fast and efficient soft robotic swimmer whose motions resemble a human’s butterfly stroke. It achieves a high average swimming speed of 3.74 body lengths per second, close to five times as fast as the fastest comparable soft swimmers, along with high power efficiency and a low energy cost.

[ NC State ]

For sensing and physical interaction in remote and/or constrained environments, high-extension, lightweight robot manipulators are easier to transport and can reach substantially farther than traditional serial-chain manipulators. We propose a novel planar 3-degrees-of-freedom manipulator that achieves low weight and high extension through the use of a pair of spooling bistable tapes, commonly used in self-retracting tape measures, which are pinched together to form a reconfigurable revolute joint.

[ Charm Lab ]

SLURP!

[ River Lab ]

This video may encourage you to buy a drone. Or a snowmobile.

[ Skydio ]

Moxie is getting an update for the holidays!

[ Embodied ]

Robotics professor Henny Admoni answers the Internet’s burning questions about robots! How do you program a personality? Can robots pick up a single M&M? Why do we keep making humanoid robots? What is Elon Musk’s goal for the Tesla Optimus robot? Will robots take over my job writing video descriptions...I mean, um, all our jobs? Henny answers all these questions and much more.

[ CMU ]

This GRASP on Robotics talk is from Julie Adams at Oregon State University, on “Towards Adaptive Human-Robot Teams: Workload Estimation.”

The ability for robots, be it a single robot, multiple robots, or a robot swarm, to adapt to the humans with which they are teamed requires algorithms that allow robots to detect human performance in real time. The multidimensional workload algorithm incorporates physiological metrics to estimate overall workload and its components (cognitive, speech, auditory, visual, and physical). The algorithm is sensitive to changes in a human’s individual workload components and overall workload across domains, human-robot teaming relationships (supervisory, peer-based), and individual differences. The algorithm has also been demonstrated to detect shifts in workload in real time in order to adapt the robot’s interaction with the human and autonomously change task responsibilities when the human’s workload is over- or underloaded. Recently, the algorithm was used to analyze post hoc the resulting workload for a single human deploying a heterogeneous robot swarm in an urban environment. Current efforts are focusing on predicting the human’s future workload, recognizing the human’s current tasks, and estimating workload for previously unseen tasks.

[ UPenn ]
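Purely as an illustration of the idea in the talk above, a minimal sketch of a multidimensional workload estimator might look like the following: normalized physiological metrics are combined into the component scores named in the talk (cognitive, speech, auditory, visual, physical), averaged into an overall workload, and compared against overload/underload thresholds to decide whether to shift tasks. The metric names, weights, and thresholds here are hypothetical, not Adams's actual algorithm.

```python
# Illustrative sketch of a multidimensional workload estimator (hypothetical
# weights and thresholds; not the algorithm described in the talk).

COMPONENTS = {
    "cognitive": {"heart_rate_var": 0.6, "pupil_diameter": 0.4},
    "speech":    {"speech_rate": 1.0},
    "auditory":  {"audio_event_rate": 1.0},
    "visual":    {"gaze_entropy": 1.0},
    "physical":  {"accelerometer_rms": 1.0},
}

OVERLOAD, UNDERLOAD = 0.8, 0.2  # assumed thresholds on the 0-1 overall score

def estimate_workload(metrics: dict) -> dict:
    """Combine normalized (0-1) physiological metrics into component and overall workload."""
    components = {
        name: sum(w * metrics.get(m, 0.0) for m, w in weights.items())
        for name, weights in COMPONENTS.items()
    }
    overall = sum(components.values()) / len(components)
    if overall > OVERLOAD:
        action = "shift tasks to the robot"       # human is overloaded
    elif overall < UNDERLOAD:
        action = "hand tasks back to the human"   # human is underloaded
    else:
        action = "keep current task allocation"
    return {"components": components, "overall": overall, "action": action}

print(estimate_workload({"heart_rate_var": 0.9, "pupil_diameter": 0.7, "speech_rate": 0.5}))
```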


Match ID: 91 Score: 17.86 source: spectrum.ieee.org age: 4 days
qualifiers: 17.86 mit

Putin’s grip on regional allies loosens again after Armenia snub
Fri, 25 Nov 2022 14:08:46 GMT

Damaging optics of ‘family’ photo at CSTO summit highlights fragility of Russia’s influence in wake of war in Ukraine

Armenia has asked the French president, Emmanuel Macron, to chair peace talks with Azerbaijan in a fresh challenge to Vladimir Putin’s increasingly loose grip on Russia’s regional allies in the wake of the war in Ukraine.

The snub from a traditional ally to Putin, who had hosted an inconsequential meeting of the warring countries’ leaders last month, comes immediately on the back of his disastrous summit with six former Soviet states.

Continue reading...
Match ID: 92 Score: 17.86 source: www.theguardian.com age: 4 days
qualifiers: 17.86 mit

Can America’s Aging Leadership Deliver the Future?
Fri, 25 Nov 2022 11:00:00 +0000
The Political Scene’s Washington roundtable discusses whether the United States is a gerontocracy, and what that means for the country’s politics.
Match ID: 93 Score: 17.86 source: www.newyorker.com age: 4 days
qualifiers: 17.86 mit

"Tantura" Exposes the Lie at the Heart of Israel's Founding Myth
Fri, 25 Nov 2022 11:00:32 +0000

A new documentary challenges Israel’s narrative about 1948 and the forced displacement of Palestinians.

The post “Tantura” Exposes the Lie at the Heart of Israel’s Founding Myth appeared first on The Intercept.


Match ID: 94 Score: 17.86 source: theintercept.com age: 4 days
qualifiers: 17.86 mit

NASA’s DART Mission Aims to Save the World
Fri, 23 Sep 2022 15:52:53 +0000


Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon.

“Armageddon is big and noisy and stupid and shameless, and it’s going to be huge at the box office,” wrote Jay Carr of the Boston Globe.

Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live.

DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kilograms, hitting at 22,000 km/h, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs.

“Maybe once a century or so, there’ll be an asteroid sizeable enough that we’d like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of planetary defense officer at NASA.

“If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.”

So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone.

The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—near-Earth objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium.

An infographic showing the orientation of Didymos, Dimorphos, DART, and LICIACube: the DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIACube cubesat will fly in formation to take images of the impact. Johns Hopkins APL/NASA

But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test.

NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’s speed by perhaps a few centimeters per second.
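A rough momentum-balance sketch shows why such a small spacecraft can still shift the orbit measurably. Only the spacecraft mass, impact speed, and the asteroid's rough size come from the article; the assumed density and the momentum-enhancement factor beta (extra push from ejecta) are illustrative guesses, and the answer is very sensitive to both.

```python
import math

# Rough momentum-transfer estimate for the DART impact. Asteroid density and
# the beta factor below are assumptions for illustration, not NASA figures.
m_spacecraft = 600.0                    # kg, from the article
v_impact = 22_000 / 3.6                 # m/s (22,000 km/h)

diameter = 160.0                        # m, Dimorphos's rough size from the article
density = 2_000.0                       # kg/m^3, assumed rubble-pile density
volume = (4 / 3) * math.pi * (diameter / 2) ** 3
m_asteroid = density * volume           # ~4e9 kg under these assumptions

for beta in (1.0, 3.0):                 # momentum enhancement from ejecta (assumed)
    dv = beta * m_spacecraft * v_impact / m_asteroid
    print(f"beta={beta:.0f}: delta-v ≈ {dv * 1000:.1f} mm/s")
```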

The DART spacecraft, a 1-meter cube with two long solar panels, is elegantly simple, equipped with a telescope called DRACO, hydrazine maneuvering thrusters, a xenon-fueled ion engine and a navigation system called SMART Nav. It was launched by a SpaceX rocket in November. About 4 hours and 90,000 km before the hoped-for impact, SMART Nav will take over control of the spacecraft, using optical images from the telescope. Didymos, the larger object, should be a point of light by then; Dimorphos, the intended target, will probably not appear as more than one pixel until about 50 minutes before impact. DART will send one image per second back to Earth, but the spacecraft is autonomous; signals from the ground, 38 light-seconds away, would be useless for steering as the ship races in.

The DART spacecraft separated from its SpaceX Falcon 9 launch vehicle, 55 minutes after liftoff from Vandenberg Space Force Base, in California, 24 November 2021. In this image from the rocket, the spacecraft had not yet unfurled its solar panels. NASA

What’s more, nobody knows the shape or consistency of little Dimorphos. Is it a solid boulder or a loose cluster of rubble? Is it smooth or craggy, round or elongated? “We’re trying to hit the center,” says Evan Smith, the deputy mission systems engineer at the Johns Hopkins Applied Physics Laboratory, which is running DART. “We don’t want to overcorrect for some mountain or crater on one side that’s throwing an odd shadow or something.”

So on final approach, DART will cover 800 km without any steering. Thruster firings could blur the last images of Dimorphos’s surface, which scientists want to study. Impact should be imaged from about 50 km away by an Italian-made minisatellite, called LICIACube, which DART released two weeks ago.

“In the minutes following impact, I know everybody is going to be high fiving on the engineering side,” said Tom Statler, DART’s program scientist at NASA, “but I’m going to be imagining all the cool stuff that is actually going on on the asteroid, with a crater being dug and ejecta being blasted off.”

There is, of course, a possibility that DART will miss, in which case there should be enough fuel on board to allow engineers to go after a backup target. But an advantage of the Didymos-Dimorphos pair is that it should help in calculating how much effect the impact had. Telescopes on Earth (plus the Hubble and Webb space telescopes) may struggle to measure infinitesimal changes in the orbit of Dimorphos around the sun; it should be easier to see how much its orbit around Didymos is affected. The simplest measurement may be of the changing brightness of the double asteroid, as Dimorphos moves in front of or behind its partner, perhaps more quickly or slowly than it did before impact.

“We are moving an asteroid,” said Statler. “We are changing the motion of a natural celestial body in space. Humanity’s never done that before.”


Match ID: 95 Score: 15.71 source: spectrum.ieee.org age: 67 days
qualifiers: 9.29 nasa, 3.57 mit, 2.86 planets

NASA’s Artemis I Revives the Moonshot
Sun, 28 Aug 2022 13:00:00 +0000



Update 5 Sept.: For now, NASA’s giant Artemis I remains on the ground after two launch attempts scrubbed by a hydrogen leak and a balky engine sensor. Mission managers say Artemis will fly when everything's ready—but haven't yet specified whether that might be in late September or in mid-October.

“When you look at the rocket, it looks almost retro,” said Bill Nelson, the administrator of NASA. “Looks like we’re looking back toward the Saturn V. But it’s a totally different, new, highly sophisticated—more sophisticated—rocket, and spacecraft.”

Artemis, powered by the Space Launch System rocket, is America’s first attempt to send astronauts to the moon since Apollo 17 in 1972, and technology has taken giant leaps since then. On Artemis I, the first test flight, mission managers say they are taking the SLS, with its uncrewed Orion spacecraft up top, and “stressing it beyond what it is designed for”—the better to ensure safe flights when astronauts make their first landings, currently targeted to begin with Artemis III in 2025.

But Nelson is right: The rocket is retro in many ways, borrowing heavily from the space shuttles America flew for 30 years, and from the Apollo-Saturn V.

Much of Artemis’s hardware is refurbished: Its four main engines, and parts of its two strap-on boosters, all flew before on shuttle missions. The rocket’s apricot color comes from spray-on insulation much like the foam on the shuttle’s external tank. And the large maneuvering engine in Orion’s service module is actually 40 years old—used on 19 space shuttle flights between 1984 and 1992.

“I have a name for missions that use too much new technology—failures.”
—John Casani, NASA

Perhaps more important, the project inherits basic engineering from half a century of spaceflight. Just look at Orion’s crew capsule—a truncated cone, somewhat larger than the Apollo Command Module but conceptually very similar.

Old, of course, does not mean bad. NASA says there is no need to reinvent things engineers got right the first time.

“There are certain fundamental aspects of deep-space exploration that are really independent of money,” says Jim Geffre, Orion vehicle-integration manager at the Johnson Space Center in Houston. “The laws of physics haven’t changed since the 1960s. And capsule shapes happen to be really good for coming back into the atmosphere at Mach 32.”

Roger Launius, who served as NASA’s chief historian from 1990 to 2002 and as a curator at the Smithsonian Institution from then until 2017, tells of a conversation he had with John Casani, a veteran NASA engineer who managed the Voyager, Galileo, and Cassini probes to the outer planets.

“I have a name for missions that use too much new technology,” he recalls Casani saying. “Failures.”

The Artemis I flight is slated for about six weeks. (Apollo 11 lasted eight days.) The ship roughly follows Apollo’s path to the moon’s vicinity, but then puts itself in what NASA calls a distant retrograde orbit. It swoops within 110 kilometers of the lunar surface for a gravity assist, then heads 64,000 km out—taking more than a month but using less fuel than it would in closer orbits. Finally, it comes home, reentering the Earth’s atmosphere at 11 km per second, slowing itself with a heatshield and parachutes, and splashing down in the Pacific not far from San Diego.

If all four, quadruply redundant flight computer modules fail, there is a fifth, entirely separate computer onboard, running different code to get the spacecraft home.

“That extra time in space,” says Geffre, “allows us to operate the systems, give more time in deep space, and all those things that stress it, like radiation and micrometeoroids, thermal environments.”

There are, of course, newer technologies on board. Orion is controlled by two vehicle-management computers, each composed of two flight computer modules (FCMs) to handle guidance, navigation, propulsion, communications, and other systems. The flight control system, Geffre points out, is quad-redundant; if at any point one of the four FCMs disagrees with the others, it will take itself offline and, in a 22-second process, reset itself to make sure its outputs are consistent with the others’. If all four FCMs fail, there is a fifth, entirely separate computer running different code to get the spacecraft home.
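As a toy illustration of that voting-and-isolation scheme (not NASA's flight software), a majority comparison across module outputs with a fallback to the dissimilar backup computer might look like the sketch below; the command strings, data types, and function names are made up, while the module count and the 22-second reset come from the article.

```python
from collections import Counter

RESET_SECONDS = 22  # time an FCM needs to reset and rejoin, per the article

def vote(outputs):
    """Return the majority output among online flight computer modules (FCMs).

    `outputs` maps FCM name -> its computed command; a module whose output
    disagrees with the majority is flagged so it can take itself offline,
    reset, and rejoin once its outputs are consistent again.
    """
    counts = Counter(outputs.values())
    majority_value, _ = counts.most_common(1)[0]
    dissenters = [name for name, value in outputs.items() if value != majority_value]
    return majority_value, dissenters

def flight_command(fcm_outputs, backup_computer):
    if not fcm_outputs:                  # all four FCMs offline:
        return backup_computer()         # hand control to the dissimilar backup
    command, dissenters = vote(fcm_outputs)
    for name in dissenters:
        print(f"{name} disagrees; taking itself offline for ~{RESET_SECONDS} s reset")
    return command

# Hypothetical usage: three modules agree, one disagrees.
outputs = {"FCM-1": "burn_10s", "FCM-2": "burn_10s", "FCM-3": "burn_12s", "FCM-4": "burn_10s"}
print(flight_command(outputs, backup_computer=lambda: "safe_return_trajectory"))
```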

Guidance and navigation, too, have advanced since the sextant used on Apollo. Orion uses a star tracker to determine its attitude, imaging stars and comparing them to an onboard database. And an optical navigation camera shoots Earth and the moon so that guidance software can determine their distance and position and keep the spacecraft on course. NASA says it’s there as backup, able to get Orion to a safe splashdown even if all communication with Earth has been lost.

But even those systems aren’t entirely new. Geffre points out that the guidance system’s architecture is derived from the Boeing 787. Computing power in deep space is limited by cosmic radiation, which can corrupt the output of microprocessors beyond the protection of Earth’s atmosphere and magnetic field.

Beyond that is the inevitable issue of cost. Artemis is a giant project, years behind schedule, started long before NASA began to buy other launches from companies like SpaceX and Rocket Lab. NASA’s inspector general, Paul Martin, testified to Congress in March that the first four Artemis missions would cost US $4.1 billion each—“a price tag that strikes us as unsustainable.”

Launius, for one, rejects the argument that government is inherently wasteful. “Yes, NASA’s had problems in managing programs in the past. Who hasn’t?” he says. He points out that Blue Origin and SpaceX have had plenty of setbacks of their own—they’re just not obliged to be public about them. “I could go on and on. It’s not a government thing per se and it’s not a NASA thing per se.”

So why return to the moon with—please forgive the pun—such a retro rocket? Partly, say those who watch Artemis closely, because it’s become too big to fail, with so much American money and brainpower invested in it. Partly because it turns NASA’s astronauts outward again, exploring instead of maintaining a space station. Partly because new perspectives could come of it. And partly because China and Russia have ambitions in space that threaten America’s.

“Apollo was a demonstration of technological verisimilitude—to the whole world,” says Launius. “And the whole world knew then, as they know today, that the future belongs to the civilization that can master science and technology.”

Update 7 Sept.: Artemis I has been on launchpad 39B, not 39A as previously reported, at Kennedy Space Center.


Match ID: 96 Score: 15.71 source: spectrum.ieee.org age: 93 days
qualifiers: 9.29 nasa, 3.57 mit, 2.86 planets

Inside the Universe Machine: The Webb Space Telescope’s Staggering Vision
Wed, 06 Jul 2022 13:00:00 +0000


For a deep dive into the engineering behind the James Webb Space Telescope, see our collection of posts here.

“Build something that will absolutely, positively work.” This was the mandate from NASA for designing and building the James Webb Space Telescope—at 6.5 meters wide the largest space telescope in history. Last December, JWST launched famously and successfully to its observing station out beyond the moon. And now according to NASA, as soon as next week, the JWST will at long last begin releasing scientific images and data.

Mark Kahan, on JWST’s product integrity team, recalls NASA’s engineering challenge as a call to arms for a worldwide team of thousands that set out to create one of the most ambitious scientific instruments in human history. Kahan—chief electro-optical systems engineer at Mountain View, Calif.–based Synopsys—and many others in JWST’s “pit crew” (as he calls the team) drew hard lessons from three decades ago, having helped repair another world-class space telescope with a debilitating case of flawed optics. Of course the Hubble Space Telescope is in low Earth orbit, and so a special space-shuttle mission to install corrective optics (as happened in 1993) was entirely possible.

Not so with the JWST.

The meticulous care NASA demanded of JWST’s designers is all the more a necessity because Webb is well out of reach of repair crews. Its mission is to study the infrared universe, and that requires shielding the telescope and its sensors from both the heat of sunlight and the infrared glow of Earth. A good place to do that without getting too far from Earth is an empty patch of interplanetary space 1.5 million kilometers away (well beyond the moon’s orbit) near a spot physicists call the second Lagrange point, or L2.

The pit crew’s job was “down at the detail level, error checking every critical aspect of the optical design,” says Kahan. Having learned the hard way from Hubble, the crew insisted that every measurement on Webb’s optics be made in at least two different ways that could be checked and cross-checked. Diagnostics were built into the process, Kahan says, so that “you could look at them to see what to kick” to resolve any discrepancies. Their work had to be done on the ground, but their tests had to assess how the telescope would work in deep space at cryogenic temperatures.

Three New Technologies for the Main Mirror

Superficially, Webb follows the design of all large reflecting telescopes. A big mirror collects light from stars, galaxies, nebulae, planets, comets, and other astronomical objects—and then focuses those photons onto a smaller secondary mirror that sends it to a third mirror that then ultimately directs the light to instruments that record images and spectra.

Webb’s 6.5-meter primary mirror is the first segmented mirror to be launched into space. All the optics had to be made on the ground at room temperature but were deployed in space and operated at 30 to 55 degrees above absolute zero. “We had to develop three new technologies” to make it work, says Lee D. Feinberg of the NASA Goddard Space Flight Center, the optical telescope element manager for Webb for the past 20 years.

The longest wavelength that Hubble has to contend with is 2.5 micrometers, whereas Webb is built to observe infrared light that stretches to 28 μm in wavelength. Compared with Hubble, whose primary mirror is a circle with an area of 4.5 square meters, “[Webb’s primary mirror] had to be 25 square meters,” says Feinberg. Webb also “needed segmented mirrors that were lightweight, and its mass was a huge consideration,” he adds. No single-component mirror that could provide the required resolution would have fit on the Ariane 5 rocket that launched JWST. That meant the mirror would have to be made in pieces, assembled, folded, secured to withstand the stress of launch, then unfolded and deployed in space to create a surface that was within tens of nanometers of the shape specified by the designers.

The James Webb Space Telescope [left] and the Hubble Space Telescope side by side—with Hubble’s 2.4-meter-diameter mirror versus Webb’s array of hexagonal mirrors making a 6.5-meter-diameter light-collecting area. NASA Goddard Space Flight Center

NASA and the U.S. Air Force, which has its own interests in large lightweight space mirrors for surveillance and focusing laser energy, teamed up to develop the technology. The two agencies narrowed eight submitted proposals down to two approaches for building JWST’s mirrors: one based on low-expansion glass made of a mixture of silicon and titanium dioxides similar to that used in Hubble and the other the light but highly toxic metal beryllium. The most crucial issue came down to how well the materials could withstand temperature changes from room temperature on the ground to around 50 K in space. Beryllium won because it could fully release stress after cooling without changing its shape, and it’s not vulnerable to the cracking that can occur in glass. The final beryllium mirror was a 6.5-meter array of 18 hexagonal beryllium mirrors, each weighing about 20 kilograms. The weight per unit area of JWST’s mirror was only 10 percent of that in Hubble. A 100-nanometer layer of pure gold makes the surface reflect 98 percent of incident light from JWST’s main observing band of 0.6 to 28.5 μm. “Pure silver has slightly higher reflectivity than pure gold, but gold is more robust,” says Feinberg. A thin layer of amorphous silica protects the metal film from surface damage.
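That "10 percent" figure is easy to sanity-check from the numbers above, plus one outside assumption for Hubble's primary-mirror mass (roughly 830 kg); the quick comparison below is only a back-of-the-envelope check, not an official specification.

```python
# Areal-density comparison, JWST vs. Hubble primary mirrors.
# Segment count, segment mass, and collecting areas are from the article;
# Hubble's mirror mass (~830 kg) is an outside assumption.

jwst_mass = 18 * 20.0        # kg: 18 beryllium segments at ~20 kg each
jwst_area = 25.0             # m^2
hubble_mass = 830.0          # kg (assumed)
hubble_area = 4.5            # m^2

jwst_density = jwst_mass / jwst_area        # ~14 kg/m^2
hubble_density = hubble_mass / hubble_area  # ~184 kg/m^2
print(f"JWST:   {jwst_density:.0f} kg/m^2")
print(f"Hubble: {hubble_density:.0f} kg/m^2")
print(f"Ratio:  {jwst_density / hubble_density:.0%}")  # on the order of 10 percent
```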

In addition, a wavefront-sensing control system keeps mirror segment surfaces aligned to within tens of nanometers. Built on the ground, the system is expected to keep mirror alignment stabilized throughout the telescope’s operational life. A backplane kept at a temperature of 35 K holds all 2.4 tonnes of the telescope and instruments rock-steady to within 32 nm while maintaining them at cryogenic temperatures during observations.

The JWST backplane, the “spine” that supports the entire hexagonal mirror structure and carries more than 2,400 kg of hardware, is readied for assembly to the rest of the telescope. NASA/Chris Gunn

Hubble’s amazing, long-exposure images of distant galaxies are possible through the use of gyroscopes and reaction wheels. The gyroscopes are used to sense unwanted rotations, and reaction wheels are used to counteract them.

But the gyroscopes used on Hubble have had a bad track record and have had to be replaced repeatedly. Only three of Hubble’s six gyros remain operational today, and NASA has devised plans for operating with one or two gyros at reduced capability. Hubble also includes reaction wheels and magnetic torquers, used to maintain its orientation when needed or to point at different parts of the sky.

Webb uses reaction wheels similarly to turn across the sky, but instead of using mechanical gyros to sense direction, it uses hemispherical resonator gyroscopes, which have no moving parts. Webb also has a small fine-steering mirror in the optical path, which can tilt over an angle of just 5 arc seconds. Those very fine adjustments of the light path into the instruments keep the telescope on target. “It’s a really wonderful way to go,” says Feinberg, adding that it compensates for small amounts of jitter without having to move the whole 6-tonne observatory.

Instruments

Other optics distribute light from the fine-steering mirror among four instruments, two of which can observe simultaneously. Three instruments have sensors that observe wavelengths of 0.6 to 5 μm, which astronomers call the near-infrared. The fourth, called the Mid-InfraRed Instrument (MIRI), observes what astronomers call the mid-infrared spectrum, from 5 to 28.5 μm. Different instruments are needed because sensors and optics have limited wavelength ranges. (Optical engineers may blanch slightly at astronomers’ definitions of what constitutes the near- and mid-infrared wavelength ranges. These two groups simply have differing conventions for labeling the various regimes of the infrared spectrum.)

Mid-infrared wavelengths are crucial for observing young stars and planetary systems and the earliest galaxies, but they also pose some of the biggest engineering challenges. Namely, everything on Earth and planets out to Jupiter glow in the mid-infrared. So for JWST to observe distant astronomical objects, it must avoid recording extraneous mid-infrared noise from all the various sources inside the solar system. “I have spent my whole career building instruments for wavelengths of 5 μm and longer,” says MIRI instrument scientist Alistair Glasse of the Royal Observatory, in Edinburgh. “We’re always struggling against thermal background.”

Mountaintop telescopes can see the near-infrared, but observing the mid-infrared sky requires telescopes in space. However, the thermal radiation from Earth and its atmosphere can cloud their view, and so can the telescopes themselves unless they are cooled far below room temperature. An ample supply of liquid helium and an orbit far from Earth allowed the Spitzer Space Telescope’s primary observing mission to last for five years, but once the last of the cryogenic fluid evaporated in 2009, its observations were limited to wavelengths shorter than 5 μm.

Webb has an elaborate solar shield to block sunlight, and an orbit 1.5 million km from Earth that can keep the telescope to below 55 K, but that’s not good enough for low-noise observations at wavelengths longer than 5 μm. The near-infrared instruments operate at 40 K to minimize thermal noise. But for observations out to 28.5 μm, MIRI uses a specially developed closed-cycle, helium cryocooler to keep MIRI cooled below 7 K. “We want to have sensitivity limited by the shot noise of astronomical sources,” says Glasse. (Shot noise occurs when optical or electrical signals are so feeble that each photon or electron constitutes a detectable peak.) That will make MIRI 1,000 times as sensitive in the mid-infrared as Spitzer.
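Being shot-noise limited has a simple consequence: for a mean of N detected photons the noise is the square root of N, so the signal-to-noise ratio also grows only as the square root of the photon count (and hence of exposure time). A quick sketch, with a made-up photon rate for a faint source:

```python
import math

def shot_noise_snr(photon_rate_per_s: float, exposure_s: float) -> float:
    """SNR of a shot-noise-limited measurement: N / sqrt(N) = sqrt(N)."""
    n_photons = photon_rate_per_s * exposure_s
    return math.sqrt(n_photons)

# Hypothetical faint mid-infrared source: 5 detected photons per second.
for t in (10, 100, 1000):
    print(f"{t:>5} s exposure -> SNR ≈ {shot_noise_snr(5, t):.1f}")
```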

Another challenge is the limited transparency of optical materials in the mid-infrared. “We use reflective optics wherever possible,” says Glasse, but they also pose problems, he adds. “Thermal contraction is a big deal,” he says, because the instrument was made at room temperature but is used at 7 K. To keep thermal changes uniform throughout MIRI, they made the whole structure of gold-coated aluminum lest other metals cause warping.

Detectors are another problem. Webb’s near-infrared sensors use mercury cadmium telluride photodetectors with a resolution of 2,048 x 2,048 pixels. This resolution is widely used at wavelengths below 5 μm, but sensing at MIRI’s longer wavelengths required exotic detectors that are limited to offering only 1,024 x 1,024 pixels.

Glasse says commissioning “has gone incredibly well.” Although some stray light has been detected, he says, “we are fully expecting to meet all our science goals.”

NIRCam Aligns the Whole Telescope

The near-infrared detectors and optical materials used for observing at wavelengths shorter than 5 μm are much more mature than those for the mid-infrared, so the Near-Infrared Camera (NIRCam) does double duty by both recording images and aligning all the optics in the whole telescope. That alignment was the trickiest part of building the instrument, says NIRCam principal investigator Marcia Rieke of the University of Arizona.

Alignment means getting all the light collected by the primary mirror to get to the right place in the final image. That’s crucial for Webb, because it has 18 separate segments that have to overlay their images perfectly in the final image, and because all those segments were built on the ground at room temperature but operate at cryogenic temperatures in space at zero gravity. When NASA recorded a test image of a single star after Webb first opened its primary mirror, it showed 18 separate bright spots, one from each segment. When alignment was completed on 11 March, the image from NIRcam showed a single star with six spikes caused by diffraction.

Even when performing instrumental calibration tasks, JWST couldn’t help but showcase its stunning sensitivity to the infrared sky. The central star is what telescope technicians used to align JWST’s mirrors. But notice the distant galaxies and stars that photobombed the image too! NASA/STScI

Building a separate alignment system would have added to both the weight and cost of Webb, Rieke realized, and in the original 1995 plan for the telescope she proposed designing NIRCam so it could align the telescope optics once it was up in space as well as record images. “The only real compromise was that it required NIRCam to have exquisite image quality,” says Rieke, wryly. From a scientific point, she adds, using the instrument to align the telescope optics “is great because you know you’re going to have good image quality and it’s going to be aligned with you.” Alignment might be just a tiny bit off for other instruments. In the end, it took a team at Lockheed Martin to develop the computational tools to account for all the elements of thermal expansion.

Escalating costs and delays had troubled Webb for years. But for Feinberg, “commissioning has been a magical five months.” It began with the sight of sunlight hitting the mirrors. The segmented mirror deployed smoothly, and after the near-infrared cameras cooled, the mirrors focused one star into 18 spots, then aligned them to put the spots on top of each other. “Everything had to work to get it to [focus] that well,” he says. It’s been an intense time, but for Feinberg, a veteran of the Hubble repair mission, commissioning Webb was “a piece of cake.”

NASA announced that between May 23rd and 25th, one segment of the primary mirror had been dinged by a micrometeorite bigger than the agency had expected when it analyzed the potential results of such impacts. “Things do degrade over time,” Feinberg said. But he added that Webb had been engineered to minimize damage, and NASA said the event had not affected Webb’s operation schedule.

Corrections 26-28 July 2022: The story was updated a) to reflect the fact that the Lagrange point L2 where Webb now orbits is not that of the “Earth-moon system” (as the story had originally reported) but rather the Earth-sun system, and b) to correct misstatements in the original posting about Webb’s hardware for controlling its orientation.

Corrections 12 Aug. 2022: Alistair Glasse's name was incorrectly spelled in a previous version of this story, as was NIRCam (which we'd spelled as NIRcam); Webb's tertiary mirror (we'd originally reported only its primary and secondary mirrors) was also called out in this version.

This article appears in the September 2022 print issue as “Inside the Universe Machine.”


Match ID: 97 Score: 15.71 source: spectrum.ieee.org age: 146 days
qualifiers: 9.29 nasa, 3.57 mit, 2.86 planets

U.N. Kills Any Plans to Use Mercury as a Rocket Propellant
Tue, 19 Apr 2022 18:00:01 +0000


A recent United Nations provision has banned the use of mercury in spacecraft propellant. Although no private company has actually used mercury propellant in a launched spacecraft, the possibility was alarming enough—and the dangers extreme enough—that the ban was enacted just a few years after one U.S.-based startup began toying with the idea. Had the company gone through with its intention to sell mercury propellant thrusters to some of the companies building massive satellite constellations over the coming decade, it would have resulted in Earth’s upper atmosphere being laced with mercury.

Mercury is a neurotoxin. It’s also bio-accumulative, which means it’s absorbed by the body at a faster rate than the body can remove it. The most common way to get mercury poisoning is through eating contaminated seafood. “It’s pretty nasty,” says Michael Bender, the international coordinator of the Zero Mercury Working Group (ZMWG). “Which is why this is one of the very few instances where the governments of the world came together pretty much unanimously and ratified a treaty.”

Bender is referring to the 2013 Minamata Convention on Mercury, a U.N. treaty named for a city in Japan whose residents suffered from mercury poisoning from a nearby chemical factory for decades. Because mercury pollutants easily find their way into the oceans and the atmosphere, it’s virtually impossible for one country to prevent mercury poisoning within its borders. “Mercury—it’s an intercontinental pollutant,” Bender says. “So it required a global treaty.”

Today, the only remaining permitted uses for mercury are in fluorescent lighting and dental amalgams, and even those are being phased out. Mercury is otherwise found as a by-product of other processes, such as the burning of coal. But then a company hit on the idea to use it as a spacecraft propellant.

In 2018, an employee at Apollo Fusion approached the Public Employees for Environmental Responsibility (PEER), a nonprofit that investigates environmental misconduct in the United States. The employee—who has remained anonymous—alleged that the Mountain View, Calif.–based space startup was planning to build and sell thrusters that used mercury propellant to multiple companies building low Earth orbit (LEO) satellite constellations.

Four industry insiders ultimately confirmed that Apollo Fusion was building thrusters that utilized mercury propellant. Apollo Fusion, which was acquired by rocket manufacturing startup Astra in June 2021, insisted that the composition of its propellant mixture should be considered confidential information. The company withdrew its plans for a mercury propellant in April 2021. Astra declined to respond to a request for comment for this story.

Apollo Fusion wasn’t the first to consider using mercury as a propellant. NASA originally tested it in the 1960s and 1970s with two Space Electric Propulsion Tests (SERT), one of which was sent into orbit in 1970. Although the tests demonstrated mercury’s effectiveness as a propellant, the same concerns over the element’s toxicity that have seen it banned in many other industries halted its use by the space agency as well.

“I think it just sort of fell off a lot of folks’ radars,” says Kevin Bell, the staff counsel for PEER. “And then somebody just resurrected the research on it and said, ‘Hey, other than the environmental impact, this was a pretty good idea.’ It would give you a competitive advantage in what I imagine is a pretty tight, competitive market.”

That’s presumably why Apollo Fusion was keen on using it in their thrusters. Apollo Fusion as a startup emerged more or less simultaneously with the rise of massive LEO constellations that use hundreds or thousands of satellites in orbits below 2,000 kilometers to provide continual low-latency coverage. Finding a slightly cheaper, more efficient propellant for one large geostationary satellite doesn’t move the needle much. But doing the same for thousands of satellites that need to be replaced every several years? That’s a much more noticeable discount.

Were it not for mercury’s extreme toxicity, it would actually make an extremely attractive propellant. Apollo Fusion wanted to use a type of ion thruster called a Hall-effect thruster. Ion thrusters strip electrons from the atoms that make up a liquid or gaseous propellant, and then an electric field pushes the resultant ions away from the spacecraft, generating a modest thrust in the opposite direction. The physics of rocket engines means that the performance of these engines increases with the mass of the ion that you can accelerate.

Mercury is heavier than either xenon or krypton, the most commonly used propellants, meaning more thrust per expelled ion. It’s also liquid at room temperature, making it efficient to store and use. And it’s cheap—there’s not a lot of competition with anyone looking to buy mercury.
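The physics is easy to sketch: a singly charged ion of mass m accelerated through a voltage V leaves at v = sqrt(2qV/m), so the momentum it carries away, m·v = sqrt(2qVm), grows with the square root of its mass. The comparison below uses standard atomic masses and an assumed 300-volt accelerating potential, purely for illustration; real Hall-effect thrusters are more complicated than this single-ion picture.

```python
import math

E_CHARGE = 1.602e-19  # C, elementary charge
AMU = 1.6605e-27      # kg, atomic mass unit

# Atomic masses in amu (standard values); the accelerating voltage is an assumption.
propellants = {"krypton": 83.8, "xenon": 131.3, "mercury": 200.6}
voltage = 300.0  # V, hypothetical discharge voltage

for name, amu in propellants.items():
    m = amu * AMU
    v_exhaust = math.sqrt(2 * E_CHARGE * voltage / m)  # m/s for a singly charged ion
    momentum = m * v_exhaust                            # kg*m/s per ion = sqrt(2*q*V*m)
    print(f"{name:8s}: v ≈ {v_exhaust / 1000:5.1f} km/s, momentum per ion ≈ {momentum:.2e} kg·m/s")
```

Run with these assumptions, the heavier mercury ion leaves more slowly but carries the most momentum per ion, which is the sense in which it offers "more thrust per expelled ion."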

Bender says that ZMWG, alongside PEER, caught wind of Apollo Fusion marketing its mercury-based thrusters to at least three companies deploying LEO constellations—One Web, Planet Labs, and SpaceX. Planet Labs, an Earth-imaging company, has at least 200 CubeSats in low Earth orbit. One Web and SpaceX, both wireless-communication providers, have many more. One Web plans to have nearly 650 satellites in orbit by the end of 2022. SpaceX already has nearly 1,500 active satellites aloft in its Starlink constellation, with an eye toward deploying as many as 30,000 satellites before its constellation is complete. Other constellations, like Amazon’s Kuiper constellation, are also planning to deploy thousands of satellites.

In 2019, a group of researchers in Italy and the United States estimated how much of the mercury used in spacecraft propellant might find its way back into Earth’s atmosphere. They figured that a hypothetical LEO constellation of 2,000 satellites, each carrying 100 kilograms of propellant, would emit 20 tonnes of mercury every year over the course of a 10-year life span. Three quarters of that mercury, the researchers suggested, would eventually wind up in the oceans.
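The arithmetic behind that estimate is simply a restatement of the figures above (the 75 percent ocean fraction is the researchers' own), as the short sketch below shows.

```python
# Restating the researchers' scenario: 2,000 satellites, 100 kg of mercury
# propellant each, fully expended over a 10-year constellation life span.
satellites = 2_000
propellant_per_sat_kg = 100.0
lifespan_years = 10
ocean_fraction = 0.75   # share the researchers expect to end up in the oceans

total_tonnes = satellites * propellant_per_sat_kg / 1_000
per_year = total_tonnes / lifespan_years
print(f"Emitted:   {per_year:.0f} tonnes of mercury per year")
print(f"To oceans: {ocean_fraction * per_year:.0f} tonnes per year")
```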

That amounts to 1 percent of global mercury emissions from a constellation only a fraction of the size of the one planned by SpaceX alone. And if multiple constellations adopted the technology, they would represent a significant percentage of global mercury emissions—especially, the researchers warned, as other uses of mercury are phased out as planned in the years ahead.

Fortunately, it’s unlikely that any mercury propellant thrusters will even get off the ground. Prior to the fourth meeting of the Minamata Convention, Canada, the European Union, and Norway highlighted the dangers of mercury propellant, alongside ZMWG. The provision to ban mercury usage in satellites was passed on 26 March 2022.

The question now is enforcement. “Obviously, there aren’t any U.N. peacekeepers going into space to shoot down” mercury-based satellites, says Bell. But the 137 countries, including the United States, who are party to the convention have pledged to adhere to its provisions—including the propellant ban.

The United States is notable in that list because as Bender explains, it did not ratify the Minamata Convention via the U.S. Senate but instead deposited with the U.N. an instrument of acceptance. In a 7 November 2013 statement (about one month after the original Minamata Convention was adopted), the U.S. State Department said the country would be able to fulfill its obligations “under existing legislative and regulatory authority.”

Bender says the difference is “weedy” but that this appears to mean that the U.S. government has agreed to adhere to the Minamata Convention’s provisions because it already has similar laws on the books. Except there is still no existing U.S. law or regulation banning mercury propellant. For Bender, that creates some uncertainty around compliance when the provision goes into force in 2025.

Still, with a U.S. company being the first startup to toy with mercury propellant, it might be ideal to have a stronger U.S. ratification of the Minamata Convention before another company hits on the same idea. “There will always be market incentives to cut corners and do something more dangerously,” Bell says.

Update 19 April 2022: In an email, a spokesperson for Astra stated that the company's propulsion system, the Astra Spacecraft Engine, does not use mercury. The spokesperson also stated that Astra has no plans to use mercury propellant and that the company does not have anything in orbit that uses mercury.

Updated 20 April 2022 to clarify that Apollo Fusion was building thrusters that used mercury, not that they had actually used them.


Match ID: 98 Score: 15.71 source: spectrum.ieee.org age: 224 days
qualifiers: 9.29 nasa, 3.57 mit, 2.86 planets

Meet the Lunar Gateway’s Robot Caretakers
Thu, 07 Apr 2022 18:40:09 +0000


An integral part of NASA’s plan to return astronauts to the moon this decade is the Lunar Gateway, a space station that will be humanity’s first permanent outpost outside of low Earth orbit. Gateway, a partnership between NASA, the Canadian Space Agency (CSA), the European Space Agency (ESA), and the Japan Aerospace Exploration Agency (JAXA), is intended to support operations on the lunar surface while also serving as a staging point for exploration to Mars.

Gateway will be significantly smaller than the International Space Station (ISS), initially consisting of just two modules with additional modules to be added over time. The first pieces of the station to reach lunar orbit will be the Power and Propulsion Element (PPE) attached to the Habitation and Logistics Outpost (HALO), scheduled to launch together on a SpaceX Falcon Heavy rocket in November 2024. The relatively small size of Gateway is possible because the station won’t be crewed most of the time—astronauts may pass through for a few weeks, but the expectation is that Gateway will spend about 11 months out of the year without anyone on board.


This presents some unique challenges for Gateway. On the ISS, astronauts spend a substantial amount of time on station upkeep, but Gateway will have to keep itself functional for extended periods without any direct human assistance.

“The things that the crew does on the International Space Station will need to be handled by Gateway on its own,” explains Julia Badger, Gateway autonomy system manager at NASA’s Johnson Space Center. “There’s also a big difference in the operational paradigm. Right now, ISS has a mission control that’s full time. With Gateway, we’re eventually expecting to have just 8 hours a week of ground operations.” The hundreds of commands that the ISS receives every day to keep it running will still be necessary on Gateway—they’ll just have to come from Gateway itself, rather than from humans back on Earth.

“It’s a new way of thinking compared to ISS. If something breaks on Gateway, we either have to be able to live with it for a certain amount of time, or we’ve got to have the ability to remotely or autonomously fix it.” —Julia Badger, NASA JSC

To make this happen, NASA is developing a vehicle system manager, or VSM, that will act like the omnipresent computer system found on virtually every science-fiction starship. The VSM will autonomously manage all of Gateway’s functionality, taking care of any problems that come up, to the extent that they can be managed with clever software and occasional input from a distant human. “It’s a new way of thinking compared to ISS,” explains Badger. “If something breaks on Gateway, we either have to be able to live with it for a certain amount of time, or we’ve got to have the ability to remotely or autonomously fix it.”
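For a sense of what that operational paradigm implies, here is a minimal, hypothetical caretaking loop: faults with a known onboard recovery procedure are handled autonomously, tolerable ones wait for the limited weekly ground contact, and anything else drops the station into a safe configuration. None of the fault names or procedures are NASA's; this is only a sketch of the decision logic described above, not the actual VSM design.

```python
# Illustrative caretaking loop for an uncrewed station (not NASA's actual VSM).
# Faults with a known onboard procedure are handled autonomously; anything else
# is either tolerable until the next ground contact or escalated.

ONBOARD_PROCEDURES = {          # hypothetical fault -> recovery action
    "comm_string_degraded": "switch_to_backup_transponder",
    "pump_A_stalled": "activate_pump_B",
}
TOLERABLE_FAULTS = {"science_payload_offline"}   # can wait for the ground pass

def handle_fault(fault: str, ground_queue: list) -> str:
    if fault in ONBOARD_PROCEDURES:
        return f"executing {ONBOARD_PROCEDURES[fault]}"       # fix it autonomously
    if fault in TOLERABLE_FAULTS:
        ground_queue.append(fault)                             # report during the ~8 h/week of ops
        return "logged; waiting for next ground contact"
    ground_queue.append(fault)
    return "entering safe configuration until ground review"

queue: list = []
for f in ("pump_A_stalled", "science_payload_offline", "unknown_thermal_alarm"):
    print(f, "->", handle_fault(f, queue))
print("ground queue:", queue)
```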

While Gateway itself can be thought of as a robot of sorts, there’s a limited amount that can be reasonably and efficiently done through dedicated automated systems, and NASA had to find a compromise between redundancy and both complexity and mass. For example, there was some discussion about whether Gateway’s hatches should open and close on their own, and NASA ultimately decided to leave the hatches manually operated. But that doesn’t necessarily mean that Gateway won’t be able to open its hatches without human assistance; it just means that there will be a need for robotic hands rather than human ones.

“I hope eventually we have robots up there that can open the hatches,” Badger tells us. She explains that Gateway is being designed with potential intravehicular robots (IVRs) in mind, including things like adding visual markers to important locations, placing convenient charging ports around the station interior, and designing the hatches such that the force required to open them is compatible with the capabilities of robotic limbs. Parts of Gateway’s systems may be modular as well, able to be removed and replaced by robots if necessary. “What we’re trying to do,” Badger says, “is make smart choices about Gateway’s design that don’t add a lot of mass but that will make it easier for a robot to work within the station.”

Robonaut at its test station in front of a manipulation task board on the ISS. JSC/NASA

NASA already has a substantial amount of experience with IVR. Robonaut 2, a full-size humanoid robot, spent several years on the International Space Station starting in 2011, learning how to perform tasks that would otherwise have to be done by human astronauts. More recently, a trio of cubical, toaster-size, free-flying robots called Astrobees have taken up residence on the ISS, where they’ve been experimenting with autonomous sensing and navigation. A NASA project called ISAAC (Integrated System for Autonomous and Adaptive Caretaking) is currently exploring how robots like Astrobee could be used for a variety of tasks on Gateway, from monitoring station health to autonomously transferring cargo, although at least in the near term, in Badger’s opinion, “maintenance of Gateway, like using robots that can switch out broken components, is going to be more important than logistics types of tasks.”

Badger believes that a combination of a generalized mobile manipulator like Robonaut 2 and a free flyer like Astrobee make for a good team, and this combination is currently the general concept for Gateway IVR. This is not to say that the intravehicular robots that end up on Gateway will look like the robots that have been working on the ISS, but they’ll be inspired by them, and will leverage all of the experience that NASA has gained with its robots on ISS so far. It might also be useful to have a limited number of specialized robots, Badger says. “For example, if there was a reason to get behind a rack, you may want a snake-type of robot for that.”

An Astrobee robot (this one is named Bumble) on the ISS. JSC/NASA

While NASA is actively preparing for intravehicular robots on Gateway, such robots do not yet exist, and the agency may not be building these robots itself, instead relying on industry partners to deliver designs that meet NASA’s requirements. At launch, and likely for the first several years at least, Gateway will have to take care of itself without internal robotic assistants. However, one of the goals of Gateway is to operate itself completely autonomously for up to three weeks without any contact with Earth at all, mimicking the three-week solar conjunction between Earth and Mars where the sun blocks any communications between the two planets. “I think that we will get IVR on board,” Badger says. “If we really want Gateway to be able to take care of itself for 21 days, IVR is going to be a very important part of that. And having a robot is absolutely something that I think is going to be necessary as we move on to Mars.”

“Having a robot is absolutely something that I think is going to be necessary as we move on to Mars.” —Julia Badger, NASA JSC

Intravehicular robots are just half of the robotic team that will be necessary to keep Gateway running autonomously long-term. Space stations rely on complex external infrastructure for power, propulsion, thermal control, and much more. Since 2001, the ISS has been home to Canadarm2, a 17.6-meter robotic arm, which is able to move around the station to grasp and manipulate objects while under human control from either inside the station or from the ground.

The Canadian Space Agency, in partnership with space technology company MDA, is developing a new robotic-arm system for Gateway, called Canadarm3, scheduled to launch in 2027. Canadarm3 will include an 8.5-meter-long arm for grappling spacecraft and moving large objects, as well as a smaller, more dexterous robotic arm that can be used for delicate tasks. The smaller arm can even repair the larger arm if necessary. But what really sets Canadarm3 apart from its predecessors is how it’s controlled, according to Daniel Rey, Gateway chief engineer and systems manager at CSA. “One of the very novel things about Canadarm3 is its ability to operate autonomously, without any crew required,” Rey says. This capability relies on a new generation of software and hardware that gives the arm a sense of touch as well as the ability to react to its environment without direct human supervision.

“With Canadarm3, we realize that if we want to get ready for Mars, more autonomy will be required.” —Daniel Rey, CSA

Even though Gateway will be a thousand times farther away from Earth than the ISS, Rey explains that the added distance (about 400,000 kilometers) isn’t what really necessitates Canadarm3’s added autonomy. “Surprisingly, the location of Gateway in its orbit around the moon has a time delay to Earth that is not all that different from the time delay in low Earth orbit when you factor in various ground stations that signals have to pass through,” says Rey. “With Canadarm3, we realize that if we want to get ready for Mars, where that will no longer be the case, more autonomy will be required.”
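
For a rough sense of the scales involved, here is a small sketch of one-way, straight-line light delays (my own arithmetic, not CSA figures): a signal covering lunar distance takes a bit over a second, while Rey’s point is that a low-Earth-orbit link routed through relay satellites and ground stations accumulates delays of its own, narrowing the gap.

SPEED_OF_LIGHT_KM_S = 299_792  # kilometers per second

def one_way_light_delay_s(distance_km: float) -> float:
    """Straight-line propagation delay for a radio signal."""
    return distance_km / SPEED_OF_LIGHT_KM_S

# ISS altitude (~400 km) versus Gateway's roughly 400,000 km lunar distance:
print(f"LEO, direct to a ground station: {one_way_light_delay_s(400) * 1000:.1f} ms")
print(f"Gateway, near the moon: {one_way_light_delay_s(400_000):.2f} s")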

Canadarm3’s autonomous tasks on Gateway will include external inspection, unloading logistics vehicles, deploying science payloads, and repairing Gateway by swapping damaged components with spares. Rey tells us that there will also be a science logistics airlock, with a moving table that can be used to pass equipment in and out of Gateway. “It’ll be possible to deploy external science, or to bring external systems inside for repair, and for future internal robotic systems to cooperate with Canadarm3. I think that’ll be a really exciting thing to see.”

Even though it’s going to take a couple of extra years for Gateway’s robotic residents to arrive, the station will be operating mostly autonomously (by necessity) as soon as the Power and Propulsion Element and the Habitation and Logistics Outpost begin their journey to lunar orbit in November 2024. Several science payloads will be along for the ride, including heliophysics and space weather experiments.

Gateway itself, though, is arguably the most important experiment of all. Its autonomous systems, whether embodied in internal and external robots or not, will be undergoing continual testing, and Gateway will need to prove itself before we’re ready to trust its technology to take us into deep space. In addition to being able to operate for 21 days without communications, one of Gateway’s eventual requirements is to be able to function for up to three years without any crew visits. This is the level of autonomy and reliability that we’ll need to be prepared for our exploration of Mars, and beyond.


Match ID: 99 Score: 15.71 source: spectrum.ieee.org age: 236 days
qualifiers: 9.29 nasa, 3.57 mit, 2.86 planets

The big picture: Bruno Barbey captures life on the road in 1960s Palermo
Sun, 27 Nov 2022 07:00:14 GMT

The Magnum photographer’s image of a family in Sicily recalls Fellini and Visconti in its romantic depiction of everyday Italian life

Bruno Barbey chanced upon this family defying gravity on their dad’s scooter in Palermo in 1963. The French-Moroccan photographer had been travelling in Italy for a couple of years by then, restless for exactly this kind of image, with its seductive mix of humour and authenticity. Has there ever been a better articulation of contrasting roles in the patriarchal family? Father sitting comfortably in his jacket and cap and smiling for the camera, while behind him his possibly pregnant wife sees trouble ahead, as she and their three kids and their big checked bag compete for precarious discomfort.

Barbey, then 22, had gone to Italy to try to find pictures that captured “a national spirit” as the country sought to rediscover the dolce vita in cities still recovering from war. He travelled in an old VW van, and in Palermo in particular he located scenes that might have been choreographed for the working-class heroes of the Italian neorealist films, the self-absorbed dreamers of Fellini and Visconti (The Leopard, the latter’s Hollywood epic set in Sicily, was released in the same year). Barbey’s camera with its wide-angle lens picked up the detail of vigorous crowd scenes among street children and barflies and religious processions. His book, The Italians, now republished, is a time capsule of that already disappearing black-and-white world of priests and mafiosi and nightclub girls and nuns.

Les Italiens (French edition) by Bruno Barbey is republished by delpire & co

Continue reading...
Match ID: 100 Score: 15.00 source: www.theguardian.com age: 2 days
qualifiers: 15.00 aliens

How artificial intelligence platforms can discriminate against women, older people, and graduates of non-elite colleges in hiring processes
Thu, 24 Nov 2022 14:53:19 +0000

HR professionals suspect that automated systems wrongly reject candidates, reproducing the prejudices of the job market.

The post How artificial intelligence platforms can discriminate against women, older people, and graduates of non-elite colleges in hiring processes appeared first on The Intercept.


Match ID: 101 Score: 14.29 source: theintercept.com age: 5 days
qualifiers: 14.29 mit

Tickle Pill Bug Toes With These Haptic Microfingers
Thu, 24 Nov 2022 14:00:00 +0000


All things considered, we humans are kind of big, which is very limiting to how we can comfortably interact with the world. The practical effect of this is that we tend to prioritize things that we can see and touch and otherwise directly experience, even if those things are only a small part of the world in which we live. A recent study conservatively estimates that there are 2.5 million ants for every one human on Earth. And that’s just ants. There are probably something like 7 million different species of terrestrial insects, and humans have only even noticed like 10 percent of them. The result of this disconnect is that when (for example) insect populations around the world start to crater, it takes us much longer to first notice, care, and act.

To give the small scale the attention that it deserves, we need a way of interacting with it. In a paper recently published in Scientific Reports, roboticists from Ritsumeikan University in Japan demonstrate a haptic teleoperation system that connects a human hand on one end with microfingers on the other, letting the user feel what it’s like to give a pill bug a tummy rub.


At top, a microfinger showing the pneumatic balloon actuator (PBA) and liquid metal strain gauge. At bottom left, when the PBA is deflated, the microfinger is straight. At bottom right, inflating the PBA causes the finger to bend downwards.

These microfingers are just 12 millimeters long, 3 mm wide, and 490 microns (μm) thick. Inside of each microfinger is a pneumatic balloon actuator, which is just a hollow channel that can be pressurized with air. Since the channel is on the top of the microfinger, when the channel is inflated, it bulges upward, causing the microfinger to bend down. When pressure is reduced, the microfinger returns to its original position. Separate channels in the microfinger are filled with liquid metal, and as the microfinger bends, the channels elongate, thinning out the metal. By measuring the resistance of the metal, you can tell how much the finger is being bent. This combination of actuation and force sensing means that a human-size haptic system can be used as a force feedback interface: As you move your fingers, the microfingers will move, and forces can be transmitted back to you, allowing you to feel what the microfingers feel.
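
As a rough illustration of the sensing principle described above (a sketch with made-up dimensions, not figures from the paper), a liquid-metal channel that keeps a constant volume as it stretches has resistance R = ρL/A = ρL²/V, so resistance grows with the square of channel length, and a resistance reading can be inverted to estimate how far the finger has bent:

RHO_LIQUID_METAL = 2.9e-7  # ohm-meters; typical of gallium-indium alloys (assumed value)

def channel_resistance(length_m: float, volume_m3: float, rho: float = RHO_LIQUID_METAL) -> float:
    """Resistance of a constant-volume liquid-metal channel: R = rho * L^2 / V."""
    return rho * length_m ** 2 / volume_m3

def stretch_from_resistance(r_measured: float, r_rest: float) -> float:
    """Invert R proportional to L^2 to estimate fractional elongation."""
    return (r_measured / r_rest) ** 0.5 - 1.0

# Hypothetical channel roughly at microfinger scale: 10 mm long, 50 um x 20 um cross section.
L0 = 10e-3
V = L0 * 50e-6 * 20e-6
r_rest = channel_resistance(L0, V)
r_bent = channel_resistance(1.05 * L0, V)  # bending stretches the channel by 5 percent
print(f"rest: {r_rest:.2f} ohm, bent: {r_bent:.2f} ohm, inferred stretch: {stretch_from_resistance(r_bent, r_rest):.2%}")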

The microfingers (left) can be connected to a haptic feedback and control system for use by a human.

Fans of the golden age of science fiction will recognize this system as a version of Waldo F. Jones' Synchronous Reduplicating Pantograph, although the concept has even deeper roots in sci-fi:

The thought suddenly struck me: I can make micro hands for my little hands. I can make the same gloves for them as I did for my living hands, use the same system to connect them to the handles ten times smaller than my micro arms, and then ... I will have real micro arms, they will chop my movements two hundred times. With these hands I will burst into such a smallness of life that they have only seen, but where no one else has disposed of their own hands. And I got to work.

With their very real and not science fiction system, the researchers were able to successfully determine that pill bugs can exert about 10 micro-Newtons of force through their legs, which is about the same as what has been estimated using other techniques. This is just a proof of concept study, but I’m excited about the potential here, because there is still so much of the world that humans haven’t yet been able to really touch. And besides just insect-scale tickling, there’s a broader practical context here around the development of insect-scale robots. Insects have had insect-scale sensing and mobility and whatnot pretty well figured out for a long time now, and if we’re going to make robots that can do insect-like things, we’re going to do it by learning as much as we can directly from insects themselves.

“With our strain-sensing microfinger, we were able to directly measure the pushing motion and force of the legs and torso of a pill bug—something that has been impossible to achieve previously. We anticipate that our results will lead to further technological development for microfinger-insect interactions, leading to human-environment interactions at much smaller scales.”
—Satoshi Konishi, Ritsumeikan University

I should also be clear that despite the headline, I don’t know if it’s actually possible to tickle a bug. A Google search for “are insects ticklish” turns up one single result, from someone asking this question on the “StonerThoughts” subreddit. There is some suggestion that tickling, or more specifically the kind of tickling that is surprising and can lead to laughter, called gargalesis, has evolved in social mammals to promote bonding. The other kind of tickling is called knismesis, which is more of an unpleasant sensation that causes irritation or distress. You know, like the feeling of a bug crawling on you. It seems plausible (to me, anyway) that bugs may experience some kind of knismesis—but I think that someone needs to get in there and do some science, especially now that we have the tools to make it happen.
Match ID: 102 Score: 14.29 source: spectrum.ieee.org age: 5 days
qualifiers: 14.29 mit

The Data Guy Who Got the Midterms Right
Thu, 24 Nov 2022 03:43:32 +0000

Tom Bonier of TargetSmart on how Republican polls were able to skew media predictions.

The post The Data Guy Who Got the Midterms Right appeared first on The Intercept.


Match ID: 103 Score: 14.29 source: theintercept.com age: 5 days
qualifiers: 14.29 mit

Video Friday: Little Robot, Big Stairs
Fri, 18 Nov 2022 16:43:36 +0000


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

Researchers at Carnegie Mellon University’s School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its height; traverse rocky, slippery, uneven, steep and varied terrain; walk across gaps; scale rocks and curbs, and even operate in the dark.

[ CMU ]

This robot is designed as a preliminary platform for humanoid robot research. The platform will be further extended with soles as well as upper limbs. In this video, the current lower limb version of the platform shows its capability in traversing uneven terrains without an active or passive ankle joint. The underactuation nature of the robot system has been well addressed with our locomotion-control framework, which also provides a new perspective on the leg design of bipedal robots.

[ CLEAR Lab ]

Thanks, Zejun!

Inbiodroid is a startup “dedicated to the development of fully immersive telepresence technologies that create a deeper connection between people and their environment.” Hot off the ANA Avatar XPrize competition, they’re doing a Kickstarter to fund the next generation of telepresence robots.

[ Kickstarter ] via [ Inbiodroid ]

Thanks, Alejandro!

A robot that can feel what a therapist feels when treating a patient, that can adjust the intensity of rehabilitation exercises at any time according to the patient's abilities and needs, and that can thus go on for hours without getting tired: It seems like fiction, and yet researchers from the Vrije Universiteit Brussel and Imec have now finished a prototype that unites all these skills in one robot.

[ VUB ]

Thanks, Bram!

Self-driving bikes present some special challenges, as this excellent video graphically demonstrates.

[ Paper ]

Pickle robots unload trucks. This is a short overview of the Pickle Robot Unload System in action at the end of October 2022—autonomously picking floor-loaded freight to unload a trailer. As a robotic system built on AI and advanced sensors, the system gets better and faster all the time.

[ Pickle ]

Learning agile skills can be challenging with reward shaping. Imitation learning provides an alternative solution by assuming access to decent expert references. However, such experts are not always available. We propose Wasserstein Adversarial Skill Imitation (WASABI), which acquires agile behaviors from partial and potentially physically incompatible demonstrations. In our work, Solo, a quadruped robot, learns highly dynamic skills (for example, backflips) from only handheld human demonstrations.

WASABI!

[ WASABI ]

NASA and the European Space Agency are developing plans for one of the most ambitious campaigns ever attempted in space: bringing the first samples of Mars material safely back to Earth for detailed study. The diverse set of scientifically curated samples now being collected by NASA’s Mars Perseverance rover could help scientists answer the question of whether ancient life ever arose on the Red Planet.

I thought I was promised some helicopters?

[ NASA ]

A Sanctuary general-purpose robot picks up and sorts medicine pills.

Remotely controlled, if that wasn’t clear.

[ Sanctuary ]

I don’t know what’s going on here, but it scares me.

[ KIMLAB ]

The Canadian Space Agency plans to send a rover to the moon as early as 2026 to explore a polar region. The mission will demonstrate key technologies and accomplish meaningful science. Its objectives are to gather imagery, measurements, and data on the surface of the moon, as well as to have the rover survive an entire night on the moon. Lunar nights, which last about 14 Earth days, are extremely cold and dark, posing a significant technological challenge.

[ CSA ]

Covariant Robotic Induction automates previously manual induction processes. This video shows the Covariant Robotic Induction solution picking a wide range of item types from totes, scanning bar codes, and inducting items onto a unit sorter. Note the robot’s ability to effectively handle items that are traditionally difficult to pick, such as transparent polybagged apparel and small, oddly shaped health and beauty items, and place them precisely onto individual trays.

[ Covariant ]

The solution will integrate Boston Dynamics’ Spot robot; the ExynPak, powered by ExynAI; and the Trimble X7 total station. It will enable fully autonomous missions inside complex and dynamic construction environments, which can result in consistent and precise reality capture for production and quality-control workflows.

[ Exyn ]

Our most advanced programmable robot yet is back and better than ever. Sphero RVR+ includes an advanced gearbox to improve torque and payload capacity; enhanced sensors, including an improved color sensor; and an improved rechargeable and swappable battery.

$279.

[ Sphero ]

I’m glad Starship is taking this seriously, although it’s hard to know from this video how well the robots behave when conditions are less favorable.

[ Starship ]

Complexity, cost, and power requirements for the actuation of individual robots can play a large factor in limiting the size of robotic swarms. Here we present PCBot, a minimalist robot that can precisely move on an orbital shake table using a bi-stable solenoid actuator built directly into its PCB. This allows the actuator to be built as part of the automated PCB manufacturing process, greatly reducing the impact it has on manual assembly.

[ Paper ]

Drone-racing world champion Thomas Bitmatta designed an indoor drone-racing track for ETH Zurich’s autonomous high-speed racing drones, and in something like half an hour, the autonomous drones were able to master the track at superhuman speeds (with the aid of a motion-capture system).

[ ETH RSL ] via [ BMS Racing ]

Thanks, Paul!

Moravec’s paradox is the observation that many things that are difficult for robots to do come easily to humans, and vice versa. Stanford University professor Chelsea Finn has been tasked to explain this concept to 5 different people: a child, a teen, a college student, a grad student, and an expert.

[ Wired ]

Roberto Calandra from Meta AI gives a talk about “Perceiving, Understanding, and Interacting Through Touch.”

[ UPenn ]

AI advancements have been motivated and inspired by human intelligence for decades. How can we use AI to expand our knowledge and understanding of the world and ourselves? How can we leverage AI to enrich our lives? In his Tanner Lecture, Eric Horvitz, chief science officer at Microsoft, will explore these questions and more, tracing the arc of intelligence from its origins and evolution in humans to its manifestations and prospects in the tools we create and use.

[ UMich ]


Match ID: 104 Score: 12.86 source: spectrum.ieee.org age: 11 days
qualifiers: 9.29 nasa, 3.57 mit

ISS Daily Summary Report – 11/17/2022
Thu, 17 Nov 2022 16:00:52 +0000
RS Extravehicular Activity (EVA) #55: During today’s EVA, FE-4 Sergey Prokopyev (EV1) and FE-5 Dmitry Petelin (EV2) performed a multitude of tasks including preparation of a radiator and an airlock for installation on the Nauka multipurpose laboratory module (MLM). The MRM2 hatch opened at 8:41 AM CT. The following tasks were completed: Installation of STRELA adapter …
Match ID: 105 Score: 12.86 source: blogs.nasa.gov age: 12 days
qualifiers: 9.29 nasa, 3.57 mit

NASA Leaders to Participate in Annual Global Climate Conference
Fri, 04 Nov 2022 14:52 EDT
NASA will participate in the 27th United Nations Climate Change Conference of the Parties (COP27) in Sharm El Sheikh, Egypt, which begins Sunday, Nov. 6, and runs through Friday, Nov. 18. The COP27 summit brings together countries from around the world to increase ambition by implementing existing goals and strengthening commitments to solutions th
Match ID: 106 Score: 12.86 source: www.nasa.gov age: 25 days
qualifiers: 9.29 nasa, 3.57 mit

Aligning the Eyes of the Universe Machine
Fri, 09 Sep 2022 17:39:58 +0000


The James Webb Space Telescope, in just a few months of operation, has begun to change our view of the universe. Its images—more detailed than what was possible before—show space aglow with galaxies, some of them formed very soon after the big bang.

None of this would be possible without the work of a team led by Scott Acton, the lead wavefront sensing and control scientist for the Webb at Ball Aerospace & Technologies in Colorado. He and his colleagues developed the systems that align the 18 separate segments of the Webb’s primary mirror with its smaller secondary mirror and science instruments. To produce clear images in the infrared wavelengths the telescope uses, the segments have to be within tens of nanometers of the shape specified in the spacecraft design.

Acton grew up in Wyoming and spent more than 20 years on the Webb team. IEEE Spectrum spoke with Acton after his team had finished aligning the telescope’s optics in space. This transcript has been edited for clarity and brevity.

Tell your story. What got you started?

Scott Acton: When I was seven years old, my dad brought home a new television. And he gave me the old television to take apart. I was just enthralled by what I saw inside this television. And from that moment on I was defined by electronics. You look inside an old television and there are mechanisms, there are smells and colors and sights and for a seven-year-old kid, it was just the most amazing thing I’d ever seen.

Fast-forward 25 years and I’m working in the field of adaptive optics. And eventually that led to wavefront sensing and controls, which led to the Webb telescope.

Called the Cosmic Cliffs, Webb’s seemingly three-dimensional picture looks like craggy mountains on a moonlit evening. In reality, it is the edge of the giant, gaseous cavity within NGC 3324, and the tallest “peaks” in this image are about 7 light-years high. NASA/ESA/CSA/STScI

Talk about your work getting the telescope ready for flight. You worked on it for more than 20 years.

Acton: Well, we had to invent all of the wavefront sensing and controls. None of that technology really existed in 2001, so we started from the ground up with concepts and simple experiments. Then more complicated, very complicated experiments and eventually something known as TRL 6 technology—Technology Readiness Level 6—which demonstrated that we could do this in a flightlike environment. And then it was a question of taking this technology, algorithms, understanding it and implementing it into very robust procedures, documentation, and software, so that it could then be applied on the flight telescope.

What was it like finally to launch?

Acton: Well, I’ve got to say, there was a lot of nervousness, at least on my part. I was thinking we had a 70 percent chance of mission success, or something like that. It’s like sending your kid off to college—this instrument that we’d been looking at and thinking about.

The Ariane 5 vehicle is so reliable. I didn’t think there was going to be any problem with it, but deployment starts, basically, minutes after launch. So, for me, the place to be was at a computer console [at the Space Telescope Science Institute in Baltimore].

And then there were a lot of things that had to work.

Acton: Yes, right. But there are some things that that are interesting. They have these things called nonexplosive actuators [used to secure the spacecraft during launch]. There are about 130 of them. And you actually can’t test them. You build them and they get used, basically, once. If you do reuse one, well, it’s now a different actuator because you have to solder it back together. So you can’t qualify the part, but what you can do is qualify the process.

We could have still had a mission if some didn’t fire, but most of them were absolutely necessary for the success of the mission. So just ask yourself, let’s suppose you want to have a 95 percent chance of success. What number raised to the 130th power is equal to 0.95? That number is basically one. These things had to be perfect.
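
Acton’s arithmetic checks out; here is the same back-of-the-envelope calculation (my sketch, not NASA’s reliability analysis) showing how close to perfect each of the roughly 130 actuators had to be:

n_actuators = 130
target_overall = 0.95  # desired probability that every actuator fires

# Per-unit reliability r needed so that r ** 130 equals 0.95:
per_unit = target_overall ** (1 / n_actuators)
print(f"required per-actuator reliability: {per_unit:.5f}")  # ~0.99961, "basically one"

# Even 99.9-percent-reliable units would fall well short overall:
print(f"0.999 ** 130 = {0.999 ** n_actuators:.3f}")  # ~0.878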


I think the public perception was that the Webb was in very good shape and the in-flight setup all went very well. Would you say that’s accurate?

Acton: Early on in the mission there were hiccups, but other than that, I’d say things just went beyond our wildest expectations. Part of that comes down to the fact that my team and I had commissioned the telescope 100 times in simulations. And we always made it a little harder. I think that served us well because when we got to the real telescope, it was quite robust. It just worked.

Take us through the process of aligning the telescope.

Acton: The first image we got back from the telescope was 2 February, in the middle of the night. Most people had gone home, but I was there, and a lot of other people were too. We just pointed the telescope at the Large Magellanic Cloud, which has lots and lots of stars in it, and took images on the near-infrared cameras. People were really happy to see these images because they were looking basically to make sure that the science instruments worked.

But some of us were really concerned with that image, because you could see some very significant astigmatism—stronger than we were expecting to see from our simulations. Later we would learn that the telescope’s secondary mirror was off in translation—about 1.5 millimeters along the deployment axis and about a millimeter in the other axis. And the primary mirror segments were clocked a bit from the perfectly aligned state.

Lee Feinberg, the telescope lead at NASA Goddard, texted me and said, “Scott, why can’t you just simulate this to see if you can get some images that bad?” So that morning I ran a simulation and was able to reproduce almost exactly what we were seeing in these images. We realized that we were not going to have any major problems with the wavefront.

Describe the cadence of your work during commissioning. What would a day be like?

Acton: One of the rules we set up very early on was that in terms of wavefront sensing and control, we would always have two people sitting in front of the computers at any given time. Anytime anything significant happened, I always wanted to make sure that I was there, so I got an apartment [near the institute in Baltimore]. From my door to the door of the of the Mission Operations Center was a 7-minute walk.

In this mosaic image stretching 340 light-years across, Webb’s Near-Infrared Camera (NIRCam) displays the Tarantula Nebula star-forming region in a new light, including tens of thousands of never-before-seen young stars that were previously shrouded in cosmic dust. NASA/ESA/CSA/STScI/Webb ERO Production Team

There were certainly times during the process where it had a very large pucker factor, if you will. We couldn’t point the telescope reliably at the very beginning. And a lot of our software, for the early steps of commissioning, depended on the immutability of telescope pointing. We wanted to have the telescope repeatedly pointed to within a couple of arc-seconds and it was closer to 20 or 30. Because of that, some of the initial moves to align the telescope had to be calculated, if you will, by hand.

I remember walking home one night, talking on the phone to my wife, Heidi, and saying, “If I’m wrong about this I’ve just completely screwed up the telescope.” She said, “Scott, that’s why you’re there.” That was her way of telling me to cowboy up. The responsibility had to come down to somebody and in that moment, it was me.

But when the result came back, we could see the images. We pointed the telescope at a bright isolated star and then we could see, one at a time, 18 spots appearing in the middle of our main science detector. I remember a colleague saying, “I now believe we’re going to completely align the telescope.” He felt in his mind that if we could get past that step, that everything else was downhill.

You’re trying to piece together the universe. It’s hard to get it right, and very easy to make mistakes. But we did it.

Building the Webb was, of course, a big, complicated project. Do you think there are any particular lessons to be drawn from it that people in the future might find useful?

Acton: Here are a couple of really big ones that apply to wavefront sensing and control. One is that there are multiple institutions involved—Northrop Grumman, Ball Aerospace, the Goddard Space Flight Center, the Space Telescope Science Institute—and the complication of having all these institutional lines. It could have been very, very difficult to navigate. So very early on we decided not to have any lines. We were a completely badgeless team. Anybody could talk to anybody. If someone said, “No, I think this is wrong, you should do it this way,” even if they didn’t necessarily have contractual responsibility, everybody listened.

Another big lesson we learned was about the importance of the interplay between experimentation and simulation. We built a one-sixth scale model, a fully functional optical model of the telescope, and it’s still working. It allowed us, very early on, to know what was going to be difficult. Then we could address those issues in simulation. That understanding, the interplay between experimentation and modeling and simulations, was absolutely essential.

Recognizing of course, that it’s very early, do you yet have a favorite image?

Acton: My favorite image, so far, was one that was taken during the last real wavefront activity that we did as part of commissioning. It was called a thermal slew test. The telescope has a large sunshield, but the sunshield can be at different angles with respect to the sun. So to make sure it was stable, we aimed it at a bright star we used as a guide star, put it in one orientation, and stayed there for five or six days. And then we switched to a different orientation for five or six days. It turned out to be quite stable. But how do you know that the telescope wasn’t rolling about the guide star? To check this, we took a series of test images with the redundant fine-guidance sensor. As you can imagine, when you have a 6-1/2 meter telescope at L2 away from any competing light sources that is cooled to 50 kelvins, yes, it is sensitive. Even just one 20-minute exposure is going to just have unbelievable detail regarding the deep universe. Imagine what happens if you take 100 of those images and average them together. We came up with an image of just some random part of the sky.

Scott Acton’s favorite Webb image: a test image of a random part of the sky, shot with the Webb’s fine-guidance sensor. The points with six-pointed diffraction patterns are stars; all other points are galaxies. NASA/CSA/FGS

I sent this image to James Larkin at UCLA, and he looked at it and estimated that that single image had 15,000 galaxies in it. Every one of those galaxies probably has between 100 [billion] and 200 billion stars.

I don’t talk about religion too much when it comes to this, but I must have had in my mind a Biblical reference to the stars singing. I pictured all of those galaxies as singing, as if this was a way for the universe to express joy that after all these years, we could finally see them. It was quite an emotional experience for me and for many people.

You realized that there was so much out there, and you weren’t even really looking for it yet? You were still phasing the telescope?

Acton: That’s right. I guess I’m not sure what I expected. I figured you’d just see dark sky. Well, there is no dark sky. Dark sky is a myth. Galaxies are everywhere.

Finally, we got to our first diffraction-limited image [with the telescope calibrated for science observations for the first time]. And that’s the way the telescope is operating now.

Several days later, about 70 of us got together—astronomers, engineers, and other team members. A member of the team—his name is Anthony Galyer—and I had gone halves several years earlier and purchased a bottle of cognac from 1906, the year that James Webb was born. We toasted James Webb and the telescope that bears his name.


Match ID: 107 Score: 12.86 source: spectrum.ieee.org age: 81 days
qualifiers: 9.29 nasa, 3.57 mit

MOXIE Shows How to Make Oxygen on Mars
Thu, 08 Sep 2022 15:27:59 +0000


Planning for the return journey is an integral part of the preparations for a crewed Mars mission. Astronauts will require a total mass of about 50 tonnes of rocket propellant for the ascent vehicle that will lift them off the planet’s surface, including approximately 31 tonnes of oxygen. The less popular option is for crewed missions to carry the required oxygen with them. But scientists are optimistic that it could instead be produced from the carbon dioxide–rich Martian atmosphere itself, using a system called MOXIE.

The Mars Oxygen ISRU (In-Situ Resource Utilization) Experiment is an 18-kilogram unit housed within the Perseverance rover on Mars. The unit is “the size of a toaster,” adds Jeffrey Hoffman, professor of aerospace engineering at MIT. Its job is to electrochemically break down carbon dioxide collected from the Martian atmosphere into oxygen and carbon monoxide. It also tests the purity of the oxygen.

Between February 2021, when it arrived on Mars aboard the Perseverance, and the end of that year, MOXIE had several successful test runs. According to a review of the system by Hoffman and colleagues, published in Science Advances, it has demonstrated its ability to produce oxygen during both night and day, when temperatures can vary by over 100 ºC. The generation rate and purity of the oxygen also meet the requirements for producing rocket propellant and for breathing. The authors assert that a scaled-up version of MOXIE could produce the required oxygen for lift-off as well as for the astronauts to breathe.

Next question: How to power any oxygen-producing factories that NASA can land on Mars? Perhaps via NASA’s Kilopower fission reactors?

MOXIE is a first step toward a much larger and more complex system to support the human exploration of Mars. The researchers estimate a required generation rate of 2 to 3 kilograms per hour, compared with the current MOXIE rate of 6 to 8 grams per hour, to produce enough oxygen for lift-off for a crew arriving 26 months later. “So we’re talking about a system that’s a couple of hundred times bigger than MOXIE,” Hoffman says.

They calculate this rate accounting for eight months to get to Mars, followed by some time to set up the system. “We figure you’d probably have maybe 14 months to make all the oxygen.” Further, he says, the produced oxygen would have to be liquefied to be used as rocket propellant, something the current version of MOXIE doesn’t do.
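
Those numbers hang together; a quick sketch of the arithmetic (my own rounding, not NASA’s sizing study) shows why a production rate of a few kilograms per hour, hundreds of times MOXIE’s, falls out of the 31-tonne requirement and the 14-month window:

oxygen_needed_kg = 31_000            # ~31 tonnes of oxygen for the ascent vehicle
production_window_h = 14 * 30 * 24   # ~14 months, approximated as 30-day months

required_rate_kg_per_h = oxygen_needed_kg / production_window_h
moxie_rate_kg_per_h = 0.007          # midpoint of MOXIE's demonstrated 6-8 g/h

print(f"required rate: {required_rate_kg_per_h:.1f} kg/h")  # ~3.1 kg/h
print(f"scale-up versus MOXIE: ~{required_rate_kg_per_h / moxie_rate_kg_per_h:.0f}x")  # a few hundred times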

MOXIE also currently faces several design constraints because, says Hoffman, a former astronaut, “our only ride to Mars was inside the Perseverance rover.” This limited the amount of power available to operate the unit, the amount of heat they could produce, the volume and the mass.

“MOXIE does not work nearly as efficiently as a stand-alone system that was specifically designed would,” says Hoffman. Most of the time, it’s turned off. “Every time we want to make oxygen, we have to heat it up to 800 ºC, so most of the energy goes into heating it up and running the compressor, whereas in a well-designed stand-alone system, most of the energy will go into the actual electrolysis, into actually producing the oxygen.”

However, there are still many kinks to iron out for the scaling-up process. To begin with, any oxygen-producing system will need lots of power. Hoffman thinks nuclear power is the most likely option, maybe NASA’s Kilopower fission reactors. The setup and the cabling would certainly be challenging, he says. “You’re going to have to launch all of these nuclear reactors, and of course, they’re not going to be in exactly the same place as the [other] units,” he says. “So, robotically, you’re going to have to connect to the electrical cables to bring power to the oxygen-producing unit.”

Then there are the solid oxide electrolysis units, which Hoffman points out are carefully machined systems. Fortunately, the company that makes them, OxEon, has already designed, built, and tested a full-scale unit, a hundred times bigger than the one on MOXIE. “Several of those units would be required to produce oxygen at the quantities that we need,” Hoffman says.

He also adds that at present, there is no redundancy built into MOXIE. If any part fails, the whole system dies. “If you’re counting on a system to produce oxygen for rocket propellant and for breathing, you need very high reliability, which means you’re going to need quite a few redundant units.”

Moreover, the system has to be pretty much autonomous, Hoffman says. “It has to be able to monitor itself, run itself.” For testing purposes, every time MOXIE is powered up, there is plenty of time to plan. A full-scale MOXIE system, though, would have to run continuously, and for that it has to be able to adjust automatically to changes in the Mars atmosphere, which can vary by a factor of two over the year, as well as to the temperature swings between night and day.


Match ID: 108 Score: 12.86 source: spectrum.ieee.org age: 82 days
qualifiers: 9.29 nasa, 3.57 mit

China’s Moon Missions Shadow NASA Artemis’s Pace
Wed, 07 Sep 2022 16:56:57 +0000


This past weekend, NASA scrubbed the Artemis I uncrewed mission to the moon and back. Reportedly, the space agency will try again to launch the inaugural moon mission featuring the gargantuan Space Launch System (SLS) at the end of this month or sometime in October. Meanwhile, half a world away, China is progressing on its own step-by-step program to put both robotic and, eventually, crewed spacecraft on the lunar surface and keep pace with NASA-led achievements.

Asia’s rapidly growing space power has already made a number of impressive lunar leaps but will need to build on these in the coming years. Ambitious sample-return missions, landings at the lunar south pole, testing the ability to 3D print using materials from regolith, and finally sending astronauts on a short-term visit to our celestial neighbor are in the cards before the end of the decade.

The next step, expected around 2024, is Chang’e-6: an unprecedented attempt to collect rock samples from the far side of the moon.

The mission will build on two recent major space achievements. In 2019, China became the first country to safely land a spacecraft on the far side of the moon, a hemisphere which cannot be seen from Earth—as the moon is tidally locked. The mission was made possible by a relay satellite out beyond the moon at Earth-moon Lagrange point 2, where it can bounce signals between Chang’e-4 and ground stations in China.

Chang’e-5 in 2020 performed the first sampling of lunar material in over four decades. The complex, four-spacecraft mission used an orbiter, lander, ascent vehicle, and return capsule to successfully deliver 1,731 grams of lunar rocks to Earth. The automated rendezvous and docking in lunar orbit of the orbiter and ascent spacecraft was also seen as a test of the technology for getting astronauts off the moon and back to Earth.

Chang’e-6 will again attempt to collect new samples, this time from the South pole-Aitken basin, a massive and ancient impact crater on the far side of the moon. The science return of such a mission could likewise be huge as its rocks have the potential to answer some significant questions about the moon’s geological past, says planetary scientist Katherine Joy of the University of Manchester, in England.

“We think that the basin-formation event was so large that the moon’s mantle could have been excavated from tens of kilometers deep,” says Joy. Fragments of this mantle material originating from deep in the moon would help us to understand how the Moon differentiated early in its history, the nature of its interior, and how volcanism on the far side of the moon is different or similar to that on the nearside.

Chang’e-7, also scheduled for 2024, will look at a different set of questions geared toward lunar resources. It will target the lunar south pole, a region where NASA’s Artemis 3 crewed mission is also looking to land.

The mission will involve a flotilla of spacecraft, including a new relay satellite, an orbiter, a lander, a rover, and a small “hopping” spacecraft designed to inspect permanently shadowed craters that are thought to contain water ice, which could be used in the future to provide breathable oxygen, rocket fuel, or drinking water to lunar explorers.

Following this, Chang’e-8 is expected to launch around 2027 to test in situ resource utilization and to conduct other experiments and technology demonstrations—such as oxygen extraction and 3D printing—related to building a permanent lunar base for both robots and crew in the 2030s, named the International Lunar Research Station (ILRS).

The upcoming Chang’e-6, 7 and 8 missions are expected to launch on China’s largest current rocket, the Long March 5. But, as with NASA and Artemis, China will need its own megarockets to make human lunar exploration and ultimately, perhaps, crewed lunar bases a reality.

In part in reaction to the achievements of SpaceX, the China Aerospace Science and Technology Corporation (CASC), the country’s main space contractor, is developing a new rocket specifically for launching astronauts beyond low Earth orbit.

The “new generation crew launch vehicle” will essentially bundle three Long March 5 core stages together (which will be no mean feat of engineering) while also improving the performance of its kerosene engines. The result will be a roughly 90-meter-tall rocket resembling a Long March version of SpaceX’s Falcon Heavy, capable of sending 27 tonnes of payload into translunar injection.

According to leading Chinese space officials, two launches of the rocket will be able, by 2030, to put a pair of astronauts on the moon for a six-hour stay. Such a mission also requires developing a lunar lander and a new spacecraft capable of keeping astronauts safe in deep space.

For building infrastructure on the moon, China is looking to the future Long March 9, an SLS-class rocket capable of sending 50 tonnes into translunar injection. The project will require CASC to make breakthroughs in a number of areas, including manufacturing new, wider rocket bodies of up to 10 meters in diameter, mastering massive, higher-thrust rocket engines, and building a new launch complex at Wenchang, Hainan island, to handle the monster.

Once again NASA is leading humanity’s journey to the moon, but China’s steady accumulation of capabilities and long-term ambitions means it will likely not be far behind.


Match ID: 109 Score: 12.86 source: spectrum.ieee.org age: 83 days
qualifiers: 9.29 nasa, 3.57 mit

X-Rays Could Carry Quantum Signals Across the Stars
Mon, 18 Jul 2022 15:07:14 +0000


Quantum signals may possess a number of advantages over regular forms of communication, leading scientists to wonder if humanity was not alone in discovering such benefits. Now a new study suggests that, for hypothetical extraterrestrial civilizations, quantum transmissions using X-rays may be possible across interstellar distances.

Quantum communication relies on a quantum phenomenon known as entanglement. Essentially, two or more particles such as photons that get “linked” via entanglement can, in theory, influence each other instantly no matter how far apart they are.

Entanglement is essential to quantum teleportation, in which data can essentially disappear in one place and reappear someplace else. Since this information does not travel across the intervening space, there is no chance the information will be lost.

To accomplish quantum teleportation, one would first entangle two photons. Then, one of the photons—the one to be teleported—is kept at one location while the other is beamed to whatever destination is desired.

Next, the quantum state of the photon kept at the origin—which defines its key characteristics—is analyzed, an act that also destroys that state. Entanglement will lead the destination photon to prove identical to its partner. For all intents and purposes, the photon at the origin point has “teleported” to the destination point—no physical matter moved, but the two photons are physically indistinguishable.

And to be clear, quantum teleportation cannot send information faster than the speed of light, because the destination photon must still be transmitted via conventional means.

One weakness of quantum communication is that entanglement is fragile. Still, researchers have successfully transmitted entangled photons that remained stable or “coherent” enough for quantum teleportation across distances as great as 1,400 kilometers.

Such findings led theoretical physicist Arjun Berera at the University of Edinburgh to wonder just how far quantum signals might stay coherent. First, he discovered quantum coherence might survive interstellar distances within our galaxy, and then he and his colleagues found quantum coherence might survive intergalactic distances.

“If photons in Earth’s atmosphere don’t decohere to 100 km, then in interstellar space, where the medium is much less dense than our atmosphere, photons won’t decohere up to even the size of the galaxy,” Berera says.

In the new study, the researchers investigated whether and how well quantum communication might survive interstellar distances. Quantum signals might face disruption from a number of factors, such as the gravitational pull of interstellar bodies, they note.

The scientists discovered the best quantum communication channels for interstellar messages are X-rays. Such frequencies are easier to focus and detect across interstellar distances. (NASA has tested deep-space X-ray communication with its XCOM experiment.) The researchers also found that the optical and microwave bands could enable communication across large distances as well, albeit less effectively than X-rays.

Although coherence might survive interstellar distances, Berera does note quantum signals might lose fidelity. “This means the quantum state is sustained, but it can have a phase shift, so although the quantum information is preserved in these states, it has been altered by the effect of gravity.” Therefore, it may “take some work at the receiving end to account for these phase shifts and be able to assess the information contained in the original state.”

Why might an interstellar civilization transmit quantum signals as opposed to regular ones? The researchers note that quantum communication may allow greater data compression and, in some cases, exponentially faster speeds than classical channels. Such a boost in efficiency might prove very useful for civilizations separated by interstellar distances.

“It could be that quantum communication is the main communication mode in an extraterrestrial's world, so they just apply what is at hand to send signals into the cosmos,” Berera says.

The scientists detailed their findings online 28 June in the journal Physical Review D.


Match ID: 110 Score: 12.86 source: spectrum.ieee.org age: 134 days
qualifiers: 9.29 nasa, 3.57 mit

The Webb Space Telescope’s Profound Data Challenges
Fri, 08 Jul 2022 18:03:45 +0000


For a deep dive into the engineering behind the James Webb Space Telescope, see our collection of posts here.

When the James Webb Space Telescope (JWST) reveals its first images on 12 July, they will be the by-product of carefully crafted mirrors and scientific instruments. But all of its data-collecting prowess would be moot without the spacecraft’s communications subsystem.

The Webb’s comms aren’t flashy. Rather, the data and communication systems are designed to be incredibly, unquestionably dependable and reliable. And while some aspects of them are relatively new—it’s the first mission to use Ka-band frequencies for such high data rates so far from Earth, for example—above all else, JWST’s comms provide the foundation upon which JWST’s scientific endeavors sit.


As previous articles in this series have noted, JWST is parked at Lagrange point L2. It’s a point of gravitational equilibrium located about 1.5 million kilometers beyond Earth on a straight line between the planet and the sun. It’s an ideal location for JWST to observe the universe without obstruction and with minimal orbital adjustments.

Being so far away from Earth, however, means that data has farther to travel to make it back in one piece. It also means the communications subsystem needs to be reliable, because the prospect of a repair mission being sent to address a problem is, for the near term at least, highly unlikely. Given the cost and time involved, says Michael Menzel, the mission systems engineer for JWST, “I would not encourage a rendezvous and servicing mission unless something went wildly wrong.”

According to Menzel, who has worked on JWST in some capacity for over 20 years, the plan has always been to use well-understood Ka-band frequencies for the bulky transmissions of scientific data. Specifically, JWST is transmitting data back to Earth on a 25.9-gigahertz channel at up to 28 megabits per second. The Ka-band is a portion of the broader K-band (another portion, the Ku-band, was also considered).

The Lagrange points are equilibrium locations where competing gravitational tugs on an object net out to zero. JWST is one of three craft currently occupying L2 (shown here at an exaggerated distance from Earth). IEEE Spectrum

Both the data-collection and transmission rates of JWST dwarf those of the older Hubble Space Telescope. Compared to Hubble, which is still active and generates 1 to 2 gigabytes of data daily, JWST can produce up to 57 GB each day (although that amount is dependent on what observations are scheduled).

Menzel says he first saw the frequency selection proposals for JWST around 2000, when he was working at Northrop Grumman. He became the mission systems engineer in 2004. “I knew where the risks were in this mission. And I wanted to make sure that we didn’t get any new risks,” he says.


Besides, Ka-band frequencies can transmit more data than X-band (7 to 11.2 GHz) or S-band (2 to 4 GHz), common choices for craft in deep space. A high data rate is a necessity for the scientific work JWST will be undertaking. In addition, according to Carl Hansen, a flight systems engineer at the Space Telescope Science Institute (the science operations center for JWST), a comparable X-band antenna would be so large that the spacecraft would have trouble remaining steady for imaging.

Although the 25.9-GHz Ka-band frequency is the telescope’s workhorse communication channel, it also employs two channels in the S-band. One is the 2.09-GHz uplink that ferries future transmission and scientific observation schedules to the telescope at 16 kilobits per second. The other is the 2.27-GHz, 40-kb/s downlink over which the telescope transmits engineering data—including its operational status, systems health, and other information concerning the telescope’s day-to-day activities.

Any scientific data the JWST collects during its lifetime will need to be stored on board, because the spacecraft doesn’t maintain round-the-clock contact with Earth. Data gathered from its scientific instruments, once collected, is stored within the spacecraft’s 68-GB solid-state drive (3 percent is reserved for engineering and telemetry data). Alex Hunter, also a flight systems engineer at the Space Telescope Science Institute, says that by the end of JWST’s 10-year mission life, they expect to be down to about 60 GB because of deep-space radiation and wear and tear.

The onboard storage is enough to collect data for about 24 hours before it runs out of room. Well before that becomes an issue, JWST will have scheduled opportunities to beam that invaluable data to Earth.
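
The quoted figures are easy to sanity-check (a rough sketch of my own, not a NASA link budget): at 28 megabits per second, a full day’s 57 gigabytes takes about four and a half hours of contact time to bring down, and at that collection rate the usable part of the 68-GB recorder holds roughly a day of data.

daily_science_gb = 57           # up to 57 GB of science data collected per day
downlink_mbps = 28              # Ka-band downlink rate, megabits per second
recorder_usable_gb = 68 * 0.97  # 68 GB drive with ~3 percent reserved for engineering data

downlink_hours = daily_science_gb * 8e9 / (downlink_mbps * 1e6) / 3600
hours_until_full = recorder_usable_gb / daily_science_gb * 24

print(f"contact time to downlink one day of data: {downlink_hours:.1f} h")  # ~4.5 h
print(f"time to fill the recorder at peak rate: {hours_until_full:.0f} h")  # roughly a day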

JWST will stay connected via the Deep Space Network (DSN)—a resource it shares with the Parker Solar Probe, Transiting Exoplanet Survey Satellite, the Voyager probes, and the entire ensemble of Mars rovers and orbiters, to name just a few of the other heavyweights. The DSN consists of three antenna complexes: Canberra, Australia; Madrid, Spain; and Barstow, Calif. JWST needs to share finite antenna time with plenty of other deep-space missions, each with unique communications needs and schedules.


Sandy Kwan, a DSN systems engineer, says that contact windows with spacecraft are scheduled 12 to 20 weeks in advance. JWST had a greater number of scheduled contact windows during its commissioning phase, as instruments were brought on line, checked, and calibrated. Most of that process required real-time communication with Earth.

All of the communications channels use the Reed-Solomon error-correction protocol—the same error-correction standard as used in DVDs and Blu-ray discs as well as QR codes. The lower data-rate S-band channels use binary phase-shift keying (BPSK), which encodes data by shifting the phase of the signal’s carrier wave. The Ka-band channel, however, uses quadrature phase-shift keying (QPSK), which can double a channel’s data rate, at the cost of more complicated transmitters and receivers.
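
The rate doubling comes from symbol packing: BPSK encodes one bit per symbol using two carrier phases, while QPSK encodes two bits per symbol using four phases. A minimal illustration of the mapping (my own sketch, not JWST flight code):

import math

# BPSK: one bit per symbol, carrier phase 0 or 180 degrees.
def bpsk(bit):
    return complex(math.cos(math.pi * bit), math.sin(math.pi * bit))

# QPSK: two bits per symbol, carrier phase 45, 135, 225, or 315 degrees (Gray-coded).
QPSK_PHASES = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}
def qpsk(bit_pair):
    phase = math.radians(QPSK_PHASES[bit_pair])
    return complex(math.cos(phase), math.sin(phase))

payload = [1, 0, 1, 1, 0, 0]
bpsk_symbols = [bpsk(b) for b in payload]
qpsk_symbols = [qpsk(tuple(payload[i:i + 2])) for i in range(0, len(payload), 2)]
print(len(bpsk_symbols), len(qpsk_symbols))  # same six bits, but half as many QPSK symbols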

JWST’s communications with Earth incorporate an acknowledgement protocol—only after the JWST gets confirmation that a file has been successfully received will it go ahead and delete its copy of the data to clear up space.

The communications subsystem was assembled along with the rest of the spacecraft bus by Northrop Grumman, using off-the-shelf components sourced from multiple manufacturers.

JWST has had a long and often-delayed development, but its communications system has always been a bedrock for the rest of the project. Keeping at least one system dependable means it’s one less thing to worry about. Menzel can remember, for instance, ideas for laser-based optical systems that were invariably rejected. “I can count at least two times where I had been approached by people who wanted to experiment with optical communications,” says Menzel. “Each time they came to me, I sent them away with the old ‘Thank you, but I don’t need it. And I don’t want it.’”


Match ID: 111 Score: 12.86 source: spectrum.ieee.org age: 144 days
qualifiers: 9.29 nasa, 3.57 mit

Companies Vie to Build NASA’s Next Communications Network
Fri, 13 May 2022 15:28:16 +0000


It was a great idea for its time—a network of NASA communications satellites high in geostationary orbit, providing nearly continuous radio contact between controllers on the ground and some of the agency’s highest-profile missions: the space shuttles, the International Space Station, the Hubble Space Telescope, and dozens of others.

The satellites were called TDRS—short for Tracking and Data Relay Satellite—and the first was launched in 1983 on the maiden voyage of the space shuttle Challenger. Twelve more would follow, quietly providing a backbone for NASA’s orbital operations. But they’ve gotten old, they’re expensive, and in the 40 years since they began, they’ve been outpaced by commercial satellite networks.

So what comes next? That’s the 278-million-dollar question—but, importantly, it’s not a multibillion-dollar question.

“Now it’ll be just plug and play. They can concentrate on the mission, and they don’t have to worry about comms, because we provide that for them.”
—Craig Miller, Viasat

NASA, following its mantra to get out of the business of routine space operations, has now awarded US $278.5 million in contracts to six companies: Amazon’s Project Kuiper, Inmarsat Government, SES Government Solutions, SpaceX, Telesat, and Viasat. The agency is asking them to offer services that are reliable, adaptable for all sorts of missions, easy for NASA to use, and—ideally—orders of magnitude less expensive than TDRS.

“It’s an ambitious wish list,” says Eli Naffah, communications services project manager at NASA’s Glenn Research Center, in Cleveland. “We’re looking to have industry tell us, based on their capabilities and their business interests, what they would like to provide to us as a service that they would provide to others broadly.”

Inmarsat now operates a number of geostationary satellites in its GX fleet. The projected GX7 satellite is expected to launch in 2023. Inmarsat Government

Satellite communication is one area that has taken off as a business proposition, independent of NASA’s space efforts. Internet and television transmission, GPS, phone service—all of these have become giant enterprises, ubiquitous in people’s lives. Economy of scale and competition have brought prices down dramatically. (That’s very different from, say, space tourism, which attracts a lot of attention but for now is still something that only the very wealthy can afford.)

NASA benefits, in the case of communications, from being a relatively small player, especially if it can get out from under the costs of running something like the TDRS system. The commercial satellite companies take over those costs—which, they say, is fine, since they were spending the money anyway.

“We love having customers like NASA,” says Craig Miller, president for government systems at Viasat. “They’re a joy to work with, their mission is in alignment with a lot of our core values, but we make billions of dollars a year selling Internet to other sources.”

Each of the six companies under the new NASA contract takes a different approach. Inmarsat, SES, and Viasat, for instance, would use large relay satellites, like TDRS, each seeming to hover over a fixed spot on Earth’s equator because, at an altitude of 35,786 kilometers, one orbit takes exactly as long as one rotation of Earth. Amazon and SpaceX, by contrast, would use swarms of smaller satellites in low Earth orbit, just a few hundred kilometers up. (SpaceX, at last count, had launched more than 2,200 of its Starlink satellites.) SES and Telesat would offer two-for-one packages, with service from both high and lower orbits. As for radio frequencies, the companies might use C band, Ka band, L band, optical—whatever their existing clients have needed. And so on.
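The “hovering” of the geostationary relays follows from Kepler’s third law: at 35,786 kilometers of altitude, the orbital period matches Earth’s rotation. A quick sketch with standard constants (nothing mission-specific) confirms it:

```python
import math

# Kepler's third law check: orbital period at geostationary altitude.
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0       # Earth's equatorial radius, m
altitude = 35_786_000.0     # geostationary altitude, m

a = R_EARTH + altitude                          # semi-major axis of a circular orbit
period = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
print(f"{period / 3600:.2f} hours")             # ~23.93 h, one sidereal day
```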

Sixty SpaceX Starlink satellites wait for deployment from their launch rocket in low Earth orbit, in this photograph from 2019. SpaceX

It may sound like an alphabet soup of ways to solve one basic need—being in contact with its satellites—but engineers say that’s a minor trade-off for NASA if it can piggyback on others’ communications networks. “This allows NASA and our other government users to achieve their missions without the upfront capital expenditure and the full life-cycle cost” of running the TDRS system, said Britt Lewis, a senior vice president of Inmarsat Government, in an email to IEEE Spectrum.

One major advantage to the space agency would be the sheer volume of service available to it. In years past, the TDRS system could handle only so many transmissions at a time; if a particular mission needed to send a large volume of data, it had to book time in advance.

“Now it’ll be just plug and play,” says Miller at Viasat. “They can concentrate on the mission, and they don’t have to worry about comms, because we provide that for them.”

NASA says it expects each company will complete technology development and in-space demonstrations by 2025, with the most successful starting to take over operations for the agency by 2030. There will probably be no single winner: “We’re not really looking to have any one particular company be able to provide all the services on our list,” says NASA’s Naffah.

NASA’s TDRS-M communications satellite launched in 2017. NASA

The TDRS satellites have proved durable; TDRS-3, launched by the space shuttle Discovery in 1988, is still usable as a spare if newer satellites break down. NASA says it will probably continue to use the system into the 2030s, but it has no more launches planned; the last, TDRS-13 (also known as TDRS-M), flew in 2017.

If everything works out, says Amazon in an email, “This model would allow organizations like NASA to rely on commercial operators for near-Earth communications while shifting their focus to more ambitious operations, like solving technical challenges for deep space exploration and science missions.”

At which point the sky's the limit. NASA focuses on the moon, Mars, and other exploration, while it buys routine services from the private sector.

“We can provide the same kind of broadband capabilities that you’re used to having on Earth,” says Viasat’s Miller. He smiles at this thought. “We can provide Netflix to the ISS.”


Match ID: 112 Score: 12.86 source: spectrum.ieee.org age: 200 days
qualifiers: 9.29 nasa, 3.57 mit

NASA Administrator Statement on the President's Fiscal Year 2023 Budget Request
Mon, 28 Mar 2022 13:53 EDT
The president's fiscal year 2023 budget would allow NASA to maintain the United States' leadership in global innovation and keep NASA at the forefront of exploration and discovery by returning to the Moon with the Artemis program, among other efforts.
Match ID: 113 Score: 12.86 source: www.nasa.gov age: 246 days
qualifiers: 9.29 nasa, 3.57 mit

Inside the Universe Machine: The Webb Space Telescope’s Chilly Sun Shield
Thu, 07 Jul 2022 13:48:57 +0000


For a deep dive into the engineering behind the James Webb Space Telescope, see our collection of posts here.

When Apollo astronauts sent back the first pictures of Earth as a disk in space, the poet Archibald MacLeish wrote of it as “that bright loveliness in the eternal cold.” He was not far off. Deep space has a temperature of 2.7 kelvins—just 2.7 degrees above absolute zero.

If the James Webb Space Telescope is to work—looking so far out and therefore so far back in time that it can see the first galaxies forming after the big bang—it will have to image objects so faint that they barely stand out from the cold around them. The world will begin finding out how well the observatory works as soon as next week, when JWST is expected to release its first trove of scientific images and spectroscopic data.

So, for argument’s sake, let’s assume all indications so far do in fact point to a successful kickoff of the (hopefully long and storied) scientific data-gathering phase of Webb’s mission. How then did the engineers and designers of this massive telescope ever make it possible to cool the telescope down enough—all at a remove of nearly four times the distance from Earth to the moon—to possibly do its job?

After more than 25 years’ work and technological hurdles beyond counting, the Webb team has launched and stationed its mammoth observatory in solar orbit—and brought its instruments below 40 kelvins (-233 °C), cold enough to see the early universe more than 13.5 billion years ago. Remarkably, most of the cooling has been done passively, by shielding the telescope from the sun and letting physics take care of the rest.

“Webb is not just the product of a group of people. It’s not the product of some smart astronomers—Webb is truly the product of our entire world’s capability,” says Keith Parrish, a leader on the Webb team at NASA’s Goddard Space Flight Center in Maryland. “Taken as a whole, Webb is truly the result of our entire know-how of how to build complex machines.”

Parrish joined the project in 1997, ultimately becoming its commissioning manager through the years of design, assembly, testing, delay and, finally, launch on 25 December 2021. He says almost everything about it—its shape and location, the materials from which it’s made—was dictated by the need to have an observatory that would survive for years at supercold temperatures.

In this photo, the five-layered JWST sunshield is being unfurled and inspected in a clean room. The layers of coated Kapton E never touch, minimizing the transmission of heat from one layer to the next. Alex Evers/Northrop Grumman

The Webb is an infrared observatory for many reasons, not the least of which is that as the universe expands, the wavelength of light from distant objects is lengthened, causing dramatic redshift. Infrared is also good for seeing through cosmic dust and gas, and for imaging cold things such as comets, Kuiper Belt objects, and perhaps planets orbiting other stars.

But infrared radiation is often best measured as heat, which is why it’s important for the Webb to be so cold. If, like the Hubble Telescope, it were in low Earth orbit, and it had no shielding from the sun, most of its targets would be drowned out by the sun and ground, and by heat in the telescope itself.

“If my signal is heat—and infrared is heat—then what I can’t have is other heat sources that are noise in the system,” says Jim Flynn, the sunshield manager at Northrop Grumman, the prime contractor for the Webb.

So the Webb has been sent to circle a spot in space called L2, 1.5 million kilometers from Earth in the direction directly away from the sun—one of the locations known as Lagrange points. At these “L” points, the gravity of Earth and the sun combine to keep a spacecraft in a stable, relatively “fixed” position with respect to Earth as both complete their 365.256-day circuit of the sun. It’s a good compromise: Earth is distant enough that it doesn’t interfere with observations, but close enough that communication with the spacecraft can be relatively fast. And since the ship isn’t flying from day to night and back on every orbit, its temperature is relatively stable. All it needs is a really, really good sunshade.
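The 1.5-million-kilometer figure can be recovered from the standard first-order approximation for the Sun–Earth L2 distance, r ≈ R(m/3M)^(1/3), where R is the Sun–Earth distance and m/M is the Earth-to-Sun mass ratio. This is a textbook estimate, not mission data:

```python
# First-order estimate of the Sun-Earth L2 distance: r ~ R * (m / (3*M))**(1/3).
# Textbook approximation with standard constants, for illustration only.
R_SUN_EARTH = 1.496e11      # mean Sun-Earth distance, m
M_EARTH = 5.972e24          # Earth mass, kg
M_SUN = 1.989e30            # solar mass, kg

r_L2 = R_SUN_EARTH * (M_EARTH / (3 * M_SUN)) ** (1 / 3)
print(f"L2 is roughly {r_L2 / 1e9:.2f} million km from Earth")  # ~1.50 million km
```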

“Four [layers of sunshield] would have probably done the job. Five gave us a little bit of an insurance policy. I’d like to say it was way more sophisticated than that, but that’s really not what it was at all.”
—Keith Parrish, NASA Goddard Space Flight Center

“The engineering was pushed above and beyond to meet the scientific goals,” says Alexandra Lockwood, a project scientist at the Space Telescope Science Institute, which operates the Webb. “It is specifically designed the way that it is because they wanted to do intensive infrared science.”

It makes for an ungainly-looking ship in many renderings, with the telescope assembly, intentionally open to space to prevent heat buildup, attached to its silvery sunshield, about 14 meters wide and 21 meters long, with five layers of insulating film to keep the telescope in almost total darkness.

From its sunlit side the sunshield roughly resembles a kite. The elongated shape, engineers found, would be the most efficient way to keep the Webb’s optics out of the sun. They considered a square or octagon, but the final version covers more area without much more mass.

“It’s no larger than it needs to be to meet the science field-of-view requirements, and that unique kite shape is the result,” says Parrish. “Any larger than it is now, it just makes everything more complex.”

The shield’s five layers are made of Kapton E, a plastic film first developed by DuPont in the 1960s and used for spacecraft insulation and printed circuits. The layers are coated in aluminum and silicon. Each is thinner than a human hair. But engineers say they are, together, very effective in blocking the sun’s heat. The first layer reduces its strength by about an order of magnitude (or 90 percent), the second layer removes another order of magnitude, and so on. The layers never touch, and they’re slightly flared as one gets away from the center of the shield, so that heat will escape out the sides.

The result: Temperatures on the sunny side of the shield approach 360 K (87 °C), but on the dark side they’re below that all-important 40 K (-233 °C). Or, put another way: More than 200 kilowatts of solar energy fall on the first layer, but only 23 milliwatts make it all the way through the fifth.
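Taken together, those two figures imply an overall attenuation of nearly 10^7—an average of roughly a factor of 24 per layer, somewhat better than the order-of-magnitude-per-layer rule of thumb quoted above. A quick check of the arithmetic:

```python
# Rough attenuation check from the figures in the text: ~200 kW incident on
# layer 1, ~23 mW emerging past layer 5. Illustrative arithmetic only.
incident_w = 200e3
transmitted_w = 23e-3

total_attenuation = incident_w / transmitted_w
per_layer = total_attenuation ** (1 / 5)        # geometric mean across 5 layers
print(f"Total attenuation: {total_attenuation:.1e}x")   # ~8.7e6
print(f"Average per layer: ~{per_layer:.0f}x")           # ~24x per layer
```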

The Two Sides of the Webb Telescope [illustration depicting features of JWST’s sunshield]

Why five layers? There was a lot of computer modeling, but it was hard to simulate the shield’s thermal behavior before flight. “Four would have probably done the job. Five gave us a little bit of an insurance policy,” says Parrish. “I’d like to say it was way more sophisticated than that, but that’s really not what it was at all.”

The ability to cool the telescope naturally, first calculated in the 1980s to be possible, was a major advance. It meant the Webb would not have to rely on a heavy, complex cryogenic apparatus, with refrigerants that could leak and shorten the mission. Of its four main scientific instruments, only one, a midinfrared detector called MIRI, needs to be cooled to 6.7 K. It’s chilled by a multistage cryocooler, which pumps cold helium gas through pulse tubes to draw heat away from the instrument’s sensor. It uses the Joule-Thomson effect, reducing the temperature of the helium by making it expand after it’s forced through a 1-millimeter valve. Pressure comes from two pistons—the cryocooler system’s only moving parts—facing opposite directions so their movements will cancel each other out and not disturb observations.

Building the telescope proved immensely complicated; it fell years behind while its budget ballooned toward US $10 billion. The sunshield needed lengthy redesign after testing, when Kapton tore and fasteners came loose.

“We just bit off way more than we could chew,” Parrish says now. “That’s exactly what NASA should be doing. It should be pushing the envelope. The problem is that eventually Webb got too big to fail.”

But it’s finally deployed, sending data, and surprising engineers who expected at least some failures as it began to operate. Keith Parrish, his work done, is moving on to other projects at Goddard.

“I think Webb,” he says, “is just a great product of what it means to be an advanced civilization.”


Update: 26 July 2022: The story was updated to clarify that the gravity at Lagrange point L2 does not "cancel" (as the story had previously stated) but in fact adds to keep an object at L2 orbiting at the precise same orbital period as, in this case, the Earth—i.e. at 365.256 days.


Match ID: 114 Score: 12.14 source: spectrum.ieee.org age: 145 days
qualifiers: 9.29 nasa, 2.86 planets

New Deep Learning Method Adds 301 Planets to Kepler's Total Count
Mon, 22 Nov 2021 20:36 EST
Scientists recently added a whopping 301 newly confirmed exoplanets to the total exoplanet tally.
Match ID: 115 Score: 12.14 source: www.nasa.gov age: 372 days
qualifiers: 9.29 nasa, 2.86 planets

About Half of Sun-Like Stars Could Host Rocky, Potentially Habitable Planets
Thu, 29 Oct 2020 07:00 EDT
According to new research using data from NASA’s retired planet-hunting mission, the Kepler space telescope, about half the stars similar in temperature to our Sun could have a rocky planet capable of supporting liquid water on its surface.
Match ID: 116 Score: 12.14 source: www.nasa.gov age: 761 days
qualifiers: 9.29 nasa, 2.86 planets

Gravity Assist: Puffy Planets, Powerful Telescopes, with Knicole Colon
Fri, 12 Jun 2020 09:01 EDT
NASA astrophysicist Knicole Colon describes her work on the Kepler, Hubble, TESS and Webb missions, and takes us on a tour of some of her favorite planets.
Match ID: 117 Score: 12.14 source: www.nasa.gov age: 900 days
qualifiers: 9.29 nasa, 2.86 planets

IEEE SIGHT Founder Amarnath Raja Dies at 65
Wed, 23 Nov 2022 19:00:01 +0000


Amarnath Raja

Founder of IEEE Special Interest Group on Humanitarian Technology

Senior member, 65; died 5 September

Raja founded the IEEE Special Interest Group on Humanitarian Technology (SIGHT) in 2011. The global network partners with underserved communities and local organizations to leverage technology for sustainable development.


He began his career in 1980 as a management trainee at the National Dairy Development Board, in Anand, India. A year later he joined Milma, a state government marketing cooperative for the dairy industry, in Thiruvananthapuram, as a manager of planning and systems. After 15 years with Milma, he joined IBM in Tokyo as a manager of technology services.

In 2000 he helped found InApp, a company in Palo Alto, Calif., that provides software development services. He served as its CEO and executive chairman until he died.

Raja was the 2011–2012 chair of the IEEE Humanitarian Activities Committee. He wanted to find a way to mobilize engineers to apply their expertise to develop sustainable solutions that help their local community. To achieve the goal, in 2011 he founded IEEE SIGHT. Today there are more than 150 SIGHT groups in 50 countries that are working on projects such as sustainable irrigation and photovoltaic systems.

For his efforts, he received the 2015 Larry K. Wilson Transnational Award from IEEE Member and Geographic Activities. The award honors effective efforts to fulfill one or more of the MGA goals and strategic objectives related to transnational activities.

For the past two years, Raja chaired the IEEE Admission and Advancement Review Panel, which approves applications for new members and elevations to higher membership grades.

He was a member of the International Centre for Free and Open Source Software’s advisory board. The organization was established by the government of Kerala, India, to facilitate the development and distribution of free, open-source software.

Raja also served as one of the directors of the nongovernmental organization Bedroc.in, which was established to continue the disaster rehabilitation work started by him and his team after the 2004 Indian Ocean tsunami.

He earned his bachelor’s degree in chemical engineering in 1979 from the Indian Institute of Technology in Delhi.

Donn S. Terry

Software engineer

Life member, 74; died 14 September

Terry was a computer engineer at Hewlett-Packard in Fort Collins, Colo., for 18 years.

He joined HP in 1978 as a software developer, and he chaired the Portable Operating System Interface (POSIX) working group. POSIX is a family of standards specified by the IEEE Computer Society for maintaining compatibility among operating systems. While there, he also developed software for the Motorola 68000 microprocessor.

Terry left HP in 1997 to join Softway Solutions, also in Fort Collins, where he developed tools for Interix, a Unix subsystem of the Windows NT operating system. After Microsoft acquired Softway in 1999, he stayed on as a senior software development engineer at its Seattle location. There he worked on static analysis, a method of computer-program debugging that is done by examining the code without executing the program. He also helped to create SAL, a Microsoft source-code annotation language, which was developed to make code design easier to understand and analyze.

Terry retired in 2014. He loved science fiction, boating, cooking, and spending time with his family, according to his daughter, Kristin.

He earned a bachelor’s degree in electrical engineering in 1970 and a Ph.D. in computer science in 1978, both from the University of Washington in Seattle.

William Sandham

Signal processing engineer

Life senior member, 70; died 25 August

Sandham applied his signal processing expertise to a wide variety of disciplines including medical imaging, biomedical data analysis, and geophysics.

He began his career in 1974 as a physicist at the University of Glasgow. While working there, he pursued a Ph.D. in geophysics. He earned his degree in 1981 at the University of Birmingham in England. He then joined the British National Oil Corp. (now Britoil) as a geophysicist.

In 1986 he left to join the University of Strathclyde, in Glasgow, as a lecturer in the signal processing department. During his time at the university, he published more than 200 journal papers and five books that addressed blood glucose measurement, electrocardiography data analysis and compression, medical ultrasound, MRI segmentation, prosthetic limb fitting, and sleep apnea detection.

Sandham left the university in 2003 and founded Scotsig, a signal processing consulting and research business, also in Glasgow.

He served on the editorial board of IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing and the EURASIP Journal on Advances in Signal Processing.

He was a Fellow of the Institution of Engineering and Technology and a member of the European Association of Geoscientists and Engineers and the Society of Exploration Geophysicists.

Sandham earned his bachelor’s degree in electrical engineering in 1974 from the University of Glasgow.

Stephen M. Brustoski

Loss-prevention engineer

Life member, 69; died 6 January

For 40 years, Brustoski worked as a loss-prevention engineer for insurance company FM Global. He retired from the company, which was headquartered in Johnston, R.I., in 2014.

He was an elder at his church, CrossPoint Alliance, in Akron, Ohio, where he oversaw administrative work and led Bible studies and prayer meetings. He was an assistant scoutmaster for 12 years, and he enjoyed hiking and traveling the world with his family, according to his wife, Sharon.

Brustoski earned a bachelor’s degree in electrical engineering in 1973 from the University of Akron.

Harry Letaw

President and CEO of Essex Corp.

Life senior member, 96; died 7 May 2020

As president and CEO of Essex Corp., in Columbia, Md., Letaw handled the development and commercialization of optoelectronic and signal processing solutions for defense, intelligence, and commercial customers. He retired in 1995.

He had served in World War II as an aviation engineer for the U.S. Army. After he was discharged, he earned a bachelor’s degree in chemistry, then a master’s degree and Ph.D., all from the University of Florida in Gainesville, in 1949, 1951, and 1952.

After he graduated, he became a postdoctoral assistant at the University of Illinois at Urbana-Champaign. He left to become a researcher at Raytheon Technologies, an aerospace and defense manufacturer, in Wayland, Mass.

Letaw was a member of the American Physical Society and the Phi Beta Kappa and Sigma Xi honor societies.


Match ID: 118 Score: 10.71 source: spectrum.ieee.org age: 6 days
qualifiers: 10.71 mit

I Experienced Jack Smith's Zeal Firsthand. Will Trump Get the Same Treatment?
Wed, 23 Nov 2022 16:15:01 +0000

His handling of the ex-president will show whether Smith really is an aggressive prosecutor — or just aggressive against the powerless.

The post I Experienced Jack Smith’s Zeal Firsthand. Will Trump Get the Same Treatment? appeared first on The Intercept.


Match ID: 119 Score: 10.71 source: theintercept.com age: 6 days
qualifiers: 10.71 mit

NASA Awards Contract for NASA Financial Support Services
Mon, 21 Nov 2022 15:33 EST
NASA has awarded the NASA Financial Support Services (NFSS) contract to MM Technologies LLC (MMT) of Huntsville, Alabama, to provide financial support and related services...
Match ID: 120 Score: 9.29 source: www.nasa.gov age: 8 days
qualifiers: 9.29 nasa

CAPSTONE Forges New Path for NASA’s Future Artemis Moon Missions
Mon, 21 Nov 2022 12:39 EST
NASA's CAPSTONE spacecraft has completed final maneuvers to place it in its target orbit around the Moon, refining its path in the orbit it arrived to last week.
Match ID: 121 Score: 9.29 source: www.nasa.gov age: 8 days
qualifiers: 9.29 nasa

ISS Daily Summary Report – 11/21/2022
Mon, 21 Nov 2022 16:00:30 +0000
Payloads: Japanese Aerospace Exploration Agency (JAXA) Media Take Part 2: Using a JAXA camcorder, the crew participated in a live interview and recorded a session to discuss several subjects. The JAXA Public Relations Activity (JAXA EPO) includes conducting cultural activities such as writing reports about and filming video of activities aboard the ISS. These tools …
Match ID: 122 Score: 9.29 source: blogs.nasa.gov age: 8 days
qualifiers: 9.29 nasa

NASA to Share Orion Update Today After Spacecraft Flies by Moon
Mon, 21 Nov 2022 09:49 EST
NASA will host a news conference at 5 p.m. EST today, Monday, Nov. 21, from the agency’s Johnson Space Center in Houston to discuss Orion’s lunar flyby as part of the Artemis I mission.
Match ID: 123 Score: 9.29 source: www.nasa.gov age: 8 days
qualifiers: 9.29 nasa

Nasa's Artemis spacecraft arrives at the Moon
Mon, 21 Nov 2022 12:48:08 GMT
Nasa's Orion capsule sweeps past the Moon before entering into a larger orbit.
Match ID: 124 Score: 9.29 source: www.bbc.co.uk age: 8 days
qualifiers: 9.29 nasa

NASA Awards Extension for Space, Earth Science Data Analysis Contract
Fri, 18 Nov 2022 14:30 EST
NASA has awarded a six-month extension to ADNET Systems Inc. of Bethesda, Maryland, for the Space and Earth Science Data Analysis (SESDA IV) contract.
Match ID: 125 Score: 9.29 source: www.nasa.gov age: 11 days
qualifiers: 9.29 nasa

India’s First Private Space Rocket Blasts Off
Fri, 18 Nov 2022 17:51:56 +0000


A rocket built by Indian startup Skyroot has become the country’s first privately developed launch vehicle to reach space, following a successful maiden flight earlier today. The suborbital mission is a major milestone for India’s private space industry, say experts, though more needs to be done to nurture the fledgling sector.

The Vikram-S rocket, named after the founder of the Indian space program, Vikram Sarabhai, lifted off from the Indian Space Research Organization’s (ISRO) Satish Dhawan Space Centre, on India’s east coast, at 11:30 a.m. local time (1 a.m. eastern time). It reached a peak altitude of 89.5 kilometers (55.6 miles), crossing the 80-km line that NASA counts as the boundary of space, but falling just short of the 100 km recognized by the Fédération Aéronautique Internationale.

In the longer run, India’s space industry has ambitions of capturing a significant chunk of the global launch market.

Pawan Kumar Chandana, cofounder of the Hyderabad-based startup, says the success of the launch is a major victory for India’s nascent space industry, but the buildup to the mission was nerve-racking. “We were pretty confident on the vehicle, but, as you know, rockets are very notorious for failure,” he says. “Especially in the last 10 seconds of countdown, the heartbeat was racing up. But once the vehicle had crossed the launcher and then went into the stable trajectory, I think that was the moment of celebration.”

At just 6 meters (20 feet) long and weighing only around 550 kilograms (0.6 tonnes), the Vikram-S is not designed for commercial use. Today’s mission, called Prarambh, which means “the beginning” in Sanskrit, was designed to test key technologies that will be used to build the startup’s first orbital rocket, the Vikram I. The rocket will reportedly be capable of lofting as much as 480 kg to a 500-km altitude and is slated for a maiden launch next October.

Skyroot cofounder Pawan Kumar Chandana standing in front of the Vikram-S rocket at the Satish Dhawan Space Centre, on the east coast of India. Skyroot

In particular, the mission has validated Skyroot’s decision to go with a novel all-carbon fiber structure to cut down on weight, says Chandana. It also allowed the company to test 3D-printed thrusters, which were used for spin stabilization in Vikram-S but will power the upper stages of its later rockets. Perhaps the most valuable lesson, though, says Chandana, was the complexity of interfacing Skyroot's vehicle with ISRO’s launch infrastructure. “You can manufacture the rocket, but launching it is a different ball game,” he says. “That was a great learning experience for us and will really help us accelerate our orbital vehicle.”

Skyroot is one of several Indian space startups looking to capitalize on recent efforts by the Indian government to liberalize its highly regulated space sector. Due to the dual-use nature of space technology, ISRO has historically had a government-sanctioned monopoly on most space activities, says Rajeswari Pillai Rajagopalan, director of the Centre for Security, Strategy and Technology at the Observer Research Foundation think tank, in New Delhi. While major Indian engineering players like Larsen & Toubro and Godrej Aerospace have long supplied ISRO with components and even entire space systems, the relationship has been one of a supplier and vendor, she says.

But in 2020, Finance Minister Nirmala Sitharaman announced a series of reforms to allow private players to build satellites and launch vehicles, carry out launches, and provide space-based services. The government also created the Indian National Space Promotion and Authorisation Centre (InSpace), a new agency designed to act as a link between ISRO and the private sector, and affirmed that private companies would be able to take advantage of ISRO’s facilities.

The first launch of a private rocket from an ISRO spaceport is a major milestone for the Indian space industry, says Rajagopalan. “This step itself is pretty crucial, and it’s encouraging to other companies who are looking at this with a lot of enthusiasm and excitement,” she says. But more needs to be done to realize the government’s promised reforms, she adds. The Space Activities Bill that is designed to enshrine the country’s space policy in legislation has been languishing in draft form for years, and without regulatory clarity, it’s hard for the private sector to justify significant investments. “These are big, bold statements, but these need to be translated into actual policy and regulatory mechanisms,” says Rajagopalan.

Skyroot’s launch undoubtedly signals the growing maturity of India’s space industry, says Saurabh Kapil, associate director in PwC’s space practice. “It’s a critical message to the Indian space ecosystem, that we can do it, we have the necessary skill set, we have those engineering capabilities, we have those manufacturing or industrialization capabilities,” he says.

The Vikram-S rocket blasting off from the Satish Dhawan Space Centre, on the east coast of India. Skyroot

However, crossing this technical milestone is only part of the challenge, he says. The industry also needs to demonstrate a clear market for the kind of launch vehicles that companies like Skyroot are building. While private players are showing interest in launching small satellites for applications like agriculture and infrastructure monitoring, he says, these companies will be able to build sustainable businesses only if they are allowed to compete for more lucrative government and defense-sector contracts.

In the longer run, though, India’s space industry has ambitions of capturing a significant chunk of the global launch market, says Kapil. ISRO has already developed a reputation for both reliability and low cost—its 2014 mission to Mars cost just US $74 million, one-ninth the cost of a NASA Mars mission launched the same week. That is likely to translate to India’s private space industry, too, thanks to a considerably lower cost of skilled labor, land, and materials compared with those of other spacefaring nations, says Kapil. “The optimism is definitely there that because we are low on cost and high on reliability, whoever wants to build and launch small satellites is largely going to come to India,” he says.


Match ID: 126 Score: 9.29 source: spectrum.ieee.org age: 11 days
qualifiers: 9.29 nasa

ISS Daily Summary Report – 11/18/2022
Fri, 18 Nov 2022 16:00:37 +0000
Payloads: Nanoracks Module-9: Several photos were taken of the Nanoracks Module-9 hardware while in the Cupola area. NanoRacks Module-9 consists of multiple student experiments in individual Mixture Tubes all housed within a NanoRacks Module container.  Some experiments require crew interaction to remove clamps in order to mix materials to activate and/or deactivate the experiments. The …
Match ID: 127 Score: 9.29 source: blogs.nasa.gov age: 11 days
qualifiers: 9.29 nasa

Successful Hack of Time-Triggered Ethernet
2022-11-18T15:04:41Z

Time-triggered Ethernet (TTE) is used in spacecraft, basically to use the same hardware to process traffic with different timing and criticality. Researchers have defeated it:

On Tuesday, researchers published findings that, for the first time, break TTE’s isolation guarantees. The result is PCspooF, an attack that allows a single non-critical device connected to a single plane to disrupt synchronization and communication between TTE devices on all planes. The attack works by exploiting a vulnerability in the TTE protocol. The work was completed by researchers at the University of Michigan, the University of Pennsylvania, and NASA’s Johnson Space Center...


Match ID: 128 Score: 9.29 source: www.schneier.com age: 11 days
qualifiers: 9.29 nasa

Bringing Mars rock samples back to Earth
Fri, 18 Nov 2022 11:07:00 +0100
Video: 00:01:46

Short animation featuring key moments of the Mars Sample Return campaign: from landing on Mars and securing the sample tubes to launching them off the surface and ferrying them back to Earth.

NASA and the European Space Agency (ESA) are developing plans for one of the most ambitious campaigns ever attempted in space: bringing the first samples of Mars material safely back to Earth for detailed study. European scientists are part of an international team giving advice on what samples to choose for return and the best analysis methods to use once they land on Earth.

The diverse set of scientifically curated samples being collected by NASA’s Mars Perseverance rover could help scientists answer the question of whether ancient life ever arose on the Red Planet. 

Bringing samples of Mars to Earth for future study would happen in several steps with multiple spacecraft, and in close collaboration between ESA and NASA.

The first step of the campaign began with the arrival of the Perseverance rover at Jezero Crater on 18 February 2021.

ESA will give robotic assistance with the Sample Transfer Arm. The 2.5 m robotic arm will pick up the tubes filled with precious material from Mars and transfer them to a rocket for a launch into martian orbit.

The European Earth Return Orbiter will then be the first interplanetary spacecraft to capture samples in orbit and make a return trip between Earth and Mars.

This strategic partnership with NASA will be the first to return samples from another planet. The samples to be returned are thought to be the best opportunity to reveal the early evolution of Mars, including the potential for life. 


Match ID: 129 Score: 9.29 source: www.esa.int age: 11 days
qualifiers: 9.29 nasa

Here’s All the Science Hitching a Ride on Artemis I
Wed, 16 Nov 2022 16:28:52 +0000


NASA’s Artemis I mission launched early in the predawn hours this morning, at 1:04 a.m. eastern time, carrying with it the hopes of a space program aiming now to land American astronauts back on the moon. The Orion spacecraft now on its way to the moon also carries with it a lot of CubeSat-size science. (As of press time, some satellites have even begun to tweet.)

And while the objective of Artemis I is to show that the launch system and spacecraft can make a trip to the moon and return safely to Earth, the mission is also a unique opportunity to send a whole spacecraft-load of science into deep space. In addition to the interior of the Orion capsule itself, there are enough nooks and crannies to handle a fair number of CubeSats, and NASA has packed as many experiments as it can into the mission. From radiation phantoms to solar sails to algae to a lunar surface payload, Artemis I has a lot going on.


Most of the variety of the science on Artemis I comes in the form of CubeSats, little satellites that are each the size of a large shoebox. The CubeSats are tucked snugly into berths inside the Orion stage adapter, which is the bit that connects the interim cryogenic propulsion stage to the ESA service module and Orion. Once the propulsion stage lifts Orion out of Earth orbit and pushes it toward the moon, the stage and adapter will separate from Orion, and the CubeSats will launch themselves.

Ten CubeSats rest inside the Orion stage adapter at NASA’s Kennedy Space Center. NASA KSC

While the CubeSats look identical when packed up, each one is totally unique in both hardware and software, with different destinations and mission objectives. There are 10 in total (three weren’t ready in time for launch, which is why there are a couple of empty slots in the image above).

The ten that made the ride are ArgoMoon, BioSentinel, CuSP, EQUULEUS, LunaH-Map, Lunar IceCube, LunIR, NEA Scout, OMOTENASHI, and Team Miles—missions ranging from solar-sail propulsion and lunar ice mapping to deep-space radiation biology.

While the CubeSats head off to do their own thing, inside the Orion capsule itself will be the temporary home of a trio of mannequins. The first, a male-bodied version provided by NASA, is named Commander Moonikin Campos, after NASA electrical engineer Arturo Campos, who was the guy who wrote the procedures that allowed the Apollo 13 command module to steal power from the lunar module’s batteries, one of many actions that saved the Apollo 13 crew.

Moonikin Campos prepares for placement in the Orion capsule. NASA

Moonikin Campos will spend the mission in the Orion commander’s seat, wearing an Orion crew survival system suit. Essentially itself a spacecraft, the suit is able to sustain its occupant for up to six days if necessary. Moonikin Campos’s job will be to pretend to be an astronaut, and sensors inside him will measure radiation, acceleration, and vibration to help NASA prepare to launch human astronauts in the next Artemis mission.

Helga and Zohar in place on the flight deck of the Orion spacecraft. NASA/DLR

Accompanying Moonikin Campos are two female-bodied mannequins, named Helga and Zohar, developed by the German Aerospace Center (DLR) along with the Israel Space Agency. These are more accurately called “anthropomorphic phantoms,” and their job is to provide a detailed recording of the radiation environment inside the capsule over the course of the mission. The phantoms are female because women have more radiation-sensitive tissue than men. Both Helga and Zohar have over 6,000 tiny radiation detectors placed throughout their artificial bodies, but Zohar will be wearing an AstroRad radiation protection vest to measure how effective it is.

NASA’s Biology Experiment-1 is transferred to the Orion team. NASA/KSC

The final science experiment to fly onboard Orion is NASA’s Biology Experiment-1. The experiment is really just seeing what time in deep space does to some specific kinds of biology, so all that has to happen is for Orion to successfully haul some packages of sample tubes around the moon and back. Samples include:

  • Plant seeds to characterize how spaceflight affects nutrient stores
  • Photosynthetic algae to identify genes that contribute to its survival in deep space
  • Aspergillus fungus to investigate radioprotective effects of melanin and DNA damage response
  • Yeast used as a model organism to identify genes that enable adaptations to conditions in both low Earth orbit and deep space

There is some concern that because of the extensive delays with the Artemis launch, the CubeSats have been sitting so long that their batteries may have run down. Some of the CubeSats could be recharged, but for others, recharging was judged to be so risky that they were left alone. Even for CubeSats that don’t start right up, though, it’s possible that after deployment, their solar panels will be able to get them going. But at this point, there’s still a lot of uncertainty, and the CubeSats’ earthbound science teams are now pinning their hopes on everything going well after launch.

For the rest of the science payloads, success mostly means Orion returning to Earth safe and sound, which will also be a success for the Artemis I mission as a whole. And assuming it does so, there will be a lot more science to come.


Match ID: 130 Score: 9.29 source: spectrum.ieee.org age: 13 days
qualifiers: 9.29 nasa

ISS Daily Summary Report – 11/16/2022
Wed, 16 Nov 2022 16:00:38 +0000
Payloads: Snowcone Cloud Edge Compute Demonstration (Snowcone): To meet yearly certification requirements and prepare for later science operations, the Snowcone science hardware was powered on, appropriate files were loaded, and the system checked out.  Snowcone demonstrates technology to screen astronaut images and identify those that may contain sensitive information not for public release.  The technology …
Match ID: 131 Score: 9.29 source: blogs.nasa.gov age: 13 days
qualifiers: 9.29 nasa

Artemis I launch
Wed, 16 Nov 2022 08:00:00 +0100
Video: 00:02:53

The Space Launch System (SLS) rocket with the Orion spacecraft aboard lifted off at 07:47 CET from NASA’s Kennedy Space Center in Florida, USA on 16 November 2022.

The most powerful rocket ever built sent NASA’s Orion spacecraft and ESA’s European Service Module (ESM) to a journey beyond the Moon and back. No crew will be on board Orion this time, and the spacecraft will be controlled by teams on Earth.

ESM provides for all astronauts’ basic needs, such as water, oxygen, nitrogen, temperature control, power and propulsion.

Much like a train engine pulls passenger carriages and supplies power, the European Service Module will take the Orion capsule to its destination and back.


Match ID: 132 Score: 9.29 source: www.esa.int age: 13 days
qualifiers: 9.29 nasa

Experts Available to Discuss NASA Webb Telescope Science Results
Tue, 15 Nov 2022 16:41 EST
Experts from NASA and other institutions will be available by teleconference at 11 a.m. EST on Thursday, Nov. 17, to answer media questions about early science results from the agency’s James Webb Space Telescope.
Match ID: 133 Score: 9.29 source: www.nasa.gov age: 14 days
qualifiers: 9.29 nasa

ISS Daily Summary Report – 11/15/2022
Tue, 15 Nov 2022 16:00:06 +0000
USOS Extravehicular Activity (EVA) #81 (1B IROSA Prep): Today, Josh Cassada (EV1) and Frank Rubio (EV2) performed the 1B ISS Roll Out Solar Array (IROSA) Prep EVA. Hatch opening occurred at 8:13 AM CT. The main goal of this EVA was to route cables on both the 3A and 1B Mod kits and install the …
Match ID: 134 Score: 9.29 source: blogs.nasa.gov age: 14 days
qualifiers: 9.29 nasa

GO for Artemis I
Tue, 15 Nov 2022 16:28:00 +0100
Image:

‘Twas the day before launch and all across the globe, people await liftoff for Artemis I with hope.

NASA’s Space Launch System (SLS) rocket and the Orion spacecraft with its European Service Module, is seen here on Launch Pad 39B at NASA's Kennedy Space Center in Florida, USA, on 12 November.

After much anticipation, NASA launch authorities have given the GO for the first opportunity for launch: tomorrow, 16 November with a two-hour launch window starting at 07:04 CET (06:04 GMT, 1:04 local time).

Artemis I is the first mission in a large programme to send astronauts around and on the Moon sustainably. This uncrewed first launch will see the Orion spacecraft travel to the Moon, enter an elongated orbit around our satellite and then return to Earth, powered by the European-built service module that supplies electricity, propulsion, fuel, water and air as well as keeping the spacecraft operating at the right temperature. 

The European Service Modules are made from components supplied by over 20 companies in ten ESA Member States and the USA. As the first European Service Module sits atop the SLS rocket on the launchpad, the second is only 8 km away, being integrated with the Orion crew capsule for the first crewed mission – Artemis II. The third and fourth European Service Modules – which will power astronauts to a Moon landing – are in production in Bremen, Germany. 

With a 16 November launch, the three-week Artemis I mission would end on 11 December with a splashdown in the Pacific Ocean. The European Service Module detaches from the Orion Crew Module before splashdown and burns up harmlessly in the atmosphere, its job complete after taking Orion to the Moon and back safely. 

Backup Artemis I launch dates include 19 November. Check ESA’s Orion blog for updates and more details. Watch the launch live on ESA Web TV from 15 Nov, 20:30 GMT (21:30 CET) when the rocket fuelling starts, and from 16 November 00:00 GMT/01:00 CET for the launch coverage. 


Match ID: 135 Score: 9.29 source: www.esa.int age: 14 days
qualifiers: 9.29 nasa

NASA Awards Commercial Small Satellite Data Acquisition Agreement
Mon, 14 Nov 2022 15:15 EST
NASA has selected GeoOptics Inc. of Pasadena, California, to provide commercial small constellation satellite data products that may augment NASA-collected data in the future.
Match ID: 136 Score: 9.29 source: www.nasa.gov age: 15 days
qualifiers: 9.29 nasa

ISS Daily Summary Report – 11/14/2022
Mon, 14 Nov 2022 16:00:44 +0000
Payloads: Cellbox-3: Two Cellbox-3 Spheroid Aggregation and Viability in Space (SHAPE) experiment containers (ECs) were removed from the Space Technology and Advanced Research Systems-1 (STaARS-1) experiment facility and placed into Minus Eighty-Degree Laboratory Freezer for ISS (MELFI) +4° C cold stowage. Cellbox-3 contains two experiments that investigate cell behavior in microgravity, forming 3D structures that …
Match ID: 137 Score: 9.29 source: blogs.nasa.gov age: 15 days
qualifiers: 9.29 nasa

NASA, ULA Successfully Launch Weather Satellite, Re-entry Tech Demo
Thu, 10 Nov 2022 19:28 EST
NASA successfully launched the third in a series of polar-orbiting weather satellites for the National Oceanic and Atmospheric Administration (NOAA) at 1:49 a.m. PST Thursday, as well as an agency technology demonstration on a United Launch Alliance Atlas V rocket from Vandenberg Space Force Base in California.
Match ID: 138 Score: 9.29 source: www.nasa.gov age: 19 days
qualifiers: 9.29 nasa

NASA to Brief Media on First Earth Water-Monitoring Satellite Mission
Thu, 10 Nov 2022 14:22 EST
NASA will host a virtual media briefing at 10:30 a.m. EST (7:30 a.m. PST) Nov. 14, at the agency’s Jet Propulsion Laboratory in Southern California, to discuss the upcoming launch of the Surface Water and Ocean Topography (SWOT) satellite.
Match ID: 139 Score: 9.29 source: www.nasa.gov age: 19 days
qualifiers: 9.29 nasa

ISS Daily Summary Report – 11/10/2022
Thu, 10 Nov 2022 16:00:21 +0000
Payloads: Antioxidant Protection: Experiment Containers were installed into Kubik-5. The PROtection MEdiated by antioxidant nanoTEchnOlogy against neuronal damage in space (PROMETEO) (Antioxidant Protection) investigation proposes the use of biocompatible and biodegradable polydopamine-based nanoparticles to provide antioxidant protection to neurons undergoing exposure to altered gravity and cosmic radiation. Cellbox-3: Cellbox-3 Experiment Containers were installed into the …
Match ID: 140 Score: 9.29 source: blogs.nasa.gov age: 19 days
qualifiers: 9.29 nasa

What is ESA’s Moonlight initiative?
Wed, 09 Nov 2022 16:00:00 +0100
Video: 00:03:21

Going to the Moon was the first step. Staying there is the next ambition.

ESA is a key partner in NASA’s Artemis programme, which aims to return people to the Moon by the end of the decade. Dozens of other international public and private missions are setting their sights on the lunar surface in the coming years.  

But to achieve a permanent and sustainable presence on the Moon, reliable and autonomous lunar communications and navigation services are required.

This is why ESA is working with its industrial partners on the Moonlight initiative, to become the first off-planet commercial telecoms and satellite navigation provider.

Following their launch, three or four satellites will be carried into lunar orbit by a space tug and deployed one by one, to form a constellation of lunar satellites. The number and specification of these satellites are currently being defined.

The constellation's orbits are optimised to give coverage to the lunar south pole, whose sustained sunlight and polar ice make it the focus of upcoming missions.

Moonlight will provide data capacities sufficient to serve these planned and future missions, with a navigation service that enables accurate real-time positioning for all lunar missions.


Match ID: 141 Score: 9.29 source: www.esa.int age: 20 days
qualifiers: 9.29 nasa

NASA Updates Coverage of JPSS-2 Weather Satellite, LOFTID Tech Demo
Mon, 07 Nov 2022 17:23 EST
NASA will provide coverage of the upcoming prelaunch and launch activities for the National Oceanic and Atmospheric Administration’s (NOAA’s) Joint Polar Satellite System-2 (JPSS-2) mission and NASA’s Low-Earth Orbit Flight Test of an Inflatable Decelerator (LOFTID) technology demonstration.
Match ID: 142 Score: 9.29 source: www.nasa.gov age: 22 days
qualifiers: 9.29 nasa

Full-Scale Drop Test of Urban Air Passenger Vehicle Model
Mon, 07 Nov 2022 11:24 EST
Full-Scale Drop Test of Urban Air Passenger Vehicle Model
Match ID: 143 Score: 9.29 source: www.nasa.gov age: 22 days
qualifiers: 9.29 nasa

NASA, USAID Partnership Strengthens Global Development
Fri, 04 Nov 2022 16:08 EDT
NASA and the U.S. Agency for International Development (USAID) signed an agreement Friday strengthening the collaboration between the two agencies, including efforts that advance the federal response to climate change.
Match ID: 144 Score: 9.29 source: www.nasa.gov age: 25 days
qualifiers: 9.29 nasa

NASA Coverage Set for NOAA’s Joint Polar Satellite System-2 Launch
Fri, 21 Oct 2022 17:31 EDT
NASA will provide coverage of the upcoming prelaunch and launch activities for the National Oceanic and Atmospheric Administration’s (NOAA) Joint Polar Satellite System-2 (JPSS-2) mission. JPSS-2 is the third satellite in the polar satellite series and is expected to capture data to improve weather forecasts, helping scientists predict and prepare
Match ID: 145 Score: 9.29 source: www.nasa.gov age: 39 days
qualifiers: 9.29 nasa

Who Will Fix Hubble and Chandra?
Thu, 20 Oct 2022 18:06:36 +0000


Elon Musk, step aside. You may be the richest man in the space business, but you’re not the first. Musk’s SpaceX corporation is a powerful force, with its weekly launches and visions of colonizing Mars. But if you want a broader view of how wealthy entrepreneurs have shaped space exploration, you might want to look at George Ellery Hale, James Lick, William McDonald, or—remember this name—John D. Hooker.

All this comes up now because SpaceX, joining forces with the billionaire Jared Isaacman, has made what sounds at first like a novel proposal to NASA: It would like to see if one of the company’s Dragon spacecraft can be sent to service the fabled, invaluable (and aging) Hubble Space Telescope, last repaired in 2009.

Private companies going to the rescue of one of NASA’s crown jewels? NASA’s mantra in recent years has been to let private enterprise handle the day-to-day of space operations—communications satellites, getting astronauts to the space station, and so forth—while pure science, the stuff that makes history but not necessarily money, remains the province of government. Might that model change?

“We’re working on crazy ideas all the time,” said Thomas Zurbuchen, NASA’s space science chief. "Frankly, that’s what we’re supposed to do.”

It’s only a six-month feasibility study for now; no money will change hands between business and NASA. But Isaacman, who made his fortune in payment-management software before turning to space, suggested that if a Hubble mission happens, it may lead to other things. “Alongside NASA, exploration is one of many objectives for the commercial space industry,” he said on a media teleconference. “And probably one of the greatest exploration assets of all time is the Hubble Space Telescope.”

So it’s possible that at some point in the future, there may be a SpaceX Dragon, perhaps with Isaacman as a crew member, setting out to grapple the Hubble, boost it into a higher orbit, maybe even replace some worn-out components to lengthen its life.

Aerospace companies say privately mounted repair sounds like a good idea. So good that they’ve proposed it already.

The Chandra X-ray telescope, as photographed by space-shuttle astronauts after they deployed it in July 1999. It is attached to a booster that moved it into an orbit 10,000 by 100,000 kilometers from Earth. NASA

Northrop Grumman, one of the United States’ largest aerospace contractors, has quietly suggested to NASA that it might service one of the Hubble’s sister telescopes, the Chandra X-ray Observatory. Chandra was launched into Earth orbit by the space shuttle Columbia in 1999 (Hubble was launched from the shuttle Discovery in 1990), and the two often complement each other, observing the same celestial phenomena at different wavelengths.

As in the case of the SpaceX/Hubble proposal, Northrop Grumman’s Chandra study is at an early stage. But there are a few major differences. For one, Chandra was assembled by TRW, a company that has since been bought by Northrop Grumman. And another company subsidiary, SpaceLogistics, has been sending what it calls Mission Extension Vehicles (MEVs) to service aging Intelsat communications satellites since 2020. Two of these robotic craft have launched so far. The MEVs act like space tugs, docking with their target satellites to provide them with attitude control and propulsion if their own systems are failing or running out of fuel. SpaceLogistics says it is developing a next-generation rescue craft, which it calls a Mission Robotic Vehicle, equipped with an articulated arm to add, relocate, or possibly repair components on orbit.

“We want to see if we can apply this to space-science missions,” says Jon Arenberg, Northrop Grumman’s chief mission architect for science and robotic exploration, who worked on Chandra and, later, the James Webb Space Telescope. He says a major issue for servicing is the exacting specifications needed for NASA’s major observatories; Chandra, for example, records the extremely short wavelengths of X-ray radiation (0.01–10 nanometers).

“We need to preserve the scientific integrity of the spacecraft,” he says. “That’s an absolute.”

But so far, the company says, a mission seems possible. NASA managers have listened receptively. And Northrop Grumman says a servicing mission could be flown for a fraction of the cost of a new telescope.

New telescopes need not be government projects. In fact, NASA’s chief economist, Alexander MacDonald, argues that almost all of America’s greatest observatories were privately funded until Cold War politics made government the major player in space exploration. That’s why this story began with names from the 19th and 20th centuries—Hale, Lick, and McDonald—to which we should add Charles Yerkes and, more recently, William Keck. These were arguably the Elon Musks of their times—entrepreneurs who made millions in oil, iron, or real estate before funding the United States’ largest telescopes. (Hale’s father manufactured elevators—highly profitable in the rebuilding after the Great Chicago Fire of 1871.) The most ambitious observatories, MacDonald calculated for his book The Long Space Age, were about as expensive back then as some of NASA’s modern planetary probes. None of them had very much to do with government.

To be sure, government will remain a major player in space for a long time. “NASA pays the cost, predominantly, of the development of new commercial crew vehicles, SpaceX’s Dragon being one,” MacDonald says. “And now that those capabilities exist, private individuals can also pay to utilize those capabilities.” Isaacman doesn’t have to build a spacecraft; he can hire one that SpaceX originally built for NASA.

“I think that creates a much more diverse and potentially interesting space-exploration future than we have been considering for some time,” MacDonald says.

So put these pieces together: Private enterprise has been a driver of space science since the 1800s. Private companies are already conducting on-orbit satellite rescues. NASA hasn’t said no to the idea of private missions to service its orbiting observatories.

And why does John D. Hooker’s name matter? In 1906, he agreed to put up US $45,000 (about $1.4 million today) to make the mirror for a 100-inch reflecting telescope at Mount Wilson, Calif. One astronomer made the Hooker Telescope famous by using it to determine that the universe, full of galaxies, was expanding.

The astronomer’s name was Edwin Hubble. We’ve come full circle.


Match ID: 146 Score: 9.29 source: spectrum.ieee.org age: 40 days
qualifiers: 9.29 nasa

El impacto de DART cambió el movimiento de un asteroide en el espacio
Tue, 11 Oct 2022 13:28 EDT
El análisis de los datos obtenidos en las últimas dos semanas por el equipo de investigación de la Prueba de redireccionamiento del asteroide doble (DART, por sus siglas en inglés) de la NASA muestra que el impacto cinético de la nave espacial contra su asteroide objetivo, Dimorphos, alteró con éxito la órbita del asteroide. Esto marca la primera v
Match ID: 147 Score: 9.29 source: www.nasa.gov age: 49 days
qualifiers: 9.29 nasa

NASA Confirms DART Mission Impact Changed Asteroid’s Motion in Space
Tue, 11 Oct 2022 13:12 EDT
Analysis of data obtained over the past two weeks by NASA’s Double Asteroid Redirection Test (DART) investigation team shows the spacecraft's kinetic impact with its target asteroid, Dimorphos, successfully altered the asteroid’s orbit. This marks humanity’s first time purposely changing the motion of a celestial object and the first full-scale dem
Match ID: 148 Score: 9.29 source: www.nasa.gov age: 49 days
qualifiers: 9.29 nasa

NASA to Provide Update on DART, World’s First Planetary Defense Test
Fri, 07 Oct 2022 15:34 EDT
NASA will host a media briefing at 2 p.m. EDT, Tuesday, Oct. 11, to discuss the agency’s Double Asteroid Redirection Test (DART) mission and its intentional collision with its target asteroid, Dimorphos.
Match ID: 149 Score: 9.29 source: www.nasa.gov age: 53 days
qualifiers: 9.29 nasa

How a Dual Curing Adhesive Works
Tue, 04 Oct 2022 17:50:36 +0000


This sponsored article is brought to you by Master Bond.

Master Bond UV22DC80-1 is a nanosilica-filled, dual-cure, epoxy-based system. Nanosilica-filled epoxy formulations are designed to further improve performance and processing properties.

The specific filler will play a crucial role in determining key parameters such as viscosity, flow, aging characteristics, strength, shrinkage, hardness, and exotherm. As a dual-curing system, UV22DC80-1 cures readily upon exposure to UV light, and will cross-link in shadowed-out areas when heat is added.


See Master Bond's UV22DC80-1 in Action


Dual cure systems are effective for rapidly fixturing parts with the UV portion of the cure, and then concluding the process by adding heat. Watch this video to see a dual cured epoxy in action.


This compound features exceptionally low shrinkage upon cure, outstanding dimensional stability, and resists abrasion. It is not oxygen inhibited. It withstands chemicals such as acids, bases, fuels and solvents. It is electrically insulative with a volume resistivity greater than 10^14 ohm-cm. It is optically clear, with a refractive index of 1.52.

The low viscosity ranges from 500 cps to 3500 cps. The temperature serviceability extends from -100°F to 300°F. UV22DC80-1 bonds well to metals, ceramics, glass, rubber, and many plastics. It passes NASA low outgassing certification and is used in high tech applications including aerospace, optical and opto-electronics.

Contact Master Bond to request a technical data sheet or discuss your application.


Match ID: 150 Score: 9.29 source: spectrum.ieee.org age: 56 days
qualifiers: 9.29 nasa

NASA’s DART Mission Hits Asteroid in First-Ever Planetary Defense Test
Mon, 26 Sep 2022 20:09 EDT
After 10 months flying in space, NASA’s Double Asteroid Redirection Test (DART) – the world’s first planetary defense technology demonstration – successfully impacted its asteroid target on Monday, the agency’s first attempt to move an asteroid in space.
Match ID: 151 Score: 9.29 source: www.nasa.gov age: 64 days
qualifiers: 9.29 nasa

Celebrate 'International Observe the Moon Night' with NASA
Fri, 23 Sep 2022 10:00 EDT
The public is invited to participate in NASA’s celebration of "International Observe the Moon Night" on Saturday, Oct. 1.
Match ID: 152 Score: 9.29 source: www.nasa.gov age: 67 days
qualifiers: 9.29 nasa

NASA to Host Briefing on Perseverance Mars Rover Mission Operations
Mon, 12 Sep 2022 09:49 EDT
NASA will host a briefing at 11:30 a.m. EDT (8:30 a.m. PDT) on Thursday, Sept. 15, at the agency’s Jet Propulsion Laboratory in Southern California to provide highlights from the first year and a half of the Perseverance rover’s exploration of Mars.
Match ID: 153 Score: 9.29 source: www.nasa.gov age: 78 days
qualifiers: 9.29 nasa

La NASA invita a la prensa a la primera prueba de defensa planetaria
Tue, 23 Aug 2022 11:47 EDT
La misión Prueba de redireccionamiento del asteroide doble (DART, por sus siglas en inglés) de la NASA, la primera en el mundo que pone a prueba una tecnología para defender a la Tierra de posibles peligros de asteroides o cometas, impactará con su objetivo, un asteroide que no supone ninguna amenaza para la Tierra, a las 7:14 pm EDT del lunes 26 d
Match ID: 154 Score: 9.29 source: www.nasa.gov age: 98 days
qualifiers: 9.29 nasa

NASA Invites Media to Witness World’s First Planetary Defense Test
Tue, 23 Aug 2022 11:38 EDT
NASA’s Double Asteroid Redirection Test (DART), the world’s first mission to test technology for defending Earth against potential asteroid or comet hazards, will impact its target asteroid—which poses no threat to Earth—at 7:14 p.m. EDT on Monday, Sept. 26.
Match ID: 155 Score: 9.29 source: www.nasa.gov age: 98 days
qualifiers: 9.29 nasa

NASA Identifies Candidate Regions for Landing Next Americans on Moon
Fri, 19 Aug 2022 11:07 EDT
As NASA prepares to send astronauts back to the Moon under Artemis, the agency has identified 13 candidate landing regions near the lunar South Pole.
Match ID: 156 Score: 9.29 source: www.nasa.gov age: 102 days
qualifiers: 9.29 nasa

NASA Administrator Statement on Agency Authorization Bill
Thu, 28 Jul 2022 15:22 EDT
NASA Administrator Bill Nelson released this statement Thursday following approval by the U.S. Congress for the NASA Authorization Act of 2022, which is part of the Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022.
Match ID: 157 Score: 9.29 source: www.nasa.gov age: 124 days
qualifiers: 9.29 nasa

NASA Administrator, Deputy to Attend Farnborough Airshow
Fri, 15 Jul 2022 16:13 EDT
NASA Administrator Bill Nelson and Deputy Administrator Pam Melroy will attend the Farnborough International Airshow in the United Kingdom on Monday, July 18.
Match ID: 158 Score: 9.29 source: www.nasa.gov age: 137 days
qualifiers: 9.29 nasa

NASA to Industry: Let’s Develop Flight Tech to Reduce Carbon Emissions
Wed, 29 Jun 2022 14:25 EDT
NASA announced Wednesday the agency is seeking partners to develop technologies needed to shape a new generation of lower-emission, single-aisle airliners that passengers could see in airports in the 2030s.
Match ID: 159 Score: 9.29 source: www.nasa.gov age: 153 days
qualifiers: 9.29 nasa

Pentagon Aims to Demo a Nuclear Spacecraft Within 5 Years
Thu, 09 Jun 2022 16:44:41 +0000


In the latest push for nuclear power in space, the Pentagon’s Defense Innovation Unit (DIU) awarded a contract in May to Seattle-based Ultra Safe Nuclear to advance its nuclear power and propulsion concepts. The company is making a soccer ball–size radioisotope battery it calls EmberCore. The DIU’s goal is to launch the technology into space for demonstration in 2027.

Ultra Safe Nuclear’s system is intended to be lightweight, scalable, and usable as both a propulsion source and a power source. It will be specifically designed to give small-to-medium-size military spacecraft the ability to maneuver nimbly in the space between Earth orbit and the moon. The DIU effort is part of the U.S. military’s recently announced plans to develop a surveillance network in cislunar space.

Besides speedy space maneuvers, the DIU wants to power sensors and communication systems without having to worry about solar panels pointing in the right direction or batteries having enough charge to work at night, says Adam Schilffarth, director of strategy at Ultra Safe Nuclear. “Right now, if you are trying to take radar imagery in Ukraine through cloudy skies,” he says, “current platforms can only take a very short image because they draw so much power.”

Radioisotope power sources are well suited for small, uncrewed spacecraft, adds Christopher Morrison, who is leading EmberCore’s development. Such sources rely on the radioactive decay of an element that produces energy, as opposed to nuclear fission, which involves splitting atomic nuclei in a controlled chain reaction to release energy. Heat produced by radioactive decay is converted into electricity using thermoelectric devices.

Radioisotopes have provided heat and electricity for spacecraft since 1961. The Curiosity and Perseverance rovers on Mars, and deep-space missions including Cassini, New Horizons, and Voyager all use radioisotope batteries that rely on the decay of plutonium-238, which is nonfissile—unlike plutonium-239, which is used in weapons and power reactors.

For EmberCore, Ultra Safe Nuclear has instead turned to medical isotopes such as cobalt-60 that are easier and cheaper to produce. The materials start out inert, and have to be charged with neutrons to become radioactive. The company encapsulates the material in a proprietary ceramic for safety.

Cobalt-60 has a half-life of five years (compared to plutonium-238’s 90 years), which is enough for the cislunar missions that the DOD and NASA are looking at, Morrison says. He says that EmberCore should be able to provide 10 times as much power as a plutonium-238 system, providing over 1 million kilowatt-hours of energy using just a few pounds of fuel. “This is a technology that is in many ways commercially viable and potentially more scalable than plutonium-238,” he says.
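
Back-of-the-envelope, the half-life sets how much of that energy is actually available over a mission. Here is a minimal Python sketch; the initial thermal power and mission length are assumed for illustration and are not figures from Ultra Safe Nuclear.

def power_watts(t_years, p0_watts, half_life_years):
    # Thermal power of a decaying radioisotope source after t_years
    return p0_watts * 0.5 ** (t_years / half_life_years)

def energy_kwh(p0_watts, half_life_years, mission_years, steps=10_000):
    # Numerically integrate power over the mission; 1 watt-year is about 8,766 Wh
    dt = mission_years / steps
    watt_years = sum(power_watts(i * dt, p0_watts, half_life_years) * dt for i in range(steps))
    return watt_years * 8766 / 1000

# Hypothetical 20-kW(thermal) cobalt-60 core flown for five years (one half-life)
print(round(energy_kwh(20_000, 5.0, 5.0)))   # roughly 632,000 kWh of thermal energy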

One downside of the medical isotopes is that they can produce high-energy X-rays in addition to heat. So Ultra Safe Nuclear wraps the fuel with a radiation-absorbing metal shield. But in the future, the EmberCore system could be designed for scientists to use the X-rays for experiments. “They buy this heater and get an X-ray source for free,” says Schilffarth. “We’ve talked with scientists who right now have to haul pieces of lunar or Martian regolith up to their sensor because the X-ray source is so weak. Now we’re talking about a spotlight that could shine down to do science from a distance.”

Ultra Safe Nuclear’s contract is one of two awarded by the DIU—which aims to speed up the deployment of commercial technology through military use—to develop nuclear power and propulsion for spacecraft. The other contract was awarded to Avalanche Energy, which is making a lunchbox-size fusion device it calls an Orbitron. The device will use electrostatic fields to trap high-speed ions in slowly changing orbits around a negatively charged cathode. Collisions between the ions can result in fusion reactions that produce energetic particles.

Both companies will use nuclear energy to power high-efficiency electric propulsion systems. Electric propulsion technologies such as ion thrusters, which use electromagnetic fields to accelerate ions and generate thrust, are more efficient than chemical rockets, which burn fuel. Solar panels typically power the ion thrusters that satellites use today to change their position and orientation. Schilffarth says that the higher power from EmberCore should enable a velocity change of 10 kilometers per second in orbit, more than today's electric propulsion systems can deliver.
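
The efficiency argument comes down to the rocket equation: the higher a thruster's exhaust velocity (specific impulse), the less propellant a given velocity change requires. A minimal sketch in Python; the specific-impulse values are typical textbook figures, not numbers from either company.

import math

def propellant_fraction(delta_v_ms, isp_s, g0=9.80665):
    # Tsiolkovsky rocket equation: fraction of initial mass that must be propellant
    return 1.0 - math.exp(-delta_v_ms / (isp_s * g0))

# A 10 km/s velocity change with a chemical thruster (~320 s) vs. an ion thruster (~3,000 s)
for label, isp in (("chemical", 320), ("electric", 3000)):
    print(label, round(propellant_fraction(10_000, isp), 3))
# chemical 0.959  -> nearly the whole spacecraft would have to be propellant
# electric 0.288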

Ultra Safe Nuclear is also one of three companies developing nuclear fission thermal propulsion systems for NASA and the Department of Energy. Meanwhile, the Defense Advanced Research Projects Agency (DARPA) is seeking companies to develop a fission-based nuclear thermal rocket engine, with demonstrations expected in 2026.

This article appears in the August 2022 print issue as “Spacecraft to Run on Radioactive Decay.”


Match ID: 160 Score: 9.29 source: spectrum.ieee.org age: 173 days
qualifiers: 9.29 nasa

NASA Supports Small Business Research to Power Future Exploration
Thu, 26 May 2022 14:01 EDT
NASA has selected hundreds of small businesses and dozens of research institutions to develop technology to help drive the future of space exploration, ranging from novel sensors and electronics to new types of software and cutting-edge materials.
Match ID: 161 Score: 9.29 source: www.nasa.gov age: 187 days
qualifiers: 9.29 nasa

A Helicopter Will Try to Catch a Rocket Booster in Midair
Mon, 18 Apr 2022 18:28:03 +0000


The longest journey begins with a single step, and that step gets expensive when you’re in the space business. Take, for example, the Electron booster made by Rocket Lab, a company with two launch pads on the New Zealand coast and another awaiting use in Virginia. Earth’s gravity is so stubborn that, by necessity, two-thirds of the rocket is its first stage—and it has historically ended up as trash on the ocean floor after less than 3 minutes of flight.

Making those boosters reusable—saving them from a saltwater grave, and therefore saving a lot of money—has been a goal of aerospace engineers since the early space age. Elon Musk’s SpaceX has famously been landing its Falcon 9 boosters on drone ships off the Florida coast—mind-bending to watch but very hard to pull off.

Rocket Lab says it has another way. Its next flight will carry 34 commercial satellites—and instead of being dropped in the Pacific, the spent first stage will be snared in midair by a helicopter as it descends by parachute. It will then be brought back to base, seared by the heat of reentry but inwardly intact, for possible refurbishment and reuse. The team, in its determination to minimize its odds of dropping the ball, so to speak, has pushed back the launch several times in order to wait out inclement weather. They reason that because this isn't a game of horseshoes, close is not good enough.

“It’s a very complex thing to do,” says Morgan Bailey of Rocket Lab. “You have to position the helicopter in exactly the right spot, you have to know exactly where the stage is going to be coming down, you have to be able to slow it enough,” she says. “We’ve practiced and practiced all of the individual puzzle pieces, and now it’s putting them together. It’s not a foregone conclusion that the first capture attempt will be a success.”

Still, people in the space business will be watching, since Rocket Lab has established a niche for itself as a viable space company. This will be its 26th Electron launch. The company says it has launched 112 satellites so far, many of them so-called smallsats that are relatively inexpensive to fly. “Right now, there are two companies taking payloads to orbit: SpaceX and Rocket Lab,” says Chad Anderson, CEO of Space Capital, a firm that funds space startups.

Here's the flight profile. The Electron is 18 meters tall; the bottom 12 meters are the first stage. For this mission it will lift off from New Zealand on its way to a sun-synchronous orbit 520 kilometers high. The first stage burns out after the first 70 km. Two minutes and 32 seconds into the flight, it drops off, following a long arc that in the past would have sent it crashing into the ocean, about 280 km downrange.

But Rocket Lab has now equipped its booster with heat shielding, protecting it as it falls tail-first at up to 8,300 kilometers per hour. Temperatures should reach 2,400 °C as the booster is slowed by the air around it.

At an altitude of 13 km, a small drogue parachute is deployed from the top end of the rocket stage, followed by a main chute at about 6 km, less than a minute later. The parachute slows the rocket substantially, so that it is soon descending at only about 36 km/h.
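
Those numbers imply a comfortable window for the catch. A back-of-the-envelope check in Python, ignoring the brief slowdown phase right after the main chute opens:

# Figures from the flight profile above
main_chute_altitude_km = 6     # main parachute deployment altitude
descent_speed_kmh = 36         # steady descent rate under the main chute

hang_time_min = main_chute_altitude_km / descent_speed_kmh * 60
print(f"about {hang_time_min:.0f} minutes for the helicopter to line up the grab")   # about 10 minutes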

An artist's conception shows the helicopter after catching the spent Electron rocket's first stage in midair. Rocket Lab

But even that would make for a hard splashdown—which is why a Sikorsky S-92 helicopter hovers over the landing zone, trailing a grappling hook on a long cable. The plan is for the helicopter to fly over the descending rocket and snag the parachute cables. The rocket never gets wet; the chopper secures it and either lowers it onto a ship or carries it back to land. Meanwhile—let’s not lose sight of the prime mission—the second stage of the rocket should reach orbit about 10 minutes after launch.

“You have to keep the booster out of the water,” says Anderson. “If they can do that, it’s a big deal.” Many space people will recall NASA’s solid rocket boosters, which helped launch the space shuttles and then parachuted into the Atlantic; towing them back to port and cleaning them up for reuse was slow and expensive. NASA’s giant SLS rocket uses the same boosters, but there are no plans to recover them.

So midair recovery is far better, though it’s not new. As long ago as 1960, the U.S. Air Force snagged a returning capsule from a mission called Discoverer 14. But that had nothing to do with economy; the Discoverers were actually Corona reconnaissance satellites, and they were sending back film of the Soviet Union—priceless for Cold War intelligence.

Rocket Lab tries to sound more playful about its missions: It gives them names like “A Data With Destiny” or “Without Mission a Beat.” This newest flight, with its booster-recovery attempt, is called “There and Back Again.”

A teenager tweeted to CEO Peter Beck: “It would have been cool if the mission was called ‘Catch Me If You Can.’”

“Oh…that’s good!” Beck replied. “Congratulations, you have just named the very next recovery mission.”

Update 22 April 2022: In a tweet, Rocket Lab announced that due to weather, the planned launch and recovery would be rescheduled for 27 April at the earliest.

This article appears in the July 2022 print issue as “Rocket Lab Catches Rocket Booster in Midair.”


Match ID: 162 Score: 9.29 source: spectrum.ieee.org age: 225 days
qualifiers: 9.29 nasa

NASA Awards Contracts for Aerospace Testing and Facilities Operations
Mon, 11 Apr 2022 17:44 EDT
NASA has awarded a contract to Jacobs Technology Inc. of Tullahoma, Tennessee, to provide the agency’s Ames Research Center in Silicon Valley, California with support services for ground-based aerospace test facilities at the center.
Match ID: 163 Score: 9.29 source: www.nasa.gov age: 232 days
qualifiers: 9.29 nasa

University Teams Fly High in Airport Designs of the Future
Thu, 07 Apr 2022 12:50 EDT
Concepts from eight teams participating in the 2022 Gateways to Blue Skies: Airports of Tomorrow (Blue Skies) Competition have been selected to present their design concepts to a panel of industry experts
Match ID: 164 Score: 9.29 source: www.nasa.gov age: 236 days
qualifiers: 9.29 nasa

NASA Administrator Statement on President’s FY 2023 Budget Request
Mon, 28 Mar 2022 11:51 EDT
The President’s fiscal year 2023 budget would allow NASA to sustain America’s global innovation leadership and keep NASA at the forefront of exploration and discovery by returning to the Moon with the Artemis program, among other efforts.
Match ID: 165 Score: 9.29 source: www.nasa.gov age: 246 days
qualifiers: 9.29 nasa

NASA to Host Virtual Future of Aviation Showcase
Wed, 23 Feb 2022 12:11 EST
Members of the media and public are invited to participate in NASA’s imaginAviation, a three-day virtual event that will focus on the future transformation of aviation that begins Tuesday, March 1.
Match ID: 166 Score: 9.29 source: www.nasa.gov age: 279 days
qualifiers: 9.29 nasa

NASA’s TESS Tunes into an All-sky ‘Symphony’ of Red Giant Stars
Wed, 04 Aug 2021 17:00 EDT
Using NASA’s Transiting Exoplanet Survey Satellite, astronomers have identified a vast collection of pulsating red giant stars that will help us explore our galactic neighborhood.
Match ID: 167 Score: 9.29 source: www.nasa.gov age: 482 days
qualifiers: 9.29 nasa

Planetary Sleuthing Finds Triple-Star World
Mon, 11 Jan 2021 13:40 EST
Years after its detection, astronomers have confirmed a planet called KOI-5Ab orbiting in a triple-star system with a skewed configuration.
Match ID: 168 Score: 9.29 source: www.nasa.gov age: 687 days
qualifiers: 9.29 nasa

NASA Awards SETI Institute Contract for Planetary Protection Support
Fri, 10 Jul 2020 12:04 EDT
NASA has awarded the SETI Institute in Mountain View, California, a contract to support all phases of current and future planetary protection missions to ensure compliance with planetary protection standards.
Match ID: 169 Score: 9.29 source: www.nasa.gov age: 872 days
qualifiers: 9.29 nasa

Imagining Another Earth
Thu, 28 May 2020 10:27 EDT
This artist's concept shows exoplanet Kepler-1649c orbiting around its host red dwarf star.
Match ID: 170 Score: 9.29 source: www.nasa.gov age: 915 days
qualifiers: 9.29 nasa

NASA’s TESS Enables Breakthrough Study of Perplexing Stellar Pulsations
Wed, 13 May 2020 11:00 EDT
Astronomers have detected elusive pulsation patterns in dozens of young, rapidly rotating stars thanks to data from NASA’s Transiting Exoplanet Survey Satellite (TESS).
Match ID: 171 Score: 9.29 source: www.nasa.gov age: 930 days
qualifiers: 9.29 nasa

AAS Names 29 NASA-Affiliated Legacy Fellows
Thu, 30 Apr 2020 09:00 EDT
Twenty-nine scientists working at or affiliated with NASA have been named Fellows of the American Astronomical Society (AAS), the major organization of professional astronomers in North America.
Match ID: 172 Score: 9.29 source: www.nasa.gov age: 943 days
qualifiers: 9.29 nasa

Kepler-1649c: Earth-Size, Habitable Zone Planet Hides in Plain Sight
Thu, 16 Apr 2020 02:13 EDT
This artist's illustration shows what Kepler-1649c could look like from its surface.
Match ID: 173 Score: 9.29 source: www.nasa.gov age: 957 days
qualifiers: 9.29 nasa

The Women Behind ENIAC
Mon, 21 Nov 2022 19:00:01 +0000


If you looked at the pictures of those working on the first programmable, general-purpose all-electronic computer, you would assume that J. Presper Eckert and John W. Mauchly were the only ones who had a hand in its development. Invented in 1945, the Electronic Numerical Integrator and Computer (ENIAC) was built to improve the accuracy of U.S. artillery during World War II. The two men and their team built the hardware. But hidden behind the scenes were six women—Jean Bartik, Kathleen Antonelli, Marlyn Meltzer, Betty Holberton, Frances Spence, and Ruth Teitelbaum—who programmed the computer to calculate artillery trajectories in seconds.

The U.S. Army recruited the women in 1942 to work as so-called human computers: mathematicians who did calculations using a mechanical desktop calculator.

For decades, the six women were largely unknown. But thanks to Kathy Kleiman, cofounder of ICANN (the Internet Corporation for Assigned Names and Numbers), the world is getting to know the ENIAC programmers’ contributions to computer science. This year Kleiman’s book Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer was published. It delves into the women’s lives and the pioneering work they did. The book follows an award-winning documentary, The Computers: The Remarkable Story of the ENIAC Programmers, which Kleiman helped produce. It premiered at the 2014 Seattle International Film Festival and won Best Documentary Short at the 2016 U.N. Association Film Festival.

Kleiman plans to give a presentation next year about the programmers as part of the IEEE Industry Hub Initiative’s Impact Speaker series. The initiative aims to introduce industry professionals and academics to IEEE and its offerings.

Planning for the event, which is scheduled to be held in Silicon Valley, is underway. Details are to be announced before the end of the year.

The Institute spoke with Kleiman, who teaches Internet technology and governance for lawyers at American University, in Washington, D.C., about her mission to publicize the programmers’ contributions. The interview has been condensed and edited for clarity.

Kathy Kleiman delves into the ENIAC programmers' lives and the pioneering work they did in her book Proving Ground: The Untold Story of the Six Women Who Programmed the World's First Modern Computer. Kathy Kleiman

The Institute: What inspired you to film the documentary?

Kathy Kleiman: The ENIAC was a secret project of the U.S. Army during World War II. It was the first general-purpose, programmable, all-electronic computer—the key to the development of our smartphones, laptops, and tablets today. The ENIAC was a highly experimental computer, with 18,000 vacuum tubes, and some of the leading technologists at the time didn’t think it would work, but it did.

Six months after the war ended, the Army decided to reveal the existence of ENIAC and heavily publicize it. To do so, in February 1946 the Army took a lot of beautiful, formal photos of the computer and the team of engineers that developed it. I found these pictures while researching women in computer science as an undergraduate at Harvard. At the time, I knew of only two women in computer science: Ada Lovelace and then U.S. Navy Capt. Grace Hopper. [Lovelace was the first computer programmer; Hopper co-developed COBOL, one of the earliest standardized computer languages.] But I was sure there were more women programmers throughout history, so I went looking for them and found the images taken of the ENIAC.

The pictures fascinated me because they had both men and women in them. Some of the photos had just women in front of the computer, but they weren’t named in any of the photos’ captions. I tracked them down after I found their identities, and four of six original ENIAC programmers responded. They were in their late 70s at the time, and over the course of many years they told me about their work during World War II and how they were recruited by the U.S. Army to be “human computers.”

Eckert and Mauchly promised the U.S. Army that the ENIAC could calculate artillery trajectories in seconds rather than the hours it took to do the calculations by hand. But after they built the 2.5-meter-tall by 24-meter-long computer, they couldn’t get it to work. Out of approximately 100 human computers working for the U.S. Army during World War II, six women were chosen to write a program for the computer to solve differential equations. It was hard because the program was complex, memory was very limited, and the direct programming interface that connected the programmers to the ENIAC was hard to use. But the women succeeded, and the trajectory program was a great success. Yet the contributions of Bartik, McNulty (later Antonelli), Meltzer, Snyder (later Holberton), Spence, and Teitelbaum were never recognized. Leading technologists and the public never knew of their work.

I was inspired by their story and wanted to share it. I raised funds, researched and recorded 20 hours of broadcast-quality oral histories with the ENIAC programmers—which eventually became the documentary. It allows others to see the women telling their story.

“If we open the doors to history, I think it would make it a lot easier to recruit the wonderful people we are trying to urge to enter engineering, computer science, and related fields.”

Why was the accomplishment of the six women important?

Kleiman: The ENIAC is considered by many to have launched the information age.

We generally think of women leaving the factory and farm jobs they held during World War II and giving them back to the men, but after ENIAC was completed, the six women continued to work for the U.S. Army. They helped world-class mathematicians program the ENIAC to complete “hundred-year problems” [problems that would take 100 years to solve by hand]. They also helped teach the next generation of ENIAC programmers, and some went on to create the foundations of modern programming.

What influenced you to continue telling the ENIAC programmers’ story in your book?

Kleiman: After my documentary premiered at the film festival, young women from tech companies who were in the audience came up to me to share why they were excited to learn the programmers’ story. They were excited to learn that women were an integral part of the history of early computing programming, and were inspired by their stories. Young men also came up to me and shared stories of their grandmothers and great-aunts who programmed computers in the 1960s and ’70s and inspired them to explore careers in computer science.

I met more women and men like the ones in Seattle all over the world, so it seemed like a good idea to tell the full story along with its historical context and background information about the lives of the ENIAC programmers, specifically what happened to them after the computer was completed.

What did you find most rewarding about sharing their story?

Kleiman: It was wonderful and rewarding to get to know the ENIAC programmers. They were incredible, wonderful, warm, brilliant, and exceptional people. Talking to the people who created the programming was inspiring and helped me to see that I could work at the cutting edge too. I entered Internet law as one of the first attorneys in the field because of them.

What I enjoy most is that the women’s experiences inspire young people today just as they inspired me when I was an undergraduate.

Clockwise from top left: Jean Bartik, Kathleen Antonelli, Betty Holberton, Ruth Teitelbaum, Marlyn Meltzer, Frances Spence. Credits, clockwise from top left: The Bartik Family; Bill Mauchly; Priscilla Holberton; Teitelbaum Family; Meltzer Family; Spence Family

Is it important to highlight the contributions made throughout history by women in STEM?

Kleiman: [Actor] Geena Davis founded the Geena Davis Institute on Gender in Media, which works collaboratively with the entertainment industry to dramatically increase the presence of female characters in media. It’s based on the philosophy of “you can’t be what you can’t see.”

That philosophy is both right and wrong. I think you can be what you can’t see, and certainly every pioneer who has ever broken a racial, ethnic, religion, or gender barrier has done so. However, it’s certainly much easier to enter a field if there are role models who look like you. To that end, many computer scientists today are trying to diversify the field. Yet I know from my work in Internet policy and my recent travels across the country for my book tour that many students still feel locked out because of old stereotypes in computing and engineering. By sharing strong stories of pioneers in the fields who are women and people of color, I hope we can open the doors to computing and engineering. I hope history and herstory that is shared make it much easier to recruit young people to join engineering, computer science, and related fields.

Are you planning on writing more books or producing another documentary?

Kleiman: I would like to continue the story of the ENIAC programmers and write about what happened to them after the war ended. I hope that my next book will delve into the 1950s and uncover more about the history of the Universal Automatic Computer, the first modern commercial computer series, and the diverse group of people who built and programmed it.


Match ID: 174 Score: 3.57 source: spectrum.ieee.org age: 8 days
qualifiers: 3.57 mit

The Transistor of 2047: Expert Predictions
Mon, 21 Nov 2022 16:00:01 +0000


The 100th anniversary of the invention of the transistor will happen in 2047. What will transistors be like then? Will they even be the critical computing element they are today? IEEE Spectrum asked experts from around the world for their predictions.


What will transistors be like in 2047?

Expect transistors to be even more varied than they are now, says one expert. Just as processors have evolved from CPUs to include GPUs, network processors, AI accelerators, and other specialized computing chips, transistors will evolve to fit a variety of purposes. “Device technology will become application domain–specific in the same way that computing architecture has become application domain–specific,” says H.-S. Philip Wong, an IEEE Fellow, professor of electrical engineering at Stanford University, and former vice president of corporate research at TSMC.

Despite the variety, the fundamental operating principle—the field effect that switches transistors on and off—will likely remain the same, suggests Suman Datta, an IEEE Fellow, professor of electrical and computer engineering at Georgia Tech, and director of the multi-university nanotech research center ASCENT. This device will likely have minimum critical dimensions of 1 nanometer or less, enabling device densities of 10 trillion per square centimeter, says Tsu-Jae King Liu, an IEEE Fellow, dean of the college of engineering at the University of California, Berkeley, and a member of Intel’s board of directors.

"It is safe to assume that the transistor or switch architectures of 2047 have already been demonstrated on a lab scale"—Sri Samavedam

Experts seem to agree that the transistor of 2047 will need new materials and probably a stacked or 3D architecture, expanding on the planned complementary field-effect transistor (CFET, or 3D-stacked CMOS). [For more on the CFET, see "Taking Moore's Law to New Heights."] And the transistor channel, which now runs parallel to the plane of the silicon, may need to become vertical in order to continue to increase in density, says Datta.

AMD senior fellow Richard Schultz suggests that the main aim in developing these new devices will be power. “The focus will be on reducing power and the need for advanced cooling solutions,” he says. “Significant focus on devices that work at lower voltages is required.”

Will transistors still be the heart of most computing in 25 years?

It’s hard to imagine a world where computing is not done with transistors, but, of course, vacuum tubes were once the digital switch of choice. Startup funding for quantum computing, which does not directly rely on transistors, reached US $1.4 billion in 2021, according to McKinsey & Co.

But advances in quantum computing won’t happen fast enough to challenge the transistor by 2047, experts in electron devices say. “Transistors will remain the most important computing element,” says Sayeef Salahuddin, an IEEE Fellow and professor of electrical engineering and computer science at the University of California, Berkeley. “Currently, even with an ideal quantum computer, the potential areas of application seem to be rather limited compared to classical computers.”

Sri Samavedam, senior vice president of CMOS technologies at the European chip R&D center Imec, agrees. “Transistors will still be very important computing elements for a majority of the general-purpose compute applications,” says Samavedam. “One cannot ignore the efficiencies realized from decades of continuous optimization of transistors.”

Has the transistor of 2047 already been invented?

Twenty-five years is a long time, but in the world of semiconductor R&D, it’s not that long. “In this industry, it usually takes about 20 years from [demonstrating a concept] to introduction into manufacturing,” says Samavedam. “It is safe to assume that the transistor or switch architectures of 2047 have already been demonstrated on a lab scale” even if the materials involved won’t be exactly the same. King Liu, who demonstrated the modern FinFET about 25 years ago with colleagues at Berkeley, agrees.

But the idea that the transistor of 2047 is already sitting in a lab somewhere isn’t universally shared. Salahuddin, for one, doesn’t think it’s been invented yet. “But just like the FinFET in the 1990s, it is possible to make a reasonable prediction for the geometric structure” of future transistors, he says.

AMD’s Schultz says you can glimpse this structure in proposed 3D-stacked devices made of 2D semiconductors or carbon-based semiconductors. “Device materials that have not yet been invented could also be in scope in this time frame,” he adds.

Will silicon still be the active part of most transistors in 2047?

Experts say that the heart of most devices, the transistor channel region, will still be silicon, or possibly silicon-germanium—which is already making inroads—or germanium. But in 2047 many chips may use semiconductors that are considered exotic today. These could include oxide semiconductors like indium gallium zinc oxide; 2D semiconductors, such as the metal dichalcogenide tungsten disulfide; and one-dimensional semiconductors, such as carbon nanotubes. Or even “others yet to be invented,” says Imec’s Samavedam.

"Transistors will remain the most important computing element"—Sayeef Salahuddin

Silicon-based chips may be integrated in the same package with chips that rely on newer materials, just as processor makers are today integrating chips using different silicon manufacturing technologies into the same package, notes IEEE Fellow Gabriel Loh, a senior fellow at AMD.

Which semiconductor material is at the heart of the device may not even be the central issue in 2047. “The choice of channel material will essentially be dictated by which material is the most compatible with many other materials that form other parts of the device,” says Salahuddin. And we know a lot about integrating materials with silicon.

In 2047, where will transistors be common where they are not found today?

Everywhere. No, seriously. Experts really do expect some amount of intelligence and sensing to creep into every aspect of our lives. That means devices will be attached to our bodies and implanted inside them; embedded in all kinds of infrastructure, including roads, walls, and houses; woven into our clothing; stuck to our food; swaying in the breeze in grain fields; watching just about every step in every supply chain; and doing many other things in places nobody has thought of yet.

Transistors will be “everywhere that needs computation, command and control, communications, data collection, storage and analysis, intelligence, sensing and actuation, interaction with humans, or an entrance portal to the virtual and mixed reality world,” sums up Stanford’s Wong.

This article appears in the December 2022 print issue as “The Transistor of 2047.”


Match ID: 175 Score: 3.57 source: spectrum.ieee.org age: 8 days
qualifiers: 3.57 mit

How the Graphical User Interface Was Invented
Sun, 20 Nov 2022 20:00:00 +0000


Mice, windows, icons, and menus: these are the ingredients of computer interfaces designed to be easy to grasp, simplicity itself to use, and straightforward to describe. The mouse is a pointer. Windows divide up the screen. Icons symbolize application programs and data. Menus list choices of action.

But the development of today’s graphical user interface was anything but simple. It took some 30 years of effort by engineers and computer scientists in universities, government laboratories, and corporate research groups, piggybacking on each other’s work, trying new ideas, repeating each other’s mistakes.


This article was first published as “Of Mice and menus: designing the user-friendly interface.” It appeared in the September 1989 issue of IEEE Spectrum. A PDF version is available on IEEE Xplore. The photographs and diagrams appeared in the original print version.


Throughout the 1970s and early 1980s, many of the early concepts for windows, menus, icons, and mice were arduously researched at Xerox Corp.’s Palo Alto Research Center (PARC), Palo Alto, Calif. In 1973, PARC developed the prototype Alto, the first of two computers that would prove seminal in this area. More than 1200 Altos were built and tested. From the Alto’s concepts, starting in 1975, Xerox’s System Development Department then developed the Star and introduced it in 1981—the first such user-friendly machine sold to the public.

In 1984, the low-cost Macintosh from Apple Computer Inc., Cupertino, Calif., brought the friendly interface to thousands of personal computer users. During the next five years, the price of RAM chips fell enough to accommodate the huge memory demands of bit-mapped graphics, and the Mac was followed by dozens of similar interfaces for PCs and workstations of all kinds. By now, application programmers are becoming familiar with the idea of manipulating graphic objects.

The Mac’s success during the 1980s spurred Apple Computer to pursue legal action over ownership of many features of the graphical user interface. Suits now being litigated could assign those innovations not to the designers and their companies, but to those who first filed for legal protection on them.

The GUI started with Sketchpad


The grandfather of the graphical user interface was Sketchpad [see photograph]. Massachusetts Institute of Technology student Ivan E. Sutherland built it in 1962 as a Ph.D. thesis at MIT’s Lincoln Laboratory in Lexington, Mass. Sketchpad users could not only draw points, line segments, and circular arcs on a cathode ray tube (CRT) with a light pen—they could also assign constraints to, and relationships among, whatever they drew.

Arcs could have a specified diameter, lines could be horizontal or vertical, and figures could be built up from combinations of elements and shapes. Figures could be moved, copied, shrunk, expanded, and rotated, with their constraints (shown as onscreen icons) dynamically preserved. At a time when a CRT monitor was a novelty in itself, the idea that users could interactively create objects by drawing on a computer was revolutionary.


Man sitting in front of a round cathode ray display with a white square and triangle on a black background

Moreover, to zoom in on objects, Sutherland wrote the first window-drawing program, which required him to come up with the first clipping algorithm. Clipping is a software routine that calculates which part of a graphic object is to be displayed and displays only that part on the screen. The program must calculate where a line is to be drawn, compare that position to the coordinates of the window in use, and prevent the display of any line segment whose coordinates fall outside the window.
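
Sutherland's original routine is not reproduced here, but the idea survives in textbook form. Below is a minimal Python sketch of clipping a line segment to a rectangular window, using the parametric (Liang-Barsky) formulation for brevity rather than historical fidelity:

def clip_line(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    # Return the visible portion of the segment, or None if it lies entirely outside
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    # Each (p, q) pair tests one window edge: left, right, bottom, top
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0), (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:                 # segment parallel to this edge
            if q < 0:
                return None        # ...and entirely outside it
        else:
            t = q / p
            if p < 0:              # entering the window at this edge
                t0 = max(t0, t)
            else:                  # leaving the window at this edge
                t1 = min(t1, t)
            if t0 > t1:
                return None        # no visible portion remains
    return (x0 + t0 * dx, y0 + t0 * dy, x0 + t1 * dx, y0 + t1 * dy)

# Clip a horizontal line against a 100-by-100 window anchored at the origin
print(clip_line(-50, 50, 150, 50, 0, 0, 100, 100))   # (0.0, 50.0, 100.0, 50.0)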

Though films of Sketchpad in operation were widely shown in the computer research community, Sutherland says today that there was little immediate fallout from the project. Running on MIT’s TX-2 mainframe, it demanded too much computing power to be practical for individual use. Many other engineers, however, see Sketchpad’s design and algorithms as a primary influence on an entire generation of research into user interfaces.

The origin of the computer mouse


The light pens that interactive computer systems of the 1950s and 1960s—including Sketchpad—used to select areas of the screen had drawbacks. To do the pointing, the user’s arm had to be lifted up from the table, and after a while that got tiring. Picking up the pen required fumbling around on the table or, if it had a holder, taking the time after making a selection to put it back.

Sensing an object with a light pen was straightforward: the computer displayed spots of light on the screen and interrogated the pen as to whether it sensed a spot, so the program always knew just what was being displayed. Locating the position of the pen on the screen required more sophisticated techniques—like displaying a cross pattern of nine points on the screen, then moving the cross until it centered on the light pen.

In 1964, Douglas Engelbart, a research project leader at SRI International in Menlo Park, Calif., tested all the commercially available pointing devices, from the still-popular light pen to a joystick and a Graphicon (a curve-tracing device that used a pen mounted on the arm of a potentiometer). But he felt the selection failed to cover the full spectrum of possible pointing devices, and somehow he should fill in the blanks.

Then he remembered a 1940s college class he had taken that covered the use of a planimeter to calculate area. (A planimeter has two arms, with a wheel on each. The wheels can roll only along their axes; when one of them rolls, the other must slide.)

If a potentiometer were attached to each wheel to monitor its rotation, he thought, a planimeter could be used as a pointing device. Engelbart explained his roughly sketched idea to engineer William English, who with the help of the SRI machine shop built what they quickly dubbed “the mouse.”



This first mouse was big because it used single-turn potentiometers: one rotation of the wheels had to be scaled to move a cursor from one side of the screen to the other. But it was simple to interface with the computer: the processor just read frequent samples of the potentiometer positioning signals through analog-to-digital converters.
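
In modern terms, reading that mouse amounted to sampling two voltages and scaling them to screen coordinates. A minimal Python sketch with assumed values (a 10-bit converter and a 1,024-pixel-wide display, neither of which matches the 1964 hardware):

ADC_MAX = 1023        # assumed 10-bit analog-to-digital converter
SCREEN_WIDTH = 1024   # assumed display width in pixels

def cursor_x(adc_sample):
    # One full turn of the wheel (the full potentiometer range) sweeps the cursor across the screen
    return adc_sample * (SCREEN_WIDTH - 1) // ADC_MAX

print(cursor_x(0), cursor_x(512), cursor_x(1023))   # 0 512 1023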

The cursor moved by the mouse was easy to locate, since readings from the potentiometer determined the position of the cursor on the screen-unlike the light pen. But programmers for later windowing systems found that the software necessary to determine which object the mouse had selected was more complex than that for the light pen: they had to compare the mouse’s position with that of all the objects displayed onscreen.
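
That extra work is a hit test: walk the display list and ask which object, if any, contains the cursor. A minimal Python sketch (the data layout is invented for illustration):

def pick(objects, mx, my):
    # Return the topmost object whose bounding box contains the mouse position
    for obj in reversed(objects):            # assume the last object drawn is on top
        x, y, w, h = obj["bbox"]
        if x <= mx < x + w and y <= my < y + h:
            return obj
    return None

shapes = [{"name": "square", "bbox": (10, 10, 50, 50)},
          {"name": "label", "bbox": (30, 30, 80, 20)}]
print(pick(shapes, 35, 35)["name"])   # label (drawn last, so it is on top)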

The computer mouse gets redesigned—and redesigned again

Engelbart’s group at SRI ran controlled experiments with mice and other pointing devices, and the mouse won hands down. People adapted to it quickly, it was easy to grab, and it stayed where they put it. Still, Engelbart wanted to tinker with it. After experimenting, his group had concluded that the proper ratio of cursor movement to mouse movement was about 2:1, but he wanted to try varying that ratio—decreasing it at slow speeds and raising it at fast speeds—to improve user control of fine movements and speed up larger movements. Some modern mouse-control software incorporates this idea, including that of the Macintosh.
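
That idea is what we now call pointer acceleration: scale the mouse counts by a gain that depends on how fast the mouse is moving. A minimal Python sketch; apart from the 2:1 baseline mentioned above, the thresholds and gains are illustrative, not Engelbart's measured values.

def cursor_delta(mouse_counts, speed_counts_per_ms):
    # Speed-dependent gain around the roughly 2:1 baseline described above
    if speed_counts_per_ms < 1:      # fine positioning: damp the motion
        gain = 1.0
    elif speed_counts_per_ms > 5:    # sweeping across the screen: amplify it
        gain = 4.0
    else:
        gain = 2.0
    return int(mouse_counts * gain)

print(cursor_delta(3, 0.5), cursor_delta(3, 2), cursor_delta(3, 8))   # 3 6 12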

The mouse, still experimental at this stage, did not change until 1971. Several members of Engelbart’s group had moved to the newly established PARC, where many other researchers had seen the SRI mouse and the test report. They decided there was no need to repeat the tests; any experimental systems they designed would use mice.

Said English, “This was my second chance to build a mouse; it was obvious that it should be a lot smaller, and that it should be digital.” Chuck Thacker, then a member of the research staff, advised PARC to hire inventor Jack Hawley to build it.

Hawley decided the mouse should use shaft encoders, which measure position by a series of pulses, instead of potentiometers (both were covered in Engelbart’s 1970 patent), to eliminate the expensive analog-to-digital converters. The basic principle, of one wheel rolling while the other slid, was licensed from SRI.
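
Shaft encoders of this kind are usually read as quadrature: two pulse tracks offset by a quarter cycle, so the order of the transitions gives the direction of rotation. Whether Hawley's design used exactly this scheme is not stated here, so treat the Python sketch below as illustrative only.

# Map (previous, current) 2-bit samples of the A/B tracks to a step of +1 or -1
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    # Accumulate wheel position from successive samples of the two tracks
    pos = 0
    for prev, cur in zip(samples, samples[1:]):
        pos += TRANSITIONS.get((prev, cur), 0)
    return pos

print(decode([0b00, 0b01, 0b11, 0b10, 0b00]))   # one full forward cycle: +4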

The ball mouse was the “easiest patent I ever got. It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”
—Ron Rider

In 1972, the mouse changed again. Ron Rider, now vice president of systems architecture at PARC but then a new arrival, said he was using the wheel mouse while an engineer made excuses for its asymmetric operation (one wheel dragging while one turned). “I suggested that they turn a trackball upside down, make it small, and use it as a mouse instead,” Rider told IEEE Spectrum. This device came to be known as the ball mouse. “Easiest patent I ever got,” Rider said. “It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”

Defining terms


Bit map

The pixel pattern that makes up the graphic display on a computer screen.

Clicking

The motion of pressing a mouse button to initiate an action by software; some actions require double-clicking.

Graphical user interface (GUI)

The combination of windowing displays, menus, icons, and a mouse that is increasingly used on personal computers and workstations.

Icon

An onscreen drawing that represents programs or data.

Menu

A list of command options currently available to the computer user; some stay onscreen, while pop-up or pull-down menus are requested by the user.

Mouse

A device whose motion across a desktop or other surface causes an on-screen cursor to move commensurately; today’s mice move on a ball and have one, two, or three buttons.

Raster display

A cathode ray tube on which images are displayed as patterns of dots, scanned onto the screen sequentially in a predetermined pattern of lines.

Vector display

A cathode ray tube whose gun scans lines, or vectors, onto the screen phosphor.

Window

An area of a computer display, usually one of several, in which a particular program is executing.


In the PARC ball mouse design, the weight of the mouse is transferred to the ball by a swivel device and on one or two casters at the end of the mouse farthest from the wire “tail.” A prototype was built by Xerox’s Electronics Division in El Segundo, Calif., then redesigned by Hawley. The rolling ball turned two perpendicular shafts, with a drum on the end of each that was coated with alternating stripes of conductive and nonconductive material. As the drum turned, the stripes transmitted electrical impulses through metal wipers.

When Apple Computer decided in 1979 to design a mouse for its Lisa computer, the design mutated yet again. Instead of a metal ball held against the substrate by a swivel, Apple used a rubber ball whose traction depended on the friction of the rubber and the weight of the ball itself. Simple pads on the bottom of the case carried the weight, and optical scanners detected the motion of the internal wheels. The device had loose tolerances and few moving parts, so that it cost perhaps a quarter as much to build as previous ball mice.

How the computer mouse gained and lost buttons

The first, wooden, SRI mouse had only one button, to test the concept. The plastic batch of SRI mice had three side-by-side buttons—all there was room for, Engelbart said. The first PARC mouse had a column of three buttons—again, because that best fit the mechanical design. Today, the Apple mouse has one button, while the rest have two or three. The issue is no longer space—a standard 6-by-10-cm mouse could now have dozens of buttons—but human factors, and the experts have strong opinions.

Said English, now director of internationalization at Sun Microsystems Inc., Mountain View, Calif.: “Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.” He sees two buttons as the minimum because two functions are basic to selecting an object: pointing to its start, then extending the motion to the end of the object.

William Verplank, a human factors specialist in the group that tested the graphical interface at Xerox from 1978 into the early 1980s, concurred. He told Spectrum that with three buttons, Alto users forgot which button did what. The group’s tests showed that one button was also confusing, because it required actions such as double-clicking to select and then open a file.

“We have agonizing videos of naive users struggling” with these problems, Verplank said. They concluded that for most users, two buttons (as used on the Star) are optimal, if a button means the same thing in every application. English experimented with one-button mice at PARC before concluding they were a bad idea.


“Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.”
—William English


A computer monitor with a chunky white keyboard sitting on a desk

But many interface designers dislike multiple buttons, saying that double-clicking a single button to select an item is easier than remembering which button points and which extends. Larry Tesler, formerly a computer scientist at PARC, brought the one-button mouse to Apple, where he is now vice president of advanced technology. The company’s rationale is that to attract novices to its computers one button was as simple as it could get.

More than two million one-button Apple mice are now in use. The Xerox and Microsoft two-button mice are less common than either Apple’s ubiquitous one-button model or the three-button mice found on technical workstations. Dozens of companies manufacture mice today; most are slightly smaller than a pack of cigarettes, with minor variations in shape.

How windows first came to the computer screen


In 1962, Sketchpad could split its screen horizontally into two independent sections. One section could, for example, give a close-up view of the object in the other section. Researchers call Sketchpad the first example of tiled windows, which are laid out side by side. They differ from overlapping windows, which can be stacked on top of each other, or overlaid, obscuring all or part of the lower layers.

Windows were an obvious means of adding functionality to a small screen. In 1969, Engelbart equipped NLS (as the On-Line System he invented at SRI during the 1960s was known, to distinguish it from the Off-Line System known as FLS) with windows. They split the screen into multiple parts horizontally or vertically, and introduced cross-window editing with a mouse.

By 1972, led by researcher Alan Kay, the Smalltalk programming language group at Xerox PARC had implemented their version of windows. They were working with far different technology from Sutherland or Engelbart: by deciding that their images had to be displayed as dots on the screen, they led a move from vector to raster displays, to make it simple to map the assigned memory location of each of those spots. This was the bit map invented at PARC, and made viable during the 1980s by continual performance improvements in processor logic and memory speed.

Experimenting with bit-map manipulation, Smalltalk researcher Dan Ingalls developed the bit-block transfer procedure, known as BitBlt. The BitBlt software enabled application programs to mix and manipulate rectangular arrays of pixel values in on-screen or off-screen memory, or between the two, combining the pixel values and storing the result in the appropriate bit-map location.
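
In outline, BitBlt is a loop over a rectangle that combines source and destination pixels with some operation. A minimal Python sketch, with plain lists standing in for bit-mapped memory and a far smaller operation set than the real BitBlt offered:

def bitblt(dst, dx, dy, src, sx, sy, w, h, op=lambda d, s: s):
    # Copy (or combine) a w-by-h block of pixels from src into dst
    for row in range(h):
        for col in range(w):
            d = dst[dy + row][dx + col]
            s = src[sy + row][sx + col]
            dst[dy + row][dx + col] = op(d, s)

# XOR-combining is the classic trick for a cursor that erases itself when drawn twice
screen = [[0] * 8 for _ in range(8)]
cursor = [[1, 1], [1, 1]]
bitblt(screen, 3, 3, cursor, 0, 0, 2, 2, op=lambda d, s: d ^ s)
print(screen[3][3], screen[4][4])   # 1 1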

BitBlt made it much easier to write programs to scroll a window (move an image through it), resize (enlarge or contract) it, and drag windows (move them from one location to another on screen). It led Kay to create overlapping windows. They were soon implemented by the Smalltalk group, but made clipping harder.

Some researchers question whether overlapping windows offer more benefits than tiled on the grounds that screens with overlapping windows become so messy the user gets lost.

In a tiling system, explained researcher Peter Deutsch, who worked with the Smalltalk group, the clipping borders are simply horizontal or vertical lines from one screen border to another, and software just tracks the location of those lines. But overlapping windows may appear anywhere on the screen, randomly obscuring bits and pieces of other windows, so that quite irregular regions must be clipped. Thus application software must constantly track which portions of their windows remain visible.
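
One common way to do that tracking (a sketch of the general technique, not the Smalltalk implementation) is to keep each window's visible area as a list of rectangles and subtract every window stacked above it:

def subtract(rect, hole):
    # Return the parts of rect not covered by hole, as up to four rectangles
    rx0, ry0, rx1, ry1 = rect
    hx0, hy0, hx1, hy1 = hole
    hx0, hy0 = max(hx0, rx0), max(hy0, ry0)   # clip the hole to the rectangle
    hx1, hy1 = min(hx1, rx1), min(hy1, ry1)
    if hx0 >= hx1 or hy0 >= hy1:
        return [rect]                          # no overlap: rect is untouched
    pieces = []
    if ry0 < hy0: pieces.append((rx0, ry0, rx1, hy0))   # strip above the hole
    if hy1 < ry1: pieces.append((rx0, hy1, rx1, ry1))   # strip below
    if rx0 < hx0: pieces.append((rx0, hy0, hx0, hy1))   # strip to the left
    if hx1 < rx1: pieces.append((hx1, hy0, rx1, hy1))   # strip to the right
    return pieces

def visible_region(window, windows_above):
    # A window's visible area once everything stacked above it is removed
    region = [window]
    for w in windows_above:
        region = [piece for r in region for piece in subtract(r, w)]
    return region

print(visible_region((0, 0, 100, 100), [(50, 50, 150, 150)]))
# [(0, 0, 100, 50), (0, 50, 50, 100)]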

Some researchers still question whether overlapping windows offer more benefits than tiled, at least above a certain screen size, on the grounds that screens with overlapping windows become so messy the user gets lost. Others argue that overlapping windows more closely match users’ work patterns, since no one arranges the papers on their physical desktop in neat horizontal and vertical rows. Among software engineers, however, overlapping windows seem to have won for the user interface world.

So has the cut-and-paste editing model that Larry Tesler developed, first for the Gypsy text editor he wrote at PARC and later for Apple. Charles Irby—who worked on Xerox’s windows and is now vice president of development at Metaphor Computer Systems Inc., Mountain View, Calif.—noted, however, that cut-and-paste worked better for pure text-editing than for moving graphic objects from one application to another.

The origin of the computer menu bar


Menus—functions continuously listed onscreen that could be called into action with key combinations—were commonly used in defense computing by the 1960s. But it was only with the advent of BitBlt and windows that menus could be made to appear as needed and to disappear after use. Combined with a pointing device to indicate a user’s selection, they are now an integral part of the user-friendly interface: users no longer need to refer to manuals or memorize available options.

Instead, the choices can be called up at a moment’s notice whenever needed. And menu design has evolved. Some new systems use nested hierarchies of menus; others offer different menu versions—one with the most commonly used commands for novices, another with all available commands for the experienced user.

Among the first to test menus on demand was PARC researcher William Newman, in a program called Markup. Hard on his heels, the Smalltalk group built in pop-up menus that appeared on screen at the cursor site when the user pressed one of the mouse buttons.

Implementation was on the whole straightforward, recalled Deutsch. The one exception was determining whether the menu or the application should keep track of the information temporarily obscured by the menu. In the Smalltalk 76 version, the popup menu saved and restored the screen bits it overwrote. But in today’s multitasking systems, that would not work, because an application may change those bits without the menu’s knowledge. Such systems add another layer to the operating system: a display manager that tracks what is written where.
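
A minimal sketch of that Smalltalk-76 strategy, again in illustrative Python rather than Smalltalk (the helper names are invented): copy aside the bits the menu will cover before drawing it, and copy them back when it is dismissed. As noted above, this breaks down once another program can repaint those bits while the menu is up, which is why later systems interpose a display manager.

```python
# Illustrative sketch: a pop-up menu that saves and restores the screen bits
# it overwrites, in the style described for Smalltalk-76 (names are invented).

def copy_rect(bitmap, x, y, w, h):
    """Return a copy of a w-by-h rectangle of the bitmap."""
    return [row[x:x + w] for row in bitmap[y:y + h]]

def paste_rect(bitmap, block, x, y):
    """Write a previously saved block back into the bitmap."""
    for dy, row in enumerate(block):
        bitmap[y + dy][x:x + len(row)] = row

def popup_menu(screen, items, x, y):
    w, h = 10, len(items)                  # pretend each item is one pixel row
    saved = copy_rect(screen, x, y, w, h)  # remember what the menu will cover
    for dy in range(h):                    # "draw" the menu (here: fill with 1s)
        screen[y + dy][x:x + w] = [1] * w
    choice = items[0]                      # ... user picks an item ...
    paste_rect(screen, saved, x, y)        # restore the bits the menu hid
    return choice

screen = [[0] * 40 for _ in range(20)]
print(popup_menu(screen, ["cut", "copy", "paste"], x=5, y=3))
```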

The production Xerox Star, in 1981, featured a further advance: a menu bar, essentially a row of words indicating available menus that could be popped up for each window. Human factors engineer Verplank recalled that the bar was at first located at the bottom of its window. But the Star team found users were more likely to associate a bar with the window below it, so it was moved to the top of its window.

Apple simplified things in its Lisa and Macintosh with a single bar placed at the top of the screen. This menu bar relates only to the window in use: the menus could be ‘‘pulled down” from the bar, to appear below it. Designer William D. Atkinson received a patent (assigned to Apple Computer) in August 1984 for this innovation.

One new addition that most user interface pioneers consider an advantage is the tear-off menu, which the user can move to a convenient spot on the screen and “pin” there, always visible for ready access.

Many windowing interfaces now offer command-key or keyboard alternatives for many commands as well. This return to the earliest of user interfaces—key combinations—neatly supplements menus, providing both ease of use for novices and the less experienced, and speed for those who can type faster than they can point to a menu and click on a selection.

How the computer “icon” got its name


Sketchpad had on-screen graphic objects that represented constraints (for example, a rule that lines be the same length), and the Flex machine built in 1967 at the University of Utah by students Alan Kay and Ed Cheadle had squares that represented programs and data (like today’s computer “folders”). Early work on icons was also done by Bell Northern Research, Ottawa, Canada, stemming from efforts to replace the recently legislated bilingual signs with graphic symbols.

But the concept of the computer “icon” was not formalized until 1975. David Canfield Smith, a computer science graduate student at Stanford University in California, began work on his Ph.D. thesis in 1973. His advisor was PARC’s Kay, who suggested that he look at using the graphics power of the experimental Alto not just to display text, but rather to help people program.

Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents: a Russian icon of a saint is holy and is to be venerated. Smith’s computer icons contained all the properties of the programs and data represented, and therefore could be linked or acted on as if they were the real thing.

After receiving his Ph.D. in 1975, Smith joined Xerox in 1976 to work on Star development. The first thing he did, he said, was to recast his concept of icons in office terms. “I looked around my office and saw papers, folders, file cabinets, a telephone, and bookshelves, and it was an easy translation to icons,” he said.

Xerox researchers developed, tested, and revised icons for the Star interface for three years before the first version was complete. At first they attempted to make the icons look like a detailed photographic rendering of the object, recalled Irby, who worked on testing and refining the Xerox windows. Trading off label space, legibility, and the number of icons that fit on the screen, they decided to constrain icons to a 1-inch (2.5-centimeter) square of 64 by 64 pixels, or 512 eight-bit bytes.

Then, Verplank recalls, they discovered that because of a background pattern based on two-pixel dots, the right-hand side of the icons appeared jagged. So they increased the width of the icons to 65 pixels, despite an outcry from programmers who liked the neat 16-bit breakdown. But the increase stuck, Verplank said, because they had already decided to store 72 bits per side to allow for white space around each icon.
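
As a rough check on those figures (reading “72 bits per side” as a 72-by-72-bit storage cell is an assumption here):

\[
64 \times 64 = 4096 \text{ bits} = 512 \text{ bytes}, \qquad 72 \times 72 = 5184 \text{ bits} = 648 \text{ bytes},
\]

the larger cell leaving room for the 65-pixel-wide image plus a border of white space.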

After settling on a size for the icons, the Star developers tested four sets developed by two graphic designers and two software engineers. They discovered, for example, that resizing could cause problems. They shrank the icon for a person—a head and shoulders—in order to use several of them to represent a group, only to hear one test subject say the screen resolution made the reduced icon look like a cross above a tombstone. Computer graphics artist Norm Cox, now of Cox & Hall, Dallas, Texas, was finally hired to redesign the icons.

Icon designers today still wrestle with the need to make icons adaptable to the many different system configurations offered by computer makers. Artist Karen Elliott, who has designed icons for Microsoft, Apple, Hewlett-Packard Co., and others, noted that on different systems an icon may be displayed in different colors, several resolutions, and a variety of gray shades, and it may also be inverted (light and dark areas reversed).

In the past few years, another concern has been added to icon designers’ tasks: internationalization. Icons designed in the United States often lack space for translations into languages other than English. Elliott therefore tries to leave space for both the longer words and the vertical orientation of some languages.


The main rule is to make icons simple, clean, and easily recognizable. Discarded objects are placed in a trash can on the Macintosh. On the NeXT Computer System, from NeXT Inc., Palo Alto, Calif.—the company formed by Apple cofounder Steven Jobs after he left Apple—they are dumped into a Black Hole. Elliott sees NeXT’s black hole as one of the best icons ever designed: “It is distinct; its roundness stands out from the other, square icons, and this is important on a crowded display. It fits my image of information being sucked away, and it makes it clear that dumping something is serious.”

English disagrees vehemently. The black hole “is fundamentally wrong,” he said. “You can dig paper out of a wastebasket, but you can’t dig it out of a black hole.” Another critic called the black hole familiar only to “computer nerds who read mostly science fiction and comics,” not to general users.

With the introduction of the Xerox Star in June 1981, the graphical user interface, as it is known today, arrived on the market. Though not a commercial triumph, the Star generated great interest among computer users, as the Alto before it had within the universe of computer designers.

Even before the Star was introduced, Jobs, then still at Apple, had visited Xerox PARC in November 1979 and asked the Smalltalk researchers dozens of questions about the Alto’s internal design. He later recruited Larry Tesler from Xerox to design the user interface of the Apple Lisa.

With the Lisa and then the Macintosh, introduced in January 1983 and January 1984 respectively, the graphical user interface reached the low-cost, high-volume computer market.

At almost $10,000, the Lisa was deemed too expensive by buyers for the office market. But aided by prizewinning advertising and its lower price, the Macintosh took the world by storm. Early Macs had only 128K bytes of RAM, which made them slow to respond because it was too little memory for heavy graphic manipulation. Also, the time needed for programmers to learn its Toolbox of graphics routines delayed application packages until well into 1985. But the Mac’s ease of use was indisputable, and it generated interest that spilled over into the MS-DOS world of IBM PCs and clones, as well as Unix-based workstations.

Who owns the graphical user interface?


The widespread acceptance of such interfaces, however, has led to bitter lawsuits to establish exactly who owns what. So far, none of several litigious companies has definitively established that it owns the software that implements windows, icons, or early versions of menus. But the suits continue.

Virtually all the companies that make and sell either wheel or ball mice paid license fees to SRI or to Xerox for their patents. Engelbart recalled that SRI patent attorneys inspected all the early work on the interface, but understood only hardware. After looking at developments like the implementation of windows, they told him that none of it was patentable.

At Xerox, the Star development team proposed 12 patents having to do with the user interface. The company’s patent committee rejected all but two on hardware—one on BitBlt, the other on the Star architecture. At the time, Charles Irby said, it was a good decision. Patenting required full disclosure, and no precedents then existed for winning software patent suits.


The most recent and most publicized suit was filed in March 1988, by Apple, against both Microsoft and Hewlett-Packard Co., Palo Alto, Calif. Apple alleges that HP’s New Wave interface, requiring version 2.03 of Microsoft’s Windows program, embodies the copyrighted “audio visual computer display” of the Macintosh without permission; that the displays of Windows 2.03 are illegal copies of the Mac’s audiovisual works; and that Windows 2.03 also exceeds the rights granted in a November 1985 agreement in which Microsoft acknowledged that the displays in Windows 1.0 were derivatives of those in Apple’s Lisa and Mac.

In March 1989, U.S. District Judge William W. Schwarzer ruled Microsoft had exceeded the bounds of its license in creating Windows 2.03. Then in July 1989 Schwarzer ruled that all but 11 of the 260 items that Apple cited in its suit were, in fact, acceptable under the 1985 agreement. The larger issue—whether Apple’s copyrights are valid, and whether Microsoft and HP infringed on them—will not now be examined until 1990.

Among those 11 are overlapping windows and movable icons. According to Pamela Samuelson, a noted software intellectual property expert and visiting professor at Emory University Law School, Atlanta, Ga., many experts would regard both as functional features of an interface that cannot be copyrighted, rather than “expressions” of an idea protectable by copyright.

But lawyers for Apple—and for other companies that have filed lawsuits to protect the “look and feel” of their screen displays—maintain that if such protection is not granted, companies will lose the economic incentive to market technological innovations. How is Apple to protect its investment in developing the Lisa and Macintosh, they argue, if it cannot license its innovations to companies that want to take advantage of them?

If the Apple-Microsoft case does go to trial on the copyright issues, Samuelson said, the court may have to consider whether Apple can assert copyright protection for overlapping windows, an interface feature on which patents have also been granted. In April 1989, for example, Quarterdeck Office Systems Inc., Santa Monica, Calif., received a patent for a multiple windowing system in its Desq system software, introduced in 1984.

Adding fuel to the legal fire, Xerox said in May 1989 it would ask for license fees from companies that use the graphical user interface. But it is unclear whether Xerox has an adequate claim to either copyright or patent protection for the early graphical interface work done at PARC. Xerox did obtain design patents on later icons, noted human factors engineer Verplank. Meanwhile, both Metaphor and Sun Microsystems have negotiated licenses with Xerox for their own interfaces.

To Probe Further

The September 1989 IEEE Computer contains an article, “The Xerox ‘Star’: A Retrospective,” by Jeff Johnson et al., covering development of the Star. “Designing the Star User Interface,” [PDF] by David C. Smith et al., appeared in the April 1982 issue of Byte.

The Sept. 12, 1989, PC Magazine contains six articles on graphical user interfaces for personal computers and workstations. The July 1989 Byte includes “A Guide to [Graphical User Interfaces],” by Frank Hayes and Nick Baran, which describes 12 current interfaces for workstations and personal computers. “The Interface of Tomorrow, Today,” by Howard Rheingold, in the July 10, 1989, InfoWorld does the same. “The interface that launched a thousand imitations,” by Richard Rawles, in the March 21, 1989, MacWeek covers the Macintosh interface.

The human factors of user interface design are discussed in The Psychology of Everyday Things, by Donald A. Norman (Basic Books Inc., New York, 1988). The January 1989 IEEE Software contains several articles on methods, techniques, and tools for designing and implementing graphical interfaces. The Way Things Work, by David Macaulay (Houghton Mifflin Co., Boston, 1988), contains a detailed drawing of a ball mouse.

The October 1985 IEEE Spectrum covered Xerox PARC’s history in “Research at Xerox PARC: a founder’s assessment,” by George Pake (pp. 54-61) and “Inside the PARC: the ‘information architects,’” by Tekla Perry and Paul Wallich (pp. 62-75).

William Atkinson received patent no. 4,464,652 for the pulldown menu system on Aug. 8, 1984, and assigned it to Apple. Gary Pope received patent no. 4,823,108, for an improved system for displaying images in “windows” on a computer screen, on April 18, 1989, and assigned it to Quarterdeck Office Systems.

The wheel mouse patent, no. 3,541,541, “X-Y position indicator for a display system,” was issued to Douglas Engelbart on Nov. 17, 1970, and assigned to SRI International. The ball mouse patent, no. 3,835,464, was issued to Ronald Rider on Sept. 10, 1974, and assigned to Xerox.

The first selection device tests to include a mouse are covered in “Display-Selection Techniques for Text Manipulation,” by William English, Douglas Engelbart, and Melvyn Berman, in IEEE Transactions on Human Factors in Electronics, March 1967.

Sketchpad: A Man-Machine Graphical Communication System, by Ivan E. Sutherland (Garland Publishing Inc., New York City and London, 1980), reprints his 1963 Ph.D. thesis.










Match ID: 176 Score: 3.57 source: spectrum.ieee.org age: 9 days
qualifiers: 3.57 mit

How the First Transistor Worked
Sun, 20 Nov 2022 16:00:00 +0000


The vacuum-tube triode wasn’t quite 20 years old when physicists began trying to create its successor, and the stakes were huge. Not only had the triode made long-distance telephony and movie sound possible, it was driving the entire enterprise of commercial radio, an industry worth more than a billion dollars in 1929. But vacuum tubes were power-hungry and fragile. If a more rugged, reliable, and efficient alternative to the triode could be found, the rewards would be immense.

The goal was a three-terminal device made out of semiconductors that would accept a low-current signal into an input terminal and use it to control the flow of a larger current flowing between two other terminals, thereby amplifying the original signal. The underlying principle of such a device would be something called the field effect—the ability of electric fields to modulate the electrical conductivity of semiconductor materials. The field effect was already well known in those days, thanks to diodes and related research on semiconductors.


A photo of a cutaway of a point-contact transistor. In this cutaway photo, two thin conductors are visible; these connect to the points that make contact with a tiny slab of germanium. One of these points is the emitter and the other is the collector. A third contact, the base, is attached to the reverse side of the germanium. AT&T ARCHIVES AND HISTORY CENTER

But building such a device had proved an insurmountable challenge to some of the world’s top physicists for more than two decades. Patents for transistor-like devices had been filed starting in 1925, but the first recorded instance of a working transistor was the legendary point-contact device built at AT&T Bell Telephone Laboratories in the fall of 1947.

Though the point-contact transistor was the most important invention of the 20th century, there exists, surprisingly, no clear, complete, and authoritative account of how the thing actually worked. Modern, more robust junction and planar transistors rely on the physics in the bulk of a semiconductor, rather than the surface effects exploited in the first transistor. And relatively little attention has been paid to this gap in scholarship.

It was an ungainly looking assemblage of germanium, plastic, and gold foil, all topped by a squiggly spring. Its inventors were a soft-spoken Midwestern theoretician, John Bardeen, and a voluble and “somewhat volatile” experimentalist, Walter Brattain. Both were working under William Shockley, a relationship that would later prove contentious. In November 1947, Bardeen and Brattain were stymied by a simple problem. In the germanium semiconductor they were using, a surface layer of electrons seemed to be blocking an applied electric field, preventing it from penetrating the semiconductor and modulating the flow of current. No modulation, no signal amplification.


Sometime late in 1947 they hit on a solution. It featured two pieces of barely separated gold foil gently pushed by that squiggly spring into the surface of a small slab of germanium.

Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate. Indeed, the current edition of that bible of undergraduate EEs, The Art of Electronics by Horowitz and Hill, makes no mention of the point-contact transistor at all, glossing over its existence by erroneously stating that the junction transistor was a “Nobel Prize-winning invention in 1947.” But the transistor that was invented in 1947 was the point-contact; the junction transistor was invented by Shockley in 1948.

So it seems appropriate somehow that the most comprehensive explanation of the point-contact transistor is contained within John Bardeen’s lecture for that Nobel Prize, in 1956. Even so, reading it gives you the sense that a few fine details probably eluded even the inventors themselves. “A lot of people were confused by the point-contact transistor,” says Thomas Misa, former director of the Charles Babbage Institute for the History of Science and Technology, at the University of Minnesota.

A year after Bardeen’s lecture, R. D. Middlebrook, a professor of electrical engineering at Caltech who would go on to do pioneering work in power electronics, wrote: “Because of the three-dimensional nature of the device, theoretical analysis is difficult and the internal operation is, in fact, not yet completely understood.”

Nevertheless, and with the benefit of 75 years of semiconductor theory, here we go. The point-contact transistor was built around a thumb-size slab of n-type germanium, which has an excess of negatively charged electrons. This slab was treated to produce a very thin surface layer that was p-type, meaning it had an excess of positive charges. These positive charges are known as holes. They are actually localized deficiencies of electrons that move among the atoms of the semiconductor very much as a real particle would. An electrically grounded electrode was attached to the bottom of this slab, creating the base of the transistor. The two strips of gold foil touching the surface formed two more electrodes, known as the emitter and the collector.

That’s the setup. In operation, a small positive voltage—just a fraction of a volt—is applied to the emitter, while a much larger negative voltage—4 to 40 volts—is applied to the collector, all with reference to the grounded base. The interface between the p-type layer and the n-type slab created a junction just like the one found in a diode: Essentially, the junction is a barrier that allows current to flow easily in only one direction, toward lower voltage. So current could flow from the positive emitter across the barrier, while no current could flow across that barrier into the collector.

A photo of rows of people sitting in front of microscopes and stacks of transistors. The Western Electric Type-2 point-contact transistor was the first transistor to be manufactured in large quantities, in 1951, at Western Electric’s plant in Allentown, Pa. By 1960, when this photo was taken, the plant had switched to producing junction transistors.AT&T ARCHIVES AND HISTORY CENTER

Now, let’s look at what happens down among the atoms. First, we’ll disconnect the collector and see what happens around the emitter without it. The emitter injects positive charges—holes—into the p-type layer, and they begin moving toward the base. But they don’t make a beeline toward it. The thin layer forces them to spread out laterally for some distance before passing through the barrier into the n-type slab. Think about slowly pouring a small amount of fine powder onto the surface of water. The powder eventually sinks, but first it spreads out in a rough circle.

Now we connect the collector. Even though it can’t draw current by itself through the barrier of the p-n junction, its large negative voltage and pointed shape do result in a concentrated electric field that penetrates the germanium. Because the collector is so close to the emitter, and is also negatively charged, it begins sucking up many of the holes that are spreading out from the emitter. This charge flow results in a concentration of holes near the p-n barrier underneath the collector. This concentration effectively lowers the “height” of the barrier that would otherwise prevent current from flowing between the collector and the base. With the barrier lowered, current starts flowing from the base into the collector—much more current than what the emitter is putting into the transistor.

The amount of current depends on the height of the barrier. Small decreases or increases in the emitter’s voltage cause the barrier to fluctuate up and down, respectively. Thus very small changes in the emitter current control very large changes at the collector, so voilà! Amplification. (EEs will notice that the functions of base and emitter are reversed compared with those in later transistors, where the base, not the emitter, controls the response of the transistor.)

Ungainly and fragile though it was, it was a semiconductor amplifier, and its progeny would change the world. And its inventors knew it. The fateful day was 16 December 1947, when Brattain hit on the idea of using a plastic triangle belted by a strip of gold foil, with that tiny slit separating the emitter and collector contacts. This configuration gave reliable power gain, and the duo knew then that they had succeeded. In his carpool home that night, Brattain told his companions he’d just done “the most important experiment that I’d ever do in my life” and swore them to secrecy. The taciturn Bardeen, too, couldn’t resist sharing the news. As his wife, Jane, prepared dinner that night, he reportedly said, simply, “We discovered something today.” With their children scampering around the kitchen, she responded, “That’s nice, dear.”

It was a transistor, at last, but it was pretty rickety. The inventors later hit on the idea of electrically forming the collector by passing large currents through it during the transistor’s manufacturing. This technique enabled them to get somewhat larger current flows that weren’t so tightly confined within the surface layer. The electrical forming was a bit hit-or-miss, though. “They would just throw out the ones that didn’t work,” Misa notes.

Nevertheless, point-contact transistors went into production at many companies, under license to AT&T, and, in 1951, at AT&T’s own manufacturing arm, Western Electric. They were used in hearing aids, oscillators, telephone-routing gear, in an experimental TV receiver built at RCA, and in the Tradic, the first airborne digital computer, among other systems. In fact, point-contact transistors remained in production until 1966, in part due to their superior speed compared with the alternatives.

The Bell Labs group wasn’t alone in its successful pursuit of a transistor. In Aulnay-sous-Bois, a suburb northeast of Paris, two German physicists, Herbert Mataré and Heinrich Welker, were also trying to build a three-terminal semiconductor amplifier. Working for a French subsidiary of Westinghouse, they were following up on very intriguing observations Mataré had made while developing germanium and silicon rectifiers for the German military in 1944. The two succeeded in creating a reliable point-contact transistor in June 1948.

They were astounded, a week or so later, when Bell Labs finally revealed the news of its own transistor, at a press conference on 30 June 1948. Though they were developed completely independently, and in secret, the two devices were more or less identical.

Here the story of the transistor takes a weird turn, breathtaking in its brilliance and also disturbing in its details. Bardeen’s and Brattain’s boss, William Shockley, was furious that his name was not included with Bardeen’s and Brattain’s on the original patent application for the transistor. He was convinced that Bardeen and Brattain had merely spun his theories about using fields in semiconductors into their working device, and had failed to give him sufficient credit. Yet in 1945, Shockley had built a transistor based on those very theories, and it hadn’t worked.

A photo of a man in a jacket placing a transistor in a device. In 1953, RCA engineer Gerald Herzog led a team that designed and built the first "all-transistor" television (although, yes, it had a cathode-ray tube). The team used point-contact transistors produced by RCA under a license from Bell Labs. TRANSISTOR MUSEUM JERRY HERZOG ORAL HISTORY

At the end of December, barely two weeks after the initial success of the point-contact transistor, Shockley traveled to Chicago for the annual meeting of the American Physical Society. On New Year’s Eve, holed up in his hotel room and fueled by a potent mix of jealousy and indignation, he began designing a transistor of his own. In three days he scribbled some 30 pages of notes. By the end of the month, he had the basic design for what would become known as the bipolar junction transistor, or BJT, which would eventually supersede the point-contact transistor and reign as the dominant transistor until the late 1970s.

A photo of a group of transistors With insights gleaned from the Bell Labs work, RCA began developing its own point-contact transistors in 1948. The group included the seven shown here—four of which were used in RCA's experimental, 22-transistor television set built in 1953. These four were the TA153 [top row, second from left], the TA165 [top, far right], the TA156 [bottom row, middle] and the TA172 [bottom, right].TRANSISTOR MUSEUM JONATHAN HOPPE COLLECTION

The BJT was based on Shockley’s conviction that charges could, and should, flow through the bulk semiconductors rather than through a thin layer on their surface. The device consisted of three semiconductor layers, like a sandwich: an emitter, a base in the middle, and a collector. They were alternately doped, so there were two versions: n-type/p-type/n-type, called “NPN,” and p-type/n-type/p-type, called “PNP.”

The BJT relies on essentially the same principles as the point-contact, but it uses two p-n junctions instead of one. When used as an amplifier, a positive voltage applied to the base allows a small current to flow between it and the emitter, which in turn controls a large current between the collector and emitter.

Consider an NPN device. The base is p-type, so it has excess holes. But it is very thin and lightly doped, so there are relatively few holes. A tiny fraction of the electrons flowing in combine with these holes and are removed from circulation, while the vast majority (more than 97 percent) of electrons keep flowing through the thin base and into the collector, setting up a strong current flow.

But those few electrons that do combine with holes must be drained from the base in order to maintain the p-type nature of the base and the strong flow of current through it. That removal of the “trapped” electrons is accomplished by a relatively small flow of current through the base. That trickle of current enables the much stronger flow of current into the collector, and then out of the collector and into the collector circuit. So, in effect, the small base current is controlling the larger collector circuit.
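
A back-of-the-envelope illustration of why that trickle of base current controls so much collector current: if a fraction α of the carriers injected by the emitter reaches the collector, only the remaining 1 − α has to be supplied through the base, so the current gain is

\[
\beta = \frac{I_C}{I_B} = \frac{\alpha}{1-\alpha}, \qquad \alpha = 0.97 \;\Rightarrow\; \beta \approx 32.
\]

The “more than 97 percent” figure above thus already implies a collector current a few dozen times the base current.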

Electric fields come into play, but they do not modulate the current flow, which the early theoreticians thought would have to happen for such a device to function. Here’s the gist: Both of the p-n junctions in a BJT are straddled by depletion regions, in which electrons and holes combine and there are relatively few mobile charge carriers. Voltage applied across the junctions sets up electric fields at each, which push charges across those regions. These fields enable electrons to flow all the way from the emitter, across the base, and into the collector.

In the BJT, “the applied electric fields affect the carrier density, but because that effect is exponential, it only takes a little bit to create a lot of diffusion current,” explains Ioannis “John” Kymissis, chair of the department of electrical engineering at Columbia University.
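
The exponential effect Kymissis describes is the textbook diode-law dependence of injected carrier density, and hence diffusion current, on the base-emitter voltage (a standard approximation, not a figure specific to the devices discussed here):

\[
I_C \approx I_S\, e^{V_{BE}/V_T}, \qquad V_T = \frac{kT}{q} \approx 26 \text{ mV at room temperature},
\]

so a change of only about 60 millivolts in \(V_{BE}\) moves the collector current by roughly a factor of 10.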

An illustration of a point-contact transistor. The very first transistors were a type known as point contact, because they relied on metal contacts touching the surface of a semiconductor. They ramped up output current—labeled “Collector current” in the top diagram—by using an applied voltage to overcome a barrier to charge flow. Small changes to the input, or “emitter,” current modulate this barrier, thus controlling the output current.

An illustration of a Bipolar Junction Transistor The bipolar junction transistor accomplishes amplification using much the same principles but with two semiconductor interfaces, or junctions, rather than one. As with the point-contact transistor, an applied voltage overcomes a barrier and enables current flow that is modulated by a smaller input current. In particular, the semiconductor junctions are straddled by depletion regions, across which the charge carriers diffuse under the influence of an electric field.Chris Philpot

The BJT was more rugged and reliable than the point-contact transistor, and those features primed it for greatness. But it took a while for that to become obvious. The BJT was the technology used to make integrated circuits, from the first ones in the early 1960s all the way until the late 1970s, when metal-oxide-semiconductor field-effect transistors (MOSFETs) took over. In fact, it was these field-effect transistors, first the junction field-effect transistor and then MOSFETs, that finally realized the decades-old dream of a three-terminal semiconductor device whose operation was based on the field effect—Shockley’s original ambition.

Such a glorious future could scarcely be imagined in the early 1950s, when AT&T and others were struggling to come up with practical and efficient ways to manufacture the new BJTs. Shockley himself went on to literally put the silicon into Silicon Valley. He moved to Palo Alto and in 1956 founded a company that led the switch from germanium to silicon as the electronic semiconductor of choice. Employees from his company would go on to found Fairchild Semiconductor, and then Intel.

Later in his life, after losing his company because of his terrible management, he became a professor at Stanford and began promulgating ungrounded and unhinged theories about race, genetics, and intelligence. In 1951 Bardeen left Bell Labs to become a professor at the University of Illinois at Urbana-Champaign, where he won a second Nobel Prize for physics, for a theory of superconductivity. (He is the only person to have won two Nobel Prizes in physics.) Brattain stayed at Bell Labs until 1967, when he joined the faculty at Whitman College, in Walla Walla, Wash.

Shockley died a largely friendless pariah in 1989. But his transistor would change the world, though it was still not clear as late as 1953 that the BJT would be the future. In an interview that year, Donald G. Fink, who would go on to help establish the IEEE a decade later, mused, “Is it a pimpled adolescent, now awkward, but promising future vigor? Or has it arrived at maturity, full of languor, surrounded by disappointments?”

It was the former, and all of our lives are so much the better because of it.

This article appears in the December 2022 print issue as “The First Transistor and How it Worked.”


Match ID: 177 Score: 3.57 source: spectrum.ieee.org age: 9 days
qualifiers: 3.57 mit

The EV Transition Explained: Battery Challenges
Sat, 19 Nov 2022 19:30:00 +0000


“Energy and information are two basic currencies of organic and social systems,” the economics Nobelist Herb Simon once observed. “A new technology that alters the terms on which one or the other of these is available to a system can work on it the most profound changes.”

Electric vehicles at scale alter the terms of both basic currencies concurrently. Reliable, secure supplies of minerals and software are core elements for EVs, which represent a “shift from a fuel-intensive to a material-intensive energy system,” according to a report by the International Energy Agency (IEA). For example, the mineral requirements for an EV’s batteries and electric motors are six times that of an internal-combustion-engine (ICE) vehicle, which can increase the average weight of an EV by 340 kilograms (750 pounds). For something like the Ford Lightning, the weight can be more than twice that amount.

EVs also create a shift from an electromechanical-intensive to an information-intensive vehicle. EVs offer a virtual clean slate from which to accelerate the design of safe, software-defined vehicles, with computing and supporting electronics being the prime enabler of a vehicle’s features, functions, and value. Software also allows for the decoupling of the internal mechanical connections needed in an ICE vehicle, permitting an EV to be controlled remotely or autonomously. An added benefit is that the loss of the ICE power train not only reduces the components a vehicle requires but also frees up space for increased passenger comfort and storage.

The effects of Simon’s profound changes are readily apparent, forcing a 120-year-old industry to fundamentally reinvent itself. EVs require automakers to design new manufacturing processes and build plants to make both EVs and their batteries. Ramping up the battery supply chain is the automakers’ current “most challenging topic,” according to VW chief financial officer Arno Antlitz.

These plants are also very expensive. Ford and its Korean battery supplier SK Innovation are spending US $5.6 billion to produce F-Series EVs and batteries in Stanton, Tenn., for example, while GM is spending $2 billion to produce its new Cadillac Lyriq EVs in Spring Hill, Tenn. As automakers expand their lines of EVs, tens of billions more will need to be invested in both manufacturing and battery plants. It is little wonder that Tesla CEO Elon Musk calls EV factories “gigantic money furnaces.”

Furthermore, adds Kristin Dziczek, a policy analyst with the Federal Reserve Bank of Chicago, there are scores of new global EV competitors actively seeking to replace the legacy automakers. The “simplicity” of EVs in comparison with ICE vehicles allows these disruptors to compete virtually from scratch with legacy automakers, not only in the car market itself but for the material and labor inputs as well.

Batteries and the supply-chain challenge

Another critical question is whether all the planned battery-plant output will support expected EV production demands. For instance, the United States will require 8 million EV batteries annually by 2030 if its target to make EVs half of all new-vehicle sales is met, with that number rising each year after. As IEA executive director Fatih Birol observes, “Today, the data shows a looming mismatch between the world’s strengthened climate ambitions and the availability of critical minerals that are essential to realizing those ambitions.”

This mismatch worries automakers. GM, Ford, Tesla, and others have moved to secure batteries through 2025, but it could be tricky after that. Rivian Automotive chief executive RJ Scaringe was recently quoted in the Wall Street Journal as saying that “90 to 95 percent of the (battery) supply chain does not exist,” and that the current semiconductor chip shortage is “a small appetizer to what we are about to feel on battery cells over the next two decades.”

The competition for securing raw materials, along with the increased consumer demand, has caused EV prices to spike. Ford has raised the price of the Lightning by $6,000 to $8,500, and CEO Jim Farley bluntly states that in regard to material shortages in the foreseeable future, “I don’t think we should be confident in any other outcomes than an increase in prices.”

Stiff Competition for Engineering Talent


One critical area of resource competition is over the limited supply of software and systems engineers with the mechatronics and robotics expertise needed for EVs. Major automakers have moved aggressively to bring more software and systems-engineering expertise on board, rather than have it reside at their suppliers, as they have traditionally done. Automakers feel that if they’re not in control of the software, they’re not in control of their product.

Volvo’s CEO Jim Rowan stated earlier this year that increasing the computing power in EVs will be harder and more altering of the automotive industry than switching from ICE vehicles to EVs. This means that EV winners and losers will in great part be separated by their “relative strength in their cyberphysical systems engineering,” states Clemson’s Paredis.

Even for the large auto suppliers, the transition to EVs will not be an easy road. For instance, automakers are demanding these suppliers absorb more cost cuts because automakers are finding EVs so expensive to build. Not only do automakers want to bring cutting-edge software expertise in-house, they want greater inside expertise in critical EV supply-chain components, especially batteries.

Automakers, including Tesla, are all scrambling for battery talent, with bidding wars reportedly breaking out to acquire top candidates. With automakers planning to spend more than $13 billion to build at least 13 new EV battery plants in North America within the next five to seven years, experienced management and production-line talent will likely be in extremely short supply. Tesla’s Texas Gigafactory needs some 10,000 workers alone, for example. With at least 60 new battery plants planned to be in operation globally by 2030, and scores needed soon afterward, major battery makers are already highlighting their expected skill shortages.


The underlying reason for the worry: Supplying sufficient raw materials to existing and planned battery plants as well as to the manufacturers of other renewable energy sources and military systems—who are competing for the same materials—has several complications to overcome. Among them is the need for more mines to provide the metals required, which have spiked in price as demand has increased. For example, while demand for lithium is growing rapidly, investment in mines has significantly lagged the investment that has been aimed toward EVs and battery plants. It can take five or more years to get a lithium mine up and going, but operations can start only after it has secured the required permits, a process that itself can take years.

Mining the raw materials, of course, assumes that there is sufficient refining capability to process them, which, outside of China, is limited. This is especially true in the United States, which, according to a Biden Administration special supply-chain investigative report, has “limited raw material production capacity and virtually no processing capacity.” Consequently, the report states, the United States “exports the limited raw materials produced today to foreign markets.” For example, output from the only nickel mine in the United States, the Eagle mine in Minnesota, is sent to Canada for smelting.

One possible solution is to move away from lithium-ion batteries and nickel metal hydride batteries to other battery chemistries such as lithium-iron phosphate, lithium-ion phosphate, lithium-sulfur, lithium-metal, and sodium-ion, among many others, not to mention solid-state batteries, as a way to alleviate some of the material supply and cost problems. Tesla is moving toward the use of lithium-iron phosphate batteries, as is Ford for some of its vehicles. These batteries are cobalt free, which alleviates several sourcing issues.

Another solution may be recycling both EV batteries and the waste and rejects from battery manufacturing, which can run between 5 and 10 percent of production. Effective recycling of EV batteries “has the potential to reduce primary demand compared to total demand in 2040, by approximately 25 percent for lithium, 35 percent for cobalt and nickel, and 55 percent for copper,” according to a report by the University of Sydney’s Institute for Sustainable Futures.



While investments into creating EV battery recycling facilities have started, there is a looming question of whether there will be enough battery factory scrap and other lithium-ion battery waste for them to remain operational while they wait for sufficient numbers of batteries to make them profitable. Lithium-ion battery-pack recycling is very time-consuming and expensive, making mining lithium often cheaper than recycling it, for example. Recycling low or no-cobalt lithium batteries, which is the direction many automakers are taking, may also make it unprofitable to recycle them.

An additional concern is that EV batteries, once no longer useful for propelling the EV, have years of life left in them. They can be refurbished, rebuilt, and reused in EVs, or repurposed into storage devices for homes, businesses, or the grid. Whether it will make economic sense to do either at scale versus recycling them remains to be seen.

As Howard Nusbaum, the administrator of the National Salvage Vehicle Reporting Program (NSVRP), succinctly puts it: “There is no recycling, and no EV-recycling industry, if there is no economic basis for one.”

In the next article in the series, we will look at whether the grid can handle tens of millions of EVs.


Match ID: 178 Score: 3.57 source: spectrum.ieee.org age: 10 days
qualifiers: 3.57 mit

Are You Ready for Workplace Brain Scanning?
Sat, 19 Nov 2022 16:00:01 +0000


Get ready: Neurotechnology is coming to the workplace. Neural sensors are now reliable and affordable enough to support commercial pilot projects that extract productivity-enhancing data from workers’ brains. These projects aren’t confined to specialized workplaces; they’re also happening in offices, factories, farms, and airports. The companies and people behind these neurotech devices are certain that they will improve our lives. But there are serious questions about whether work should be organized around certain functions of the brain, rather than the person as a whole.

To be clear, the kind of neurotech that’s currently available is nowhere close to reading minds. Sensors detect electrical activity across different areas of the brain, and the patterns in that activity can be broadly correlated with different feelings or physiological responses, such as stress, focus, or a reaction to external stimuli. These data can be exploited to make workers more efficient—and, proponents of the technology say, to make them happier. Two of the most interesting innovators in this field are the Israel-based startup InnerEye, which aims to give workers superhuman abilities, and Emotiv, a Silicon Valley neurotech company that’s bringing a brain-tracking wearable to office workers, including those working remotely.

The fundamental technology that these companies rely on is not new: Electroencephalography (EEG) has been around for about a century, and it’s commonly used today in both medicine and neuroscience research. For those applications, the subject may have up to 256 electrodes attached to their scalp with conductive gel to record electrical signals from neurons in different parts of the brain. More electrodes, or “channels,” mean that doctors and scientists can get better spatial resolution in their readouts—they can better tell which neurons are associated with which electrical signals.

What is new is that EEG has recently broken out of clinics and labs and has entered the consumer marketplace. This move has been driven by a new class of “dry” electrodes that can operate without conductive gel, a substantial reduction in the number of electrodes necessary to collect useful data, and advances in artificial intelligence that make it far easier to interpret the data. Some EEG headsets are even available directly to consumers for a few hundred dollars.

While the public may not have gotten the memo, experts say the neurotechnology is mature and ready for commercial applications. “This is not sci-fi,” says James Giordano, chief of neuroethics studies at Georgetown University Medical Center. “This is quite real.”

How InnerEye’s TSA-boosting technology works


In an office in Herzliya, Israel, Sergey Vaisman sits in front of a computer. He’s relaxed but focused, silent and unmoving, and not at all distracted by the seven-channel EEG headset he’s wearing. On the computer screen, images rapidly appear and disappear, one after another. At a rate of three images per second, it’s just possible to tell that they come from an airport X-ray scanner. It’s essentially impossible to see anything beyond fleeting impressions of ghostly bags and their contents.

“Our brain is an amazing machine,” Vaisman tells us as the stream of images ends. The screen now shows an album of selected X-ray images that were just flagged by Vaisman’s brain, most of which are now revealed to have hidden firearms. No one can knowingly identify and flag firearms among the jumbled contents of bags when three images are flitting by every second, but Vaisman’s brain has no problem doing so behind the scenes, with no action required on his part. The brain processes visual imagery very quickly. According to Vaisman, the decision-making process to determine whether there’s a gun in complex images like these takes just 300 milliseconds.

What takes much more time are the cognitive and motor processes that occur after the decision making—planning a response (such as saying something or pushing a button) and then executing that response. If you can skip these planning and execution phases and instead use EEG to directly access the output of the brain’s visual processing and decision-making systems, you can perform image-recognition tasks far faster. The user no longer has to actively think: For an expert, just that fleeting first impression is enough for their brain to make an accurate determination of what’s in the image.

An illustration of a person in front of screens with suitcases above it.  InnerEye’s image-classification system operates at high speed by providing a shortcut to the brain of an expert human. As an expert focuses on a continuous stream of images (from three to 10 images per second, depending on complexity), a commercial EEG system combined with InnerEye’s software can distinguish the characteristic response the expert’s brain produces when it recognizes a target. In this example, the target is a weapon in an X-ray image of a suitcase, representing an airport-security application.Chris Philpot

Vaisman is the vice president of R&D of InnerEye, an Israel-based startup that recently came out of stealth mode. InnerEye uses deep learning to classify EEG signals into responses that indicate “targets” and “nontargets.” Targets can be anything that a trained human brain can recognize. In addition to developing security screening, InnerEye has worked with doctors to detect tumors in medical images, with farmers to identify diseased plants, and with manufacturing experts to spot product defects. For simple cases, InnerEye has found that our brains can handle image recognition at rates of up to 10 images per second. And, Vaisman says, the company’s system produces results just as accurate as a human would when recognizing and tagging images manually—InnerEye is merely using EEG as a shortcut to that person’s brain to drastically speed up the process.
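
The general recipe behind this kind of system can be sketched in a few lines: cut the EEG into short epochs time-locked to each image, then classify each epoch as target or nontarget. The Python below is a hedged illustration on synthetic data, with scikit-learn’s logistic regression standing in for InnerEye’s proprietary deep-learning models; all shapes, names, and the injected “response” are assumptions.

```python
# Sketch of EEG epoch classification for rapid serial visual presentation,
# using scikit-learn's logistic regression in place of a deep network.
# Data shapes and variable names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_epochs, n_channels, n_samples = 600, 7, 128   # e.g., 7-channel EEG, short epochs
X = rng.normal(size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)           # 1 = target image, 0 = nontarget
X[y == 1, :, 60:90] += 0.5                      # fake a late deflection for targets

# Flatten each epoch into a feature vector; real systems use richer features.
X_flat = X.reshape(n_epochs, -1)
X_train, X_test, y_train, y_test = train_test_split(X_flat, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```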

While using the InnerEye technology doesn’t require active decision making, it does require training and focus. Users must be experts at the task, well trained in identifying a given type of target, whether that’s firearms or tumors. They must also pay close attention to what they’re seeing—they can’t just zone out and let images flash past. InnerEye’s system measures focus very accurately, and if the user blinks or stops concentrating momentarily, the system detects it and shows the missed images again.

Can you spot the manufacturing defects?

Examine the sample images below, and then try to spot the target among the nontargets.

Ten images are displayed every second for five seconds on loop. There are three targets.

A pair of black and white images. The left is labeled “nontarget” and the right “target”; a red circle highlights a black line on the target image.

A gif of a black and white static image

Can you spot the weapon?

Three images are displayed every second for five seconds on loop. There is one weapon.

A gif of x-rayed pieces of luggage. InnerEye

Having a human brain in the loop is especially important for classifying data that may be open to interpretation. For example, a well-trained image classifier may be able to determine with reasonable accuracy whether an X-ray image of a suitcase shows a gun, but if you want to determine whether that X-ray image shows something else that’s vaguely suspicious, you need human experience. People are capable of detecting something unusual even if they don’t know quite what it is.

“We can see that uncertainty in the brain waves,” says InnerEye founder and chief technology officer Amir Geva. “We know when they aren’t sure.” Humans have a unique ability to recognize and contextualize novelty, a substantial advantage that InnerEye’s system has over AI image classifiers. InnerEye then feeds that nuance back into its AI models. “When a human isn’t sure, we can teach AI systems to be not sure, which is better training than teaching the AI system just one or zero,” says Geva. “There is a need to combine human expertise with AI.” InnerEye’s system enables this combination, as every image can be classified by both computer vision and a human brain.

Using InnerEye’s system is a positive experience for its users, the company claims. “When we start working with new users, the first experience is a bit overwhelming,” Vaisman says. “But in one or two sessions, people get used to it, and they start to like it.” Geva says some users do find it challenging to maintain constant focus throughout a session, which lasts up to 20 minutes, but once they get used to working at three images per second, even two images per second feels “too slow.”

In a security-screening application, three images per second is approximately an order of magnitude faster than an expert can manually achieve. InnerEye says their system allows far fewer humans to handle far more data, with just two human experts redundantly overseeing 15 security scanners at once, supported by an AI image-recognition system that is being trained at the same time, using the output from the humans’ brains.

InnerEye is currently partnering with a handful of airports around the world on pilot projects. And it’s not the only company working to bring neurotech into the workplace.

How Emotiv’s brain-tracking technology works

Workers wearing earbuds sit in an office in front of computers. Emotiv’s MN8 earbuds collect two channels of EEG brain data. The earbuds can also be used for phone calls and music. Emotiv

When it comes to neural monitoring for productivity and well-being in the workplace, the San Francisco–based company Emotiv is leading the charge. Since its founding 11 years ago, Emotiv has released three models of lightweight brain-scanning headsets. Until now the company had mainly sold its hardware to neuroscientists, with a sideline business aimed at developers of brain-controlled apps or games. Emotiv started advertising its technology as an enterprise solution only this year, when it released its fourth model, the MN8 system, which tucks brain-scanning sensors into a pair of discreet Bluetooth earbuds.

Tan Le, Emotiv’s CEO and cofounder, sees neurotech as the next trend in wearables, a way for people to get objective “brain metrics” of mental states, enabling them to track and understand their cognitive and mental well-being. “I think it’s reasonable to imagine that five years from now this [brain tracking] will be quite ubiquitous,” she says. When a company uses the MN8 system, workers get insight into their individual levels of focus and stress, and managers get aggregated and anonymous data about their teams.

The Emotiv Experience

An illustration of a head with an earpiece in, with columns of data on either side. The Emotiv Experience Chris Philpot

Emotiv’s MN8 system uses earbuds to capture two channels of EEG data, from which the company’s proprietary algorithms derive performance metrics for attention and cognitive stress. It’s very difficult to draw conclusions from raw EEG signals [top], especially with only two channels of data. The MN8 system relies on machine-learning models that Emotiv developed using a decade’s worth of data from its earlier headsets, which have more electrodes.

To determine a worker’s level of attention and cognitive stress, the MN8 system uses a variety of analyses. One shown here [middle, bar graphs] reveals increased activity in the low-frequency ranges (theta and alpha) when a worker’s attention is high and cognitive stress is low; when the worker has low attention and high stress, there’s more activity in the higher-frequency ranges (beta and gamma). This analysis and many others feed into the models that present simplified metrics of attention and cognitive stress [bottom] to the worker.

Emotiv launched its enterprise technology into a world that is fiercely debating the future of the workplace. Workers are feuding with their employers about return-to-office plans following the pandemic, and companies are increasingly using “bossware” to keep tabs on employees—whether staffers or gig workers, working in the office or remotely. Le says Emotiv is aware of these trends and is carefully considering which companies to work with as it debuts its new gear. “The dystopian potential of this technology is not lost on us,” she says. “So we are very cognizant of choosing partners that want to introduce this technology in a responsible way—they have to have a genuine desire to help and empower employees.”

Lee Daniels, a consultant who works for the global real estate services company JLL, has spoken with a lot of C-suite executives lately. “They’re worried,” says Daniels. “There aren’t as many people coming back to the office as originally anticipated—the hybrid model is here to stay, and it’s highly complex.” Executives come to Daniels asking how to manage a hybrid workforce. “This is where the neuroscience comes in,” he says.

Emotiv has partnered with JLL, which has begun to use the MN8 earbuds to help its clients collect “true scientific data,” Daniels says, about workers’ attention, distraction, and stress, and how those factors influence both productivity and well-being. Daniels says JLL is currently helping its clients run short-term experiments using the MN8 system to track workers’ responses to new collaboration tools and various work settings; for example, employers could compare the productivity of in-office and remote workers.

“The dystopian potential of this technology is not lost on us.” —Tan Le, Emotiv CEO

Emotiv CTO Geoff Mackellar believes the new MN8 system will succeed because of its convenient and comfortable form factor: The multipurpose earbuds also let the user listen to music and answer phone calls. The downside of earbuds is that they provide only two channels of brain data. When the company first considered this project, Mackellar says, his engineering team looked at the rich data set they’d collected from Emotiv’s other headsets over the past decade. The company boasts that academics have conducted more than 4,000 studies using Emotiv tech. From that trove of data—from headsets with 5, 14, or 32 channels—Emotiv isolated the data from the two channels the earbuds could pick up. “Obviously, there’s less information in the two sensors, but we were able to extract quite a lot of things that were very relevant,” Mackellar says.

Once the Emotiv engineers had a hardware prototype, they had volunteers wear the earbuds and a 14-channel headset at the same time. By recording data from the two systems in unison, the engineers trained a machine-learning algorithm to identify the signatures of attention and cognitive stress from the relatively sparse MN8 data. The brain signals associated with attention and stress have been well studied, Mackellar says, and are relatively easy to track. Although everyday activities such as talking and moving around also register on EEG, the Emotiv software filters out those artifacts.
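A rough sketch of that paired-recording setup, under the assumption that attention labels scored from the 14-channel headset supervise a simple model over earbud-derived band-power features (an illustration, not Emotiv’s actual training code):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def earbud_features(window: np.ndarray) -> np.ndarray:
    # window: (2, n_samples) earbud EEG segment; returns 8 log band-power features
    # (theta, alpha, beta, gamma for each of the two channels). 256 Hz is assumed.
    spectrum = np.abs(np.fft.rfft(window, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(window.shape[-1], d=1 / 256)
    feats = []
    for lo, hi in [(4, 8), (8, 12), (12, 30), (30, 45)]:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(spectrum[:, mask].sum(axis=-1) + 1e-12))
    return np.concatenate(feats)

def fit_attention_model(windows: list, headset_labels: np.ndarray) -> LogisticRegression:
    # headset_labels: 0/1 "high attention" labels scored from the 14-channel headset
    # that volunteers wore at the same time as the earbuds.
    X = np.stack([earbud_features(w) for w in windows])
    model = LogisticRegression(max_iter=1000)
    print("cross-validated accuracy:", cross_val_score(model, X, headset_labels, cv=5).mean())
    return model.fit(X, headset_labels)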

The app that’s paired with the MN8 earbuds doesn’t display raw EEG data. Instead, it processes that data and shows workers two simple metrics relating to their individual performance. One squiggly line shows the rise and fall of workers’ attention to their tasks—the degree of focus and the dips that come when they switch tasks or get distracted—while another line represents their cognitive stress. Although short periods of stress can be motivating, too much for too long can erode productivity and well-being. The MN8 system will therefore sometimes suggest that the worker take a break. Workers can run their own experiments to see what kind of break activity best restores their mood and focus—maybe taking a walk, or getting a cup of coffee, or chatting with a colleague.

What neuroethicists think about neurotech in the workplace

While MN8 users can easily access data from their own brains, employers don’t see individual workers’ brain data. Instead, they receive aggregated data to get a sense of a team or department’s attention and stress levels. With that data, companies can see, for example, on which days and at which times of day their workers are most productive, or how a big announcement affects the overall level of worker stress.

Emotiv emphasizes the importance of anonymizing the data to protect individual privacy and prevent people from being promoted or fired based on their brain metrics. “The data belongs to you,” says Emotiv’s Le. “You have to explicitly allow a copy of it to be shared anonymously with your employer.” If a group is too small for real anonymity, Le says, the system will not share that data with employers. She also predicts that the device will be used only if workers opt in, perhaps as part of an employee wellness program that offers discounts on medical insurance in return for using the MN8 system regularly.
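The group-size rule can be pictured with a small sketch, assuming a minimum team size of five (the real threshold is not public); reports are simply withheld when a group is too small to keep individuals anonymous:

from statistics import mean
from typing import Optional

MIN_GROUP_SIZE = 5  # assumed anonymity threshold; Emotiv has not published the real value

def team_report(samples: dict) -> Optional[dict]:
    # samples maps worker id -> {"attention": float, "stress": float} for one period.
    if len(samples) < MIN_GROUP_SIZE:
        return None  # too small to share without risking re-identification
    return {
        "n_workers": len(samples),
        "attention": mean(s["attention"] for s in samples.values()),
        "stress": mean(s["stress"] for s in samples.values()),
    }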

However, workers may still be worried that employers will somehow use the data against them. Karen Rommelfanger, founder of the Institute of Neuroethics, shares that concern. “I think there is significant interest from employers” in using such technologies, she says. “I don’t know if there’s significant interest from employees.”

Both she and Georgetown’s Giordano doubt that such tools will become commonplace anytime soon. “I think there will be pushback” from employees on issues such as privacy and worker rights, says Giordano. Even if the technology providers and the companies that deploy the technology take a responsible approach, he expects questions to be raised about who owns the brain data and how it’s used. “Perceived threats must be addressed early and explicitly,” he says.

Giordano says he expects workers in the United States and other western countries to object to routine brain scanning. In China, he says, workers have reportedly been more receptive to experiments with such technologies. He also believes that brain-monitoring devices will really take off first in industrial settings, where a momentary lack of attention can lead to accidents that injure workers and hurt a company’s bottom line. “It will probably work very well under some rubric of occupational safety,” Giordano says. It’s easy to imagine such devices being used by companies involved in trucking, construction, warehouse operations, and the like. Indeed, at least one such product, an EEG headband that measures fatigue, is already on the market for truck drivers and miners.

Giordano says that using brain-tracking devices for safety and wellness programs could be a slippery slope in any workplace setting. Even if a company focuses initially on workers’ well-being, it may soon find other uses for the metrics of productivity and performance that devices like the MN8 provide. “Metrics are meaningless unless those metrics are standardized, and then they very quickly become comparative,” he says.

Rommelfanger adds that no one can foresee how workplace neurotech will play out. “I think most companies creating neurotechnology aren’t prepared for the society that they’re creating,” she says. “They don’t know the possibilities yet.”

This article appears in the December 2022 print issue.


Match ID: 179 Score: 3.57 source: spectrum.ieee.org age: 10 days
qualifiers: 3.57 mit

Why Your Organization Should Join the IEEE Standards Association
Fri, 18 Nov 2022 19:00:01 +0000


The global business landscape is constantly evolving. Digital transformation—compounded by the challenges of globalization, supply-chain stability, demographic shifts, and climate change—is pressuring companies and government agencies to innovate and safely deploy sustainable technologies.

As digital transformation continues, the pervasive growth of technology increasingly intersects with industry, government, and societal interests. Companies and organizations need access to technologies that can enhance efficiencies, productivity, and competitive advantage.


Governments seek influence over emerging technologies to preserve economic interests, advance global trade, and protect their citizens. Consumers are demanding more transparency regarding organizational motives, practices, and processes.

For those and other reasons, new types of stakeholders are seeking a voice in the technology standardization process.

How organizations benefit from developing standards

The need is evidenced in the membership gains at the IEEE Standards Association. IEEE SA membership for organizations, also known as entity membership, has increased by more than 150 percent in the past six years. Academic institutions, government agencies, and other types of organizations now account for more than 30 percent of the member base.

Entity membership offers the ability to help shape technology development and ensure your organization’s interests are represented in the standards development process. Other benefits include balloting privileges, leadership eligibility, and networking opportunities.

IEEE SA welcomes different types of organizations because they bring varied perspectives and they voice concerns that need to be addressed during the standards development process. Engaging diverse viewpoints from companies of all sizes and types also helps to identify and address changing market needs.

From a geographic standpoint, IEEE SA welcomes participation from all regions of the world. Diverse perspectives and contributions to the development cycle enable innovation to be shared and realized by all stakeholders.

Programs on blockchain, IoT, and other emerging technology

IEEE SA has introduced new industry-engagement programs such as open-source and industry-alliance offerings designed to speed innovation and adoption. In addition, industry participants have access to the full IEEE SA ecosystem of programs and services including technology incubation, pre-standardization work, standards development, and conformity assessment activities. Training and marketing tools support working groups at every stage of the process.

An increasing number of new standards projects from emerging technology areas have created a more robust and diversified portfolio of work. The technologies include artificial intelligence and machine learning, blockchain and distributed ledger technologies, quantum computing, cloud computing, the Internet of Things, smart cities, smart factories and online gaming. There is also more participation from the health care, automotive, and financial services sectors.

IEEE SA has grown and evolved its programs to address market needs, but its purpose has not changed. The organization is focused on empowering innovators to raise the world’s standards for the benefit of humanity.

Those innovators might be individuals or organizations looking to make a difference in the world, but it can be accomplished only when we all work together.

Learn more about IEEE SA membership for organizations and how your organization can play a key role in advancing future technologies.


Match ID: 180 Score: 3.57 source: spectrum.ieee.org age: 11 days
qualifiers: 3.57 mit

New Records for the Biggest and Smallest AI Computers
Thu, 17 Nov 2022 16:28:08 +0000


The machine-learning consortium MLCommons released its latest set of benchmark results last week, offering a glimpse at the capabilities of chips new and old as they tackled everything from executing lightweight AI on the tiniest systems to training neural networks at server and supercomputer scales. The benchmark tests saw the debut of new chips from Intel and Nvidia, as well as speed boosts from software improvements and predictions that new software will play a big role in speeding up the new chips in the years after their debut.


Training Servers

Training AI has been a problem that’s driven billions of dollars in investment, and it seems to be paying off. “A few years ago we were talking about training these networks in days or weeks, now we’re talking about minutes,” says Dave Salvator, director of product marketing at Nvidia.

There are eight benchmarks in the MLPerf training suite, but here I’m showing results from just two—image classification and natural-language processing—because although they don’t give a complete picture, they’re illustrative of what’s happening. Not every company puts up benchmark results every time; in the past, systems from Baidu, Google, Graphcore, and Qualcomm have made marks, but none of these were on the most recent list. And there are companies whose goal is to train the very biggest neural networks, such as Cerebras and SambaNova, that have never participated.

Another note about the results I’m showing—they are incomplete. To keep eye-glazing to a minimum, I’ve listed only the fastest system of each configuration. There are four categories in the main “closed” contest: cloud (self-evident), on premises (systems you could buy and install in-house right now), preview (systems you can buy soon but not now), and R&D (interesting but odd, so I excluded them). I then listed the fastest training result in each category for each configuration—that is, for each number of accelerators in a computer. If you want to see the complete list, it’s at the MLCommons website.

A casual glance shows that machine-learning training is still very much Nvidia’s house. It can bring a supercomputer-scale number of GPUs to the party to smash through training problems in mere seconds. Its A100 GPUs have dominated the MLPerf list for several iterations now, and it powers Microsoft’s Azure cloud AI offerings as well as systems large and small from partners including Dell, HPE, and Fujitsu. But even among the A100 gang there’s real competition, particularly between Dell and HPE.

But perhaps more important was Azure’s standing. On image classification, the cloud systems were essentially a match for the best A100 on-premises computers. The results strengthen Microsoft’s case that renting resources in the cloud is as good as buying your own. And that case might be even stronger soon. This week Nvidia and Microsoft announced a multiyear collaboration that would see the inclusion of Nvidia’s upcoming GPU, the H100, in the Azure cloud.

This was the first peek at training abilities for the H100. And Nvidia’s Dave Salvator emphasized how much progress happens—largely due to software improvements—in the years after a new chip comes out. On a per-chip basis, the A100 delivers 2.5 times the average performance today versus its first run at the MLPerf benchmarks in 2020. Compared to A100’s debut scores, H100 delivered 6.7 times the speed. But compared to A100 with today’s software, the gain is only 2.6-fold.
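Those figures are mutually consistent, as a quick back-of-the-envelope check shows (my arithmetic, not Nvidia’s):

a100_today_vs_a100_debut = 2.5   # software gains on A100 since its 2020 debut
h100_debut_vs_a100_debut = 6.7   # H100's first MLPerf run vs. A100's first run
h100_vs_a100_today = h100_debut_vs_a100_debut / a100_today_vs_a100_debut
print(round(h100_vs_a100_today, 1))  # ~2.7, in line with the quoted 2.6-fold figure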

In a way, H100 seems a bit overpowered for the MLPerf benchmarks, tearing through most of them in minutes using a fraction of the A100 hardware needed to match it. And in truth, it is meant for bigger things. “H100 is our solution for the most advanced models where we get into the millions, even billions of hyperparameters,” says Salvator.

Salvator says a lot of the gain is from the H100’s “transformer engine.” Essentially, it’s the intelligent use of low-precision—efficient but less accurate—computations whenever possible. The scheme is particularly designed for neural networks called transformers, of which the natural language processing benchmark BERT is an example. Transformers are in the works for many other machine learning tasks. “Transformer-based networks have been literally transformative to AI,” says Salvator. “It’s a horrible pun.”

Memory is a bottleneck for all sorts of AI, but it’s particularly limiting in BERT and other transformer models. Such neural networks rely on a quality called “attention.” You can think of it as how many words a language processor is aware of at once. It doesn’t scale up well, largely because it leads to a huge increase in writing to system memory. Earlier this year Hazy Research (the name for Chris Re’s lab at Stanford) deployed an algorithm to an Azure cloud system that shaved 10 percent of the training time off Microsoft’s best effort. For this round, Azure and Hazy Research worked together to demonstrate the algorithm—called Flash Attention.
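For readers unfamiliar with the bottleneck, here is standard scaled dot-product attention in a few lines of NumPy; the n-by-n score matrix is what grows quadratically with sequence length, and Flash Attention’s trick (not shown here) is to compute the same result in tiles so the full matrix never has to be written to memory:

import numpy as np

def attention(Q, K, V):
    # Q, K, V: arrays of shape (n_tokens, d). Returns an (n_tokens, d) array.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # (n, n): the memory bottleneck
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V

# At n = 16,384 tokens the score matrix alone holds about 268 million entries.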

Both the image-classification and natural-language-processing tables show Intel’s competitive position. The company showed results for the Habana Gaudi2, its second generation AI accelerator, and the Sapphire Rapids Xeon CPU, which will be commercially available in the coming months. For the latter, the company was out to prove that you can do a lot of machine-learning training without a GPU.

A setup with 32 CPUs landed well behind a Microsoft Azure cloud-based system with only four GPUs on object recognition, but it still finished in less than an hour and a half, and for natural-language processing, it nearly matched that Azure system. In fact, none of the training took longer than 90 minutes, even on much more modest CPU-only computers.

“This is for customers for whom training is part of the workload, but it’s not the workload,” says Jordan Plawner, an Intel senior director and AI product manager. Intel is reasoning that if a customer is retraining only once a week, whether the work takes 30 minutes or 5 minutes is of too little importance for them to spend on a GPU accelerator they don’t need for the rest of the week.

Habana Gaudi2 is a different story. As the company’s dedicated machine-learning accelerator, the 7-nanometer chip goes up against Nvidia’s A100 (another 7-nm chip) and soon will face the 5-nm H100. In that light, it performed well on certain tests. On image classification, an eight-chip system landed only a couple of minutes behind an eight-chip H100. But the gap was much wider with the H100 at the natural-language-processing task, though it still narrowly bested an equal-size and Hazy-Research-enhanced A100 system.

“We’re not done with Gaudi 2,” says Habana’s Eitan Medina. Like others, Habana is hoping to speed learning by strategically using low-precision computations on certain layers of neural networks. The chip has 8-bit floating-point capabilities, but so far the smallest precision the company has engaged on the chip for MLPerf training purposes is bfloat16.


Training Supercomputers

MLCommons released results for training high-performance computers—supercomputers and other big systems—at the same time as those for training servers. The HPC benchmarks are not as established and have fewer participants, but they still give a snapshot of how machine learning is done in the supercomputing space and what the goals are. There are three benchmarks: CosmoFlow estimates physical quantities from cosmological image data; DeepCAM spots hurricanes and atmospheric rivers in climate simulation data; and OpenCatalyst predicts the energy levels of molecular configurations.

There are two ways to measure systems on these benchmarks. One is to run a number of instances of the same neural network on the supercomputer, and the other is to just throw a bunch of resources at a single instance of the problem and see how long it takes. The table below is the latter and just for CosmoFlow, because it’s much simpler to read. (Again, feel free to view the whole schemozzle at MLCommons.)

The CosmoFlow results show four supercomputers powered by as many different types of CPU architectures and two types of GPU. Three of the four were accelerated by Nvidia GPUs, but Fugaku, the second most powerful computer in the world, used only its own custom-built processor, the Fujitsu A64FX.

The MLPerf HPC benchmarks came out only the week before Supercomputing 2022, in Dallas, one of the two conferences at which new Top500 rankings of supercomputers are announced.

A separate benchmark for supercomputing AI, HPL-MxP, has also been developed. Instead of training particular neural networks, it solves “a system of linear equations using novel, mixed-precision algorithms that exploit modern hardware.” Although results from the two benchmarks don’t line up, there is overlap between the HPL-MxP list and the CosmoFlow results, including Nvidia’s Selene, Riken’s Fugaku, and Germany’s JUWELS.
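As a rough idea of what “mixed precision” means in this context, the toy sketch below factorizes a matrix in single precision and then recovers accuracy with a few iterative-refinement steps in double precision; it illustrates the general technique, not the HPL-MxP benchmark’s actual algorithm:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, refinements=3):
    lu, piv = lu_factor(A.astype(np.float32))          # cheap low-precision factorization
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(refinements):                       # refinement in double precision
        r = b - A @ x                                  # residual computed in float64
        x += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
    return x

A = np.random.rand(500, 500) + 500 * np.eye(500)       # well-conditioned test matrix
b = np.random.rand(500)
print(np.linalg.norm(A @ mixed_precision_solve(A, b) - b))  # residual should be tiny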


Tiny ML systems

The latest addition to the MLPerf effort is a suite of benchmarks designed to test the speed and energy efficiency of microcontrollers and other small chips that execute neural networks that do things like spotting keywords and other low-power, always-on tasks. MLPerf Tiny, as it’s called, is too new for real trends to have emerged in the data. But the results released so far show a couple of standouts. The table here shows the fastest “visual wakewords” results for each type of processor, and shows that Syntiant and Greenwave Technologies have an edge over the competition.


Match ID: 181 Score: 3.57 source: spectrum.ieee.org age: 12 days
qualifiers: 3.57 mit

Failures in Twitter’s Two-Factor Authentication System
2022-11-17T10:53:50Z

Twitter is having intermittent problems with its two-factor authentication system:

Not all users are having problems receiving SMS authentication codes, and those who rely on an authenticator app or physical authentication token to secure their Twitter account may not have reason to test the mechanism. But users have been self-reporting issues on Twitter since the weekend, and WIRED confirmed that on at least some accounts, authentication texts are hours delayed or not coming at all. The meltdown comes less than two weeks after Twitter laid off about half of its workers...


Match ID: 182 Score: 3.57 source: www.schneier.com age: 12 days
qualifiers: 3.57 mit

Top 10 AI Content Generator & Writer Tools in 2022
Tue, 15 Nov 2022 08:58:00 +0000


Are you looking for a way to create content that is both effective and efficient? If so, then you should consider using an AI content generator. AI content generators are a great way to create content that is both engaging and relevant to your audience. 

There are a number of different AI content generator tools available on the market, and it can be difficult to know which one is right for you. To help you make the best decision, we have compiled a list of the top 10 AI content generator tools that you should use in 2022.

So, without further ado, let’s get started!


1. Jasper Ai (formerly known as Jarvis)

jasper content generator tool


Jasper is a content writing and content generation tool that uses artificial intelligence to identify the best words and sentences for your writing style and medium in the most efficient, quick, and accessible way.

Features

  • It's trusted by 50,000+ marketers for creating engaging marketing campaigns, ad copy, blog posts, and articles within minutes, work that would traditionally take hours or days.
  • Blog posts are optimized for search engines and rank high on Google and other search engines. This is a huge plus for online businesses that want to generate traffic to their website through content marketing.
  • 99.9% original content – Jasper guarantees that all the content it generates will be original, so businesses can focus on their online reputation rather than worrying about penalties from Google for duplicate content.
  • Long-form article writing – Jasper.ai is also useful for long-form writing, allowing users to create articles of up to 10,000 words without any difficulty. This is ideal for businesses that want to produce in-depth content that will capture their audience’s attention.

Pros

  • User-friendly interface
  • Generates a wide variety of content types
  • Guarantees 100% unique, plagiarism-free content
  • SEO friendly
  •  Create articles of up to 10k words

Cons

  • Not the cheapest AI writer on the market

Pricing

  • Jasper.ai offers a free trial
  • Starter Plan: $29/Month 

Features:

  • 50 AI copywriting skills
  • Unlimited projects
  • Up to 5 user logins
  • The cheapest option covers up to 20,000 words generated each month.

Boss Mode:  $99/Month 

Features:

  • Up to 100k words are generated each month and can go up to over 300k.

A Black Friday and Cyber Monday deal is also running.

Highlights of Jasper's Black Friday/Cyber Monday deal:

  • Free bonus 1: Get 300,000 extra words ($297 value)
  • Free bonus 2: Jasper Art for three teammates ($720 value)
  • Free bonus 3: Ticket to Jasper’s conference ($499 value)
  • Free bonus 4: The Jasper SEO Course ($499 value)

Grab the Black Friday Deal

Note: Offer ends 1 December 2022, midnight PST.



2. Copy Ai

Copy ai, content generator


Copy.ai is a content writing tool that enables its users to create marketing copy, social media posts, Facebook Ads, and many more formats by using more than 90 templates such as Bullet Points to Blogs, General Ads, Hook Text, etc. 

The service is suited to short-form business content such as product descriptions, website copy, marketing copy, and sales reports.

Key Features:

  • Provides a large set of templates: you input your data and the AI generates around 10 or more options, making it easy for the user to choose.
  • Smooth and efficient user experience with a Chrome extension, so you can transfer information from Copy.ai to a content management platform, Google Docs, and so on without having to switch tabs.
  • Generates content in 25 languages, so your input and output languages can differ if you are not a native English speaker.

Pros

  • The best option for short-length content generation such as market copy, sales reports, blogs, etc.
  • Facebook community and email support for users to understand the AI better and to interact with other users.
  • Beginner-friendly user experience with various templates to help the process of content generation.
  • Free plan and no credit card required.

Cons

  • The free plan from Copy AI is a welcome sight; however, it is really only suitable for testing the software.

Pricing

Free Trial – 7 days, with 24/7 email support and 100 runs per day.

Pro Plan: $49/month, or $420 per year if billed annually (i.e., $35 per month).

Wait! I've got a pretty sweet deal for you. Sign up through the link below, and you'll get (7,000 Free Words Plus 40% OFF) if you upgrade to the paid plan within four days.

Claim Your 7,000 Free Words With This Special Link - No Credit Card Required


3. Frase Ai

Frase, content generator


Just like Outranking, Frase is an AI tool that helps you research, create, and optimize your content to make it high quality within seconds. Frase focuses on SEO: content is tuned to the liking of search engines through keyword optimization.

Features:

  • Generate full-length, optimized content briefs in  seconds and review the main keywords, headers, and concepts in your SEO competitors’ content in one intuitive research panel.
  • Write high-converting, SEO-optimized copy and make writer’s block a thing of the past with automated outlines, blog introductions, product descriptions, FAQs, and more.
  • An intuitive text editor that uses a topic model to score your content's optimization against your competitors'.
  • A dashboard that automatically identifies and categorizes your best content opportunities. Frase uses your Google Search Console data to serve up actionable insights about what you should work on next.

Pros

  • Unlike Outranking, the interface to Frase is very user-friendly and accessible.
  • Content writers who need to do research gain more time to write and ideate, because topic research data can be accessed directly in Frase instead of juggling from one website to another.
  • Optimizing content with keyword analysis and SEO optimization has been made easier with Frase's Content Optimization.
  • Reports on competitors' websites help in optimizing our own articles and websites.
  • Content briefs make research very easy and efficient.

Cons

  • The paid plans are a bit pricey because they include many tools for content optimization.

Pricing

Frase provides two plans for all users and a customizable plan for an enterprise or business.

Solo Plan: $14.99/month ($12/month if billed yearly) with 4 document credits for 1 user seat.

Basic Plan: $44.99/month ($39.99/month if billed yearly) with 30 document credits for 1 user seat.

Team Plan: $114.99/month ($99.99/month if billed yearly) with unlimited document credits for 3 users.

*SEO add-ons and other premium features for $35/month, irrespective of the plan.


4. Article Forge — Popular Blog Writing Software for Efficiency and Affordability 

articleforge, content generator


Article Forge is another content generator that operates quite differently from the others on this list. Unlike Jasper.ai, which requires you to provide a brief and some information on what you want it to write, this tool only asks for a keyword. From there, it’ll generate a complete article for you.

Features

  • Article Forge integrates with several other software tools, including WordAi, RankerX, SEnuke TNG, and SEO Autopilot.
  • The software takes information from high-ranking websites and then creates more credible articles to rank well in search engines.
  •  If you want to generate content regularly, Article Forge can help. You can set it up to automatically generate articles based on your specific keyword or topic. Or, if you need a lot of content quickly, you can use the bulk content feature to get many articles in a short period.

Pros

  • Excellent for engaging with readers on multiple CMS platforms
  • No spinner content. Create multiple unique articles
  • Extremely quick and efficient
  • One of the cheapest options online

Cons

  • You need to pay attention to the content since it’s not always on point
  • Only ideal for decent-quality articles – if you’re lucky

Pricing

What’s excellent about Article Forge is that they provide a 30-day money-back guarantee. You can choose between a monthly and a yearly subscription. They offer a free trial but no free plan:

Basic Plan:  $27/Month

Features:

 This plan allows users to produce up to 25k words each month. This is excellent for smaller blogs or those who are just starting.

Standard Plan: $57/month

Features:

  • This plan allows users to produce up to 250k words each month.

Unlimited Plan: $117/month

Features:

  • If you’re looking for an unlimited amount of content, this is the plan for you. You can create as many articles as you want, and there’s no word limit.

It’s important to note that Article Forge guarantees that all content generated through the platform passes Copyscape.


5. Rytr — Superb AI Content Writing Assistant

rytr.me, content generator


Rytr.me is a free AI content generator perfect for small businesses, bloggers, and students. The software is easy to use and can generate SEO-friendly blog posts, articles, and school papers in minutes.

Features

  • Rytr can be used for various purposes, from writing blog posts to creating school papers. You can also generate captions for social media, product descriptions, and meta descriptions.
  •  Rytr supports writing for over 30 languages, so you can easily create content in your native language.
  • The AI helps you write content in over 30 tones to find the perfect tone for your brand or project.
  • Rytr has a built-in plagiarism checker that ensures all your content is original and plagiarism free.

Pros

  • Easy to use
  • Creates unique content
  • It supports over 30 languages
  • Multi-tone writing capabilities

Cons

  • It can be slow at times
  • Grammar and flow could use improvement

Pricing

Rytr offers a free plan that comes with limited features. It covers up to 5,000 characters generated each month and has access to the built-in plagiarism checker. If you want to use all the features of the software, you can purchase one of the following plans:


Saver Plan: $9/month, $90/year 

Features:

  • Generate 100k  characters per month
  • Access 40+ use-cases
  • Write in 30+ languages
  • Access 20+ tones
  • Built-in plagiarism checker
  • Generate up to 20 images per month with AI
  • Access to premium community
  • Create your own custom use-case
Unlimited Plan: $29/month, $290/year

Features:

  • Generate UNLIMITED* characters per month
  • Access 40+ use-cases
  • Write in 30+ languages
  • Access 20+ tones

  • Built-in plagiarism checker
  • Generate up to 100 images per month with AI
  • Access to premium community
  • Create your own custom use-case
  • Dedicated account manager
  • Priority email & chat support


6. Writesonic — Best AI Article Writing Software with a Grammar and Plagiarism Checker

writessonic, content generator


Writesonic is a free, easy-to-use AI content generator. The software is designed to help you create copy for marketing content, websites, and blogs. It's also helpful for small businesses or solopreneurs who need to produce content on a budget.

Features

  • The tone checker is a great feature that helps you ensure that your content is consistent with your brand’s voice. This is excellent for crafting cohesive and on-brand content.
  • The grammar checker is another valuable tool that helps you produce error-free content.
  • The plagiarism checker is a great way to ensure that your content is original.

Pricing

Writesonic is free with limited features. The free plan is more like a free trial, providing ten credits. After that, you’d need to upgrade to a paid plan. Here are your options:

Short-form: $15/month 

Features:

  • Access to all the short-form content templates like Facebook ads, product descriptions, paragraphs, and more.

Long-Form: $19/month

Features:

  • Awesome tools to help you write short and long-form content like blog posts, ebooks, and more.


7. CopySmith — Produces Quality Content in Seconds 

copysmith, content generator


CopySmith is an AI content generator that can be used to create personal and professional documents, blogs, and presentations. It offers a wide range of features including the ability to easily create documents and presentations.

 CopySmith also has several templates that you can use to get started quickly.

 Features

  • This software allows you to create product descriptions, landing pages, and more in minutes.
  •  Offers rewritten content that is both unique and plagiarism free.
  •  This feature helps you create product descriptions for your Shopify store that are SEO-friendly and attractive to customers.
  • This is an excellent tool for new content ideas.

Pros

  • Excellent for generating eCommerce-ready content
  • No credit card is required for the free trial
  • SEO-ready content

Cons

  • The blog content isn’t the best
  • Better suited for short copy

Pricing

CopySmith offers a free trial with no credit card required. After the free trial, the paid plans are as follows:

Starter Plan: $19/month

Features:

  • Get 50 credits monthly with up to 20 plagiarism checks.

Professional Plan: $59/month

Features:

  • Upgrade to 400 credits per month with up to 100 plagiarism checks.

Enterprise – Create a custom-tailored plan by contacting the sales team.


8. Hypotenuse.ai — Best AI Writing Software for E-Commerce and Product Descriptions

hypotenuseai, content generator


Hypotenuse.ai is a free online tool that can help you create AI content. It's great for beginners because it allows you to create videos, articles, and infographics with ease. The software has a simple and easy-to-use interface that makes it perfect for new people looking for AI content generation.

Special Features

  • You can create custom-tailored copy specific to your audience’s needs. This is impressive since most free AI content generators do not offer this feature.
  • Hypotenuse takes data from social media sites, websites, and more sources to provide accurate information for your content.
  • If you’re selling a product online, you can use Hypotenuse to create automated product descriptions that are of high quality and will help you sell more products.

Pros

  • Excellent research capabilities
  • Custom-tailored copy
  • Automated product descriptions

Cons

  • No free plan

Pricing

Hypotenuse doesn’t offer a free plan. Instead, it offers a free trial period where you can take the software for a run before deciding whether it’s the right choice for you or not. Other than that, here are its paid options:

Starter Plan: $29/month

Features:

  • This plan comes with 100 credits per month (about 25k words) and one user seat. It’s an excellent option for individuals or small businesses.

Growth Plan: $59/month

Features:

  • This plan comes with 350 credits per month (about 87.5k words) and one user seat. It’s perfect for larger businesses or agencies.

Enterprise – pricing is custom, so don’t hesitate to contact the company for more information.


9. Kafkai — Leading AI Writing Tool for SEOs and Marketers

kafkai, content generator

Kafkai is an AI content generator and writing software that produces niche-specific content on a wide variety of topics. It offers a user-friendly interface, as well as a high degree of personalization.

 Features

  • Kafkai offers a host of features that make it SEO-ready, including the ability to add keywords and tags to your content.
  • Kafkai is designed explicitly for creating niche-specific content, which can be a significant advantage for businesses or bloggers looking to target a specific audience.
  •  Kafkai produces high-quality content, a significant advantage for businesses or bloggers looking to set themselves apart from the competition.
  • Kafkai offers a unique feature that allows you to seed content from other sources, which can be a significant time-saver when creating content.

Pros

  • Quick results with high efficiency
  • You can add seed content and phrases
  • It can be used to craft complete articles

Cons

  • Its long-form-content generator isn’t very high quality

Pricing

Kafkai comes with a free trial to help you understand whether it’s the right choice for you or not. Additionally, you can also take a look at its paid plans:

Writer Plan: $29/month – Create 100 articles per month, at $0.29/article.

Newsroom Plan: $49/month – Generate 250 articles a month, at $0.20/article.

Printing Press Plan: $129/month – Create up to 1,000 articles a month, at roughly $0.13/article.

Industrial Printer Plan: $199/month – Generate 2,500 articles each month, at $0.08/article.


10. Peppertype.ai — Best AI Content Writing Software for Blogging

peppertypeai, content generator


Peppertype.ai is an online AI content generator that’s easy to use and best for small business owners looking for a powerful copy and content writing tool to help them craft and generate various content for many purposes.

Features

  • You can choose from various pre-trained templates to create your content. This can save you a lot of time since you don’t have to spend time designing your templates or starting entirely from scratch.
  • Peppertype offers various copywriting frameworks to help you write better content.
  • Peppertype is lightweight and easy to use. This makes it perfect for beginners who want to get started with AI content generation.
  • Peppertype’s autocorrect feature automatically corrects your grammar and spelling mistakes as you type. This ensures that your content is free of errors.
  • Peppertype tracks user engagement data to help you create content that resonates with your audience.

Pros

  • It doesn’t have a steep learning curve
  • It helps users to create entirely original content
  • The basic plan comes with access to all of their frameworks and templates
  • Built-in style editor

Cons

  • More misses than hits in the generated content
  • Tons of typos and grammatical errors

Pricing

Unfortunately, Peppertype.ai isn’t free. However, it does have a free trial to try out the software before deciding whether it’s the right choice for you. Here are its paid plans:

Personal Plan: $35/month

Features:

  • 50,000 words included
  • 40+ content types
  • Notes and Text Editor
  • Access to templates
  • Unlimited projects
  • Active customer support

Team Plan: $199/month

Features:

  • Everything included in the Personal plan
  • Collaborate & share results
  • Access control
  • Export results
  • Request custom content types

Enterprise – pricing is custom, so please contact the company for more information.

Final thoughts:

It is no longer a secret that humans are getting overwhelmed by the daily task of creating content. Our lives are busy, and writing blog posts, video scripts, or other types of content is not everyone's day job. In comparison, AI writers are not only cheaper to hire but also perform these tasks at a high level. This article explored 10 writing tools that use AI to create better content; choose the one that meets your requirements and budget. In my opinion, Jasper AI is one of the best tools for producing high-quality content.

If you have any questions, ask in the comments section.

Note: Don't post links in your comments

Note: This article contains affiliate links, which means we make a small commission if you buy any premium plan from our link.



Match ID: 183 Score: 3.57 source: www.crunchhype.com age: 14 days
qualifiers: 3.57 mit

COP27: Without Greta, activists make waves at climate summit
Tue, 15 Nov 2022 00:05:45 GMT
Young campaigners say they are calling out "greenwashing" at COP27 from the inside.
Match ID: 184 Score: 3.57 source: www.bbc.co.uk age: 15 days
qualifiers: 3.57 mit

AMA:: We’re a group of researchers and library developers of homomorphic encryption, here to answer questions about homomorphic encryption and why it’s important for the future of data privacy!
2022-11-08T00:15:26+00:00
submitted by /u/lugh
[link] [comments]
Match ID: 185 Score: 3.57 source: www.reddit.com age: 22 days
qualifiers: 3.57 mit

What is COP27 and why is it important?
Mon, 07 Nov 2022 13:25:26 GMT
This year's climate summit comes amid spiralling energy costs and diplomatic tensions.
Match ID: 186 Score: 3.57 source: www.bbc.co.uk age: 22 days
qualifiers: 3.57 mit

Robotic Falcon Keeps Birds Away From Airports
Sun, 06 Nov 2022 14:00:00 +0000


Collisions with birds are a serious problem for commercial aircraft, costing the industry billions of dollars and killing thousands of animals every year. New research shows that a robotic imitation of a peregrine falcon could be an effective way to keep them out of flight paths.

Worldwide, so-called birdstrikes are estimated to cost the civil aviation industry almost US $1.4 billion annually. Nearby habitats are often deliberately made unattractive to birds, but airports also rely on a variety of deterrents designed to scare them away, such as loud pyrotechnics or speakers that play distress calls from common species.

However, the effectiveness of these approaches tends to decrease over time, as the birds get desensitized by repeated exposure, says Charlotte Hemelrijk, a professor on the faculty of science and engineering at the University of Groningen, in the Netherlands. Live hawks or blinding lasers are also sometimes used to disperse flocks, she says, but this is controversial as it can harm the animals, and keeping and training falcons is not cheap.

“The birds don’t distinguish [RobotFalcon] from a real falcon, it seems.”
—Charlotte Hemelrijk, University of Groningen

In an effort to find a more practical and lasting solution, Hemelrijk and colleagues designed a robotic peregrine falcon that can be used to chase flocks away from airports. The device is the same size and shape as a real hawk, and its fiberglass and carbon-fiber body has been painted to mimic the markings of its real-life counterpart.

Rather than flapping like a bird, the RobotFalcon relies on two small battery-powered propellers on its wings, which allows it to travel at around 30 miles per hour for up to 15 minutes at a time. A human operator controls the machine remotely from a hawk’s-eye perspective via a camera perched above the robot’s head.

To see how effective the RobotFalcon was at scaring away birds, the researchers tested it against a conventional quadcopter drone over three months of field testing, near the Dutch city of Workum. They also compared their results to 15 years of data collected by the Royal Netherlands Air Force that assessed the effectiveness of conventional deterrence methods such as pyrotechnics and distress calls.

Video: Flock-herding Falcon Drone Patrols Airport Flight Paths (youtu.be)

In a paper published in the Journal of the Royal Society Interface, the team showed that the RobotFalcon cleared fields of birds faster and more effectively than the drone. It also kept birds away from fields longer than distress calls, the most effective of the conventional approaches.

There was no evidence of birds getting habituated to the RobotFalcon over three months of testing, says Hemelrijk, and the researchers also found that the birds exhibited behavior patterns associated with escaping from predators much more frequently with the robot than with the drone. “The way of reacting to the RobotFalcon is very similar to the real falcon,” says Hemelrijk. “The birds don’t distinguish it from a real falcon, it seems.”

Other attempts to use hawk-imitating robots to disperse birds have had less promising results, though. Morgan Drabik-Hamshare, a research wildlife biologist at the U.S. Department of Agriculture, and her colleagues published a paper in Scientific Reports last year that described how they pitted a robotic peregrine falcon with flapping wings against a quadcopter and a fixed-wing remote-controlled aircraft.

They found the robotic falcon was the least effective of the three at scaring away turkey vultures, with the quadcopter scaring the most birds off and the remote-controlled plane eliciting the quickest response. “Despite the predator silhouette, the vultures did not perceive the predator UAS [unmanned aircraft system] as a threat,” Drabik-Hamshare wrote in an email.

Zihao Wang, an associate lecturer at the University of Sydney, in Australia, who develops UAS for bird deterrence, says the RobotFalcon does seem to be effective at dispersing flocks. But he points out that its wingspan is nearly twice the diagonal length of the quadcopter it was compared with, which means it creates a much larger silhouette when viewed from the birds’ perspective. This means the birds could be reacting more to its size than its shape, and he would like to see the RobotFalcon compared with a similar size drone in the future.

The unique design also means the robot requires an experienced and specially trained operator, Wang adds, which could make it difficult to roll out widely. A potential solution could be to make the system autonomous, he says, but it’s unclear how easy this would be.

Hemelrijk says automating the RobotFalcon is probably not feasible, both due to strict regulations around the use of autonomous drones near airports as well as the sheer technical complexity. Their current operator is a falconer with significant experience in how hawks target their prey, she says, and creating an autonomous system that could recognize and target bird flocks in a similar way would be highly challenging.

But while the need for skilled operators is a limitation, Hemelrijk points out that most airports already have full-time staff dedicated to bird deterrence, who could be trained. And given the apparent lack of habituation and the ability to chase birds in a specific direction—so that they head away from runways—she thinks the robotic falcon could be a useful addition to their arsenal.


Match ID: 187 Score: 3.57 source: spectrum.ieee.org age: 23 days
qualifiers: 3.57 mit

Cop27: the climate carnage we've faced this year – video
Thu, 03 Nov 2022 11:07:52 GMT

One by one, the grim scenarios climate scientists had outlined for the near future have been overtaken by events: extreme storms, droughts, floods and ice-sheet collapses whose sudden appearances have outstripped researchers’ worst predictions. Catastrophic climate change is happening more rapidly and with greater intensity than their grimmest warnings, it transpires.

With the 2022 global climate summit Cop 27 upon us, the Guardian looks back at how the climate crisis has affected communities around the world since the last meeting in Glasgow in 2021

Continue reading...
Match ID: 188 Score: 3.57 source: www.theguardian.com age: 26 days
qualifiers: 3.57 mit

Apple Kicks Off the Cell-Calls-From-Space Race
Thu, 08 Sep 2022 14:18:38 +0000


The race to deliver cellular calls from space passes two milestones this month and saw one major announcement last month. First, Apple will offer emergency satellite messaging on two of its latest iPhone models, the company announced on Wednesday. Second, AST SpaceMobile plans a launch on Saturday, 10 September, of an experimental satellite to test full-fledged satellite 5G service. In addition, T-Mobile USA and SpaceX intend to offer their own messaging and limited data service via the second generation of SpaceX’s Starlink satellite constellation, as the two companies announced on 25 August.

Each contender is taking a different approach to space-based cellular service. The Apple offering uses the existing satellite bandwidth Globalstar once used for messaging offerings, but without the need for a satellite-specific handset. The AST project and another company, Lynk Global, would use a dedicated network of satellites with larger-than-normal antennas to produce a 4G, 5G, and someday 6G cellular signal compatible with any existing 4G-compatible phone (as detailed in other recent IEEE Spectrum coverage of space-based 5G offerings). Assuming regulatory approval is forthcoming, the technology would work first in equatorial regions and then across more of the planet as these providers expand their satellite constellations. T-Mobile and Starlink’s offering would work in the former PCS band in the United States. SpaceX, like AST and Lynk, would need to negotiate access to spectrum on a country-by-country basis.

Apple’s competitors are unlikely to see commercial operations before 2024.

“Regulators have not decided on the power limits from space, what concerns there are about interference, especially across national borders. There’s a whole bunch of regulatory issues that simply haven’t been thought about to date.”
—Tim Farrar, telecommunications consultant

The T-Mobile–Starlink announcement is “in some ways an endorsement” of AST and Lynk’s proposition, and “in other ways a great threat,” says telecommunications consultant Tim Farrar of Tim Farrar Associates in Menlo Park, Calif. AST and Lynk have so far told investors they expect their national mobile network operator partners to charge per use or per day, but T-Mobile announced that they plan to include satellite messaging in the 1,900-megahertz range in their existing services. Apple said their Emergency SOS via Satellite service would be free the first two years for U.S. and Canadian iPhone 14 buyers, but did not say what it would cost after that. For now, the Globalstar satellites it is using cannot offer the kind of broadband bandwidth AST has promised, but Globalstar has reported to investors orders for new satellites that might offer new capabilities, including new gateways.

Even under the best conditions—a clear view of the sky—users will need 15 seconds to send a message via Apple’s service. They will also have to follow onscreen guidance to keep the device pointed at the satellites they are using. Light foliage can cause the same message to take more than a minute to send. Ashley Williams, a satellite engineer at Apple who recorded the service’s announcement, also mentioned a data-compression algorithm and a series of rescue-related suggested auto-replies intended to minimize the amount of data that users would need to send during a rescue.
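The data-saving logic behind canned replies is easy to picture. In the generic sketch below (an illustration only; Apple's actual protocol, message set, and encoding are not public), a predefined reply travels as a one-byte index while free-form text costs a byte or more per character:

# Hypothetical canned-reply table for illustration purposes.
CANNED_REPLIES = [
    "Yes", "No", "I am injured", "Send help to my location", "I am safe now",
]

def encode_reply(text: str) -> bytes:
    if text in CANNED_REPLIES:
        return bytes([CANNED_REPLIES.index(text)])    # one byte on the satellite link
    return text.encode("utf-8")                       # free-form text costs far more

print(len(encode_reply("Send help to my location")))     # 1
print(len("Send help to my location".encode("utf-8")))   # 24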

Meanwhile, AST SpaceMobile says it aims to launch an experimental satellite Saturday, 10 September, to test its cellular broadband offering.

Last month’s T-Mobile-SpaceX announcement “helped the world focus attention on the huge market opportunity for SpaceMobile, the only planned space-based cellular broadband network. BlueWalker 3, which has a 693 sq ft array, is scheduled for launch within weeks!” tweeted AST SpaceMobile CEO Abel Avellan on 25 August. The size of the array matters because AST SpaceMobile has so far indicated in its applications for experimental s