RSS Rabbit
News that matters, fast.
Good luck, have news.
Happy scrolling!

Date/Time of Last Update: Mon Nov 28 06:00:32 2022 UTC




********** CLIMATE **********



Dimming the Sun to Cool the Planet Is a Desperate Idea, Yet We’re Inching Toward It
Tue, 22 Nov 2022 11:00:00 +0000
The scientists who study solar geoengineering don’t want anyone to try it. But climate inaction is making it more likely.
Match ID: 0 Score: 17.14 source: www.newyorker.com age: 5 days
qualifiers: 8.57 climate change, 8.57 carbon

Australia politics live: Labor backs ADF chief on Brereton amid reports show-cause notices issued to veterans over honours
Mon, 28 Nov 2022 05:37:20 GMT

Follow the day’s news live

Coalition is alienating young voters, PM says

Over on ABC radio Melbourne, Anthony Albanese has been asked what he thinks about the Victorian election and the lessons for the Liberal party.

One of the things that we’re seeing, I believe, is an alienation of younger voters from the Coalition.

When you have a position where you have senior members of the Coalition [who] can’t say that climate change is real in spite of the floods and bushfires and all of the evidence of the heating of the planet that we’re seeing, let alone any time something is put up to take action on climate change. They dismiss it.

[It] depends where you work. There will be some businesses, for example, which refuse to bargain with their staff where they used to, because the better-off-overall test became too complex. Getting rid of the red tape we got there will bring some of the businesses back to the table straight away.

Also, any businesses that are concerned, like ... that actually don’t want to be involved in multi-employer bargaining, the simple fix for them is for them to negotiate with their staff now.

Continue reading...
Match ID: 1 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 climate change

Why are World Cup players wearing strange face masks on the pitch?
Mon, 28 Nov 2022 03:47:58 GMT

Several players, including Son Heung-min, Joško Gvardiol and Ellyes Skhiri, have been spotted sporting unusual face coverings during matches in Qatar

They have drawn comparisons to Hannibal Lecter and the Phantom of the Opera, but the strange face masks popping up on World Cup pitches in Qatar are not fashion statements; they are hi-tech protective devices.

The masks are made from materials like polycarbonate and protect athletes who have sustained facial injuries, ensuring they can play on without risking further damage. Some are 3D printed to exactly fit a scan of the contours of each athlete’s face.

Continue reading...
Match ID: 2 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 carbon

Queensland will continue coal exports ‘as long as the market dictates’ despite emissions targets
Mon, 28 Nov 2022 02:45:53 GMT

Annastacia Palaszczuk says demand for steelmaking metallurgical coal will drive its production in the state

Queensland carbon emissions targets will dramatically change in coming years, but coal exports will continue for “as long as the market dictates”, Annastacia Palaszczuk says.

The premier has promised to slash her state’s emissions 30% below 2005 levels by 2030, and achieve net zero by 2050.

Sign up for Guardian Australia’s free morning and afternoon email newsletters for your daily news roundup

Continue reading...
Match ID: 3 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 carbon

The Fijian island being strangled by vines
Mon, 28 Nov 2022 00:52:04 GMT

Vanua Levu is being overrun by invasive vines – and the increasing number of natural disasters, brought on by climate change, is only making things worse

In Vanua Levu, the second largest island of Fiji, every contour drips with green. The landscape is impossibly lush and verdant. But upon closer inspection, it’s evident that nearly everything is shrouded in vines.

There are several vine species in Fiji, one of which is the invasive kudzu, introduced by US troops in the second world war as living camouflage for Allied equipment. But, as botanist Judith Sumner writes: “under tropical Pacific conditions kudzu quickly became an invasive species with a growth rate that aggressively outpaced native Fijian flora.”

Continue reading...
Match ID: 4 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 climate change

Ineos in talks with Rolls-Royce on mini-nuclear power plant technology
Sun, 27 Nov 2022 17:43:12 GMT

Chemicals giant wants to produce zero-carbon electricity to power planned hydrogen systems at Grangemouth refinery

Ineos, the chemicals company owned by the billionaire Sir Jim Ratcliffe, has held exploratory talks with Rolls-Royce on nuclear technology that could eventually be chosen to provide zero-carbon energy to the Grangemouth refinery in Scotland.

A deal between Ineos, one of the UK’s biggest privately owned companies, and FTSE 100 engineer Rolls-Royce could help Ineos with the tricky task of decarbonising the giant refinery, while also providing Rolls-Royce with an early customer for a new technology it hopes will transform its prospects.

Continue reading...
Match ID: 5 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 carbon

Neil Young Embraces Imperfection
Sun, 27 Nov 2022 15:23:20 +0000
The singer-songwriter discusses his new album with the theme of climate change, his friendship with Rick Rubin, and recording melodies on his flip phone.
Match ID: 6 Score: 15.00 source: www.newyorker.com age: 0 days
qualifiers: 15.00 climate change

‘We need a breakthrough deal on biodiversity’: can Montreal summit deliver for nature?
Sun, 27 Nov 2022 10:00:05 GMT

In 2010, politicians pledged to halt the devastation of Earth’s wildlife. Since then, no progress has been made. And despite glimmers of hope, prospects look grim for next month’s top-level meeting in Canada

In 2010, politicians and scientists made a pledge to halt the devastating reductions in wildlife numbers that had been denuding the planet of its animals and sea creatures for the previous century. At that time, wild animal populations were declining by about 2.5% a year on average as habitat loss, invasive species, pollution, climate change and disease ravaged habitats and lives. Such losses must end within a decade, it was agreed.

Next month, conservationists and politicians will meet in Montreal for this year’s biodiversity summit where they will judge what progress has been made over the past 12 years. “It will be an easy assessment to make,” said Andrew Terry, the director of conservation at ZSL, the Zoological Society of London. “Absolutely no progress has been made. Populations have continued to decline at a rate of around 2.5% a year. We haven’t slowed the destruction in the slightest. Our planet’s biodiversity is now in desperate peril as a result.”

Continue reading...
Match ID: 7 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 climate change

Melting point: could ‘cloud brightening’ slow the thawing of the Arctic?
Sun, 27 Nov 2022 10:00:03 GMT

The climate emergency is prompting some scientists to suggest extreme measures. But whether you call it geoengineering or biomimicry, others feel interfering with nature will have too high a cost

Like the apocryphal frog that doesn’t notice the rising water temperature until it’s boiled alive, we as a global society are still struggling to recognise that anthropic global warming is hastening us towards irreversible environmental and ecological catastrophe. There is consensus among climate scientists about the urgency of the situation, and widespread political acknowledgment that the use of carbon fuels must be reduced. Yet targets have not been met; as the UK’s Cop26 president, Alok Sharma, said, the lack of progress at this month’s Cop27 shows the 1.5C limit is on “life support”, and there is growing evidence that we are further along the road to a tipping point than previously thought.

So advanced is this process that some scientists are beginning to argue that merely cutting carbon is not enough and an emergency measure involving what is known as geoengineering is called for. There have been a number of plans suggested, some more outlandish than others. They range from building giant mirrors in space to reflect away sunlight to painting the roofs of buildings white to help counteract heatwaves in cities.

Continue reading...
Match ID: 8 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 carbon

Climate change: Could centuries-old wheat help feed the planet?
Sun, 27 Nov 2022 00:22:39 GMT
Scientists are searching through a museum's wheat collection to find the crop of the future.
Match ID: 9 Score: 15.00 source: www.bbc.co.uk age: 1 day
qualifiers: 15.00 climate change

The Geological Fluke That's Protecting Sea Life in the Galapagos
Sat, 26 Nov 2022 13:00:00 +0000
The islands are in the line of an icy current that provides marine ecosystems refuge amid warming oceans. But the good news might not last for long.
Match ID: 10 Score: 15.00 source: www.wired.com age: 1 day
qualifiers: 15.00 climate change

Americans remain resistant to the lure of EVs, which are still unaffordable
Wed, 23 Nov 2022 16:37:36 +0000
The lack of cheaper electric vehicles is going to be a problem.
Match ID: 11 Score: 10.71 source: arstechnica.com age: 4 days
qualifiers: 10.71 carbon

Delving for Joules in the Fusion Mines
Tue, 22 Nov 2022 16:00:00 +0000


The Big Picture features technology through the lens of photographers.

Every month, IEEE Spectrum selects the most stunning technology images recently captured by photographers around the world. We choose images that reflect an important advance, or a trend, or that are just mesmerizing to look at. We feature all images on our site, and one also appears in our monthly print edition.

Enjoy the latest images, and if you have suggestions, leave a comment below.

Shot of Nuclear Fusion


A woman using robotic equipment in front of multiple screens.

An old saw regarding the multitude of dashed hopes about fusion energy’s promise goes “Fusion is 30 years away—and it always will be.” After decades of researchers predicting that fusion was just around the corner, a team at the UK Atomic Energy Authority (which hosts the Joint European Torus [JET] plasma physics experiment) did something that suggests scientists are homing in on exactly which corner that is. In February 2022, the JET experimenters induced the single greatest sustained energy pulse ever created by humans. It had twice the energy of the previous record-setting blast, triggered a quarter century earlier. A doubling every 25 years is far behind the pace of the microchip improvements described by Moore’s Law. But that hasn’t dampened enthusiasm over an alternative energy source that could make fossil fuels and their effect on the environment relics of a bygone era. In the foreground of the picture is a trainee learning how to use the systems involved in accomplishing the feat.

Leon Neal/Getty Images
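To put that doubling pace in perspective, here is a quick back-of-envelope comparison (our own arithmetic, assuming the conventional two-year doubling period for Moore’s Law):

```python
# Illustrative growth comparison: one doubling per 25 years (JET's record
# pulse) versus a Moore's-Law-style doubling every two years.
years = 25
fusion_growth = 2 ** (years / 25)  # energy pulse doubled once in 25 years
moore_growth = 2 ** (years / 2)    # transistor counts: ~2-year doubling
print(f"Over {years} years: fusion x{fusion_growth:.0f}, "
      f"Moore's Law x{moore_growth:,.0f}")
# -> Over 25 years: fusion x2, Moore's Law x5,793
```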


A laser-cut, rice-cake-based drone.

Turning Drones into Scones

What has two wings, can reach a person stranded in a disaster zone, and doubles as a source of precious calories when no other food is available? This drone, designed and built by a team of researchers at the Swiss Federal Institute of Technology Lausanne (EPFL), has wings made entirely of laser-cut rice cakes held together with “glue” made from gelatin. The EPFL group says it plans to keep refining the edible aircraft to improve its aeronautics and enhance its nutritional profile.

EPFL


Green laser light illuminates a metasurface that is a hundred times as thin as paper.

Metasurface Weaves Entangled Photons

Creating the quantum mechanical state of entanglement (in which paired atoms influence each other from across vast distances) has heretofore been reminiscent of the story of Noah’s ark. The tried-and-true method for entangling photons (by shining light through a nonlinear crystal) puts them in this state two by two, the way the animals are said to have boarded the ark. The ambition of quantum researchers has been to expand these connections from pairs to parties. And it seems they’ve figured out how to reliably entangle multiple photons in a complicated web, using half-millimeter-thick metasurfaces covered with forests of microscopic pillars. This, say experts, will not only greatly simplify the setup needed for quantum technology but also help support more-complex quantum applications.

Craig Fritz


A large camera within a lab in Chile.

Colossal Camera Coming to Chile

In a world obsessed with miniaturization, it’s almost shocking when, every now and then, a big deal is made of something, er, big. That is certainly the case with the new camera being built for the Vera C. Rubin Observatory in Chile. When the camera is delivered and set up in May 2023, its 1.57-meter-wide lens will make it the world’s largest device for taking snapshots. The gargantuan point-and-shoot instrument will capture images of a swath of the sky seven times the width of the moon.

Jacqueline Ramseyer Orrell/SLAC National Accelerator Laboratory


A young woman with brown eyes and neck length hair dyed rose gold sits at a white table. In one hand she holds a carbon fiber robotic arm and hand. Her other arm ends near her elbow. Her short sleeve shirt has a pattern on it of illustrated hands.

Bionic Hands Haven’t Fully Grasped Users’ Needs

When we’re carrying out our quotidian activities, most of us rarely stop to think about what marvels of engineering our arms and hands are. But for those who have lost the use of a limb—or, like Britt Young, the woman pictured here, were born without one—there’s hardly ever a day when the challenges of navigating a two-handed world are not in the forefront of their thoughts. In Young’s October 2022 IEEE Spectrum cover story, she discusses these challenges, as well as how the bionic-hand technology intended to come to the rescue falls short of designers’ and users’ expectations.

Gabriela Hasbun. Makeup: Maria Nguyen for Mac Cosmetics; Hair: Joan Laqui for Living Proof


Match ID: 12 Score: 8.57 source: spectrum.ieee.org age: 5 days
qualifiers: 8.57 carbon

The Transistor of 2047: Expert Predictions
Mon, 21 Nov 2022 16:00:01 +0000


The 100th anniversary of the invention of the transistor will happen in 2047. What will transistors be like then? Will they even be the critical computing element they are today? IEEE Spectrum asked experts from around the world for their predictions.


What will transistors be like in 2047?

Expect transistors to be even more varied than they are now, says one expert. Just as processors have evolved from CPUs to include GPUs, network processors, AI accelerators, and other specialized computing chips, transistors will evolve to fit a variety of purposes. “Device technology will become application domain–specific in the same way that computing architecture has become application domain–specific,” says H.-S. Philip Wong, an IEEE Fellow, professor of electrical engineering at Stanford University, and former vice president of corporate research at TSMC.

Despite the variety, the fundamental operating principle—the field effect that switches transistors on and off—will likely remain the same, suggests Suman Datta, an IEEE Fellow, professor of electrical and computer engineering at Georgia Tech, and director of the multi-university nanotech research center ASCENT. This device will likely have minimum critical dimensions of 1 nanometer or less, enabling device densities of 10 trillion per square centimeter, says Tsu-Jae King Liu, an IEEE Fellow, dean of the college of engineering at the University of California, Berkeley, and a member of Intel’s board of directors.
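Those figures are easy to sanity-check with a little arithmetic of our own (not the experts’): a density of 10 trillion devices per square centimeter implies about 10 square nanometers per device.

```python
# Back-of-envelope check of the quoted 2047 density figure.
devices_per_cm2 = 10e12          # 10 trillion devices per square centimeter
nm2_per_cm2 = (1e7) ** 2         # 1 cm = 1e7 nm, so 1 cm^2 = 1e14 nm^2
area = nm2_per_cm2 / devices_per_cm2
print(f"{area:.0f} nm^2 per device (~{area ** 0.5:.1f} nm on a side)")
# -> 10 nm^2 per device (~3.2 nm on a side), consistent with ~1-nm
#    critical dimensions plus wiring and isolation overhead
```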

"It is safe to assume that the transistor or switch architectures of 2047 have already been demonstrated on a lab scale"—Sri Samavedam

Experts seem to agree that the transistor of 2047 will need new materials and probably a stacked or 3D architecture, expanding on the planned complementary field-effect transistor (CFET, or 3D-stacked CMOS). [For more on the CFET, see “Taking Moore’s Law to New Heights.”] And the transistor channel, which now runs parallel to the plane of the silicon, may need to become vertical in order to continue to increase in density, says Datta.

AMD senior fellow Richard Schultz suggests that the main aim in developing these new devices will be power. “The focus will be on reducing power and the need for advanced cooling solutions,” he says. “Significant focus on devices that work at lower voltages is required.”

Will transistors still be the heart of most computing in 25 years?

It’s hard to imagine a world where computing is not done with transistors, but, of course, vacuum tubes were once the digital switch of choice. Startup funding for quantum computing, which does not directly rely on transistors, reached US $1.4 billion in 2021, according to McKinsey & Co.

But advances in quantum computing won’t happen fast enough to challenge the transistor by 2047, experts in electron devices say. “Transistors will remain the most important computing element,” says Sayeef Salahuddin, an IEEE Fellow and professor of electrical engineering and computer science at the University of California, Berkeley. “Currently, even with an ideal quantum computer, the potential areas of application seem to be rather limited compared to classical computers.”

Sri Samavedam, senior vice president of CMOS technologies at the European chip R&D center Imec, agrees. “Transistors will still be very important computing elements for a majority of the general-purpose compute applications,” says Samavedam. “One cannot ignore the efficiencies realized from decades of continuous optimization of transistors.”

Has the transistor of 2047 already been invented?

Twenty-five years is a long time, but in the world of semiconductor R&D, it’s not that long. “In this industry, it usually takes about 20 years from [demonstrating a concept] to introduction into manufacturing,” says Samavedam. “It is safe to assume that the transistor or switch architectures of 2047 have already been demonstrated on a lab scale” even if the materials involved won’t be exactly the same. King Liu, who demonstrated the modern FinFET about 25 years ago with colleagues at Berkeley, agrees.

But the idea that the transistor of 2047 is already sitting in a lab somewhere isn’t universally shared. Salahuddin, for one, doesn’t think it’s been invented yet. “But just like the FinFET in the 1990s, it is possible to make a reasonable prediction for the geometric structure” of future transistors, he says.

AMD’s Schultz says you can glimpse this structure in proposed 3D-stacked devices made of 2D semiconductors or carbon-based semiconductors. “Device materials that have not yet been invented could also be in scope in this time frame,” he adds.

Will silicon still be the active part of most transistors in 2047?

Experts say that the heart of most devices, the transistor channel region, will still be silicon, or possibly silicon-germanium—which is already making inroads—or germanium. But in 2047 many chips may use semiconductors that are considered exotic today. These could include oxide semiconductors like indium gallium zinc oxide; 2D semiconductors, such as the metal dichalcogenide tungsten disulfide; and one-dimensional semiconductors, such as carbon nanotubes. Or even “others yet to be invented,” says Imec’s Samavedam.

"Transistors will remain the most important computing element"—Sayeef Salahuddin

Silicon-based chips may be integrated in the same package with chips that rely on newer materials, just as processor makers are today integrating chips using different silicon manufacturing technologies into the same package, notes IEEE Fellow Gabriel Loh, a senior fellow at AMD.

Which semiconductor material is at the heart of the device may not even be the central issue in 2047. “The choice of channel material will essentially be dictated by which material is the most compatible with many other materials that form other parts of the device,” says Salahuddin. And we know a lot about integrating materials with silicon.

In 2047, where will transistors be common where they are not found today?

Everywhere. No, seriously. Experts really do expect some amount of intelligence and sensing to creep into every aspect of our lives. That means devices will be attached to our bodies and implanted inside them; embedded in all kinds of infrastructure, including roads, walls, and houses; woven into our clothing; stuck to our food; swaying in the breeze in grain fields; watching just about every step in every supply chain; and doing many other things in places nobody has thought of yet.

Transistors will be “everywhere that needs computation, command and control, communications, data collection, storage and analysis, intelligence, sensing and actuation, interaction with humans, or an entrance portal to the virtual and mixed reality world,” sums up Stanford’s Wong.

This article appears in the December 2022 print issue as “The Transistor of 2047.”


Match ID: 13 Score: 6.43 source: spectrum.ieee.org age: 6 days
qualifiers: 6.43 carbon

Climate Change from A to Z
Mon, 21 Nov 2022 11:00:00 +0000
The stories we tell ourselves about the future.
Match ID: 14 Score: 6.43 source: www.newyorker.com age: 6 days
qualifiers: 6.43 climate change

‘We couldn’t fail them’: how Pakistan’s floods spurred fight at Cop for loss and damage fund
Sun, 20 Nov 2022 16:24:54 GMT

With the deadly devastation fresh in the world’s mind, Pakistan pushed for damage funds with other frontline countries

In early September, after unprecedented rainfall had left a third of Pakistan under water, its climate change minister set out the country’s stall for Cop27. “We are on the frontline and intend to keep loss and damage and adapting to climate catastrophes at the core of our arguments and negotiations. There will be no moving away from that,” Sherry Rehman said.

Pakistan brought that resolve to the negotiations in Sharm el-Sheikh and, as president of the G77 plus China negotiating bloc, succeeded in keeping developing countries united on loss and damage – despite efforts by some rich countries to divide them. Its chief negotiator, Nabeel Munir, a career diplomat, was backed by a team of savvy veteran negotiators who had witnessed the devastation and suffering from the floods, which caused $30bn (£25bn) of damage and economic losses. Every day, Munir repeated the same message: “Loss and damage is not charity, it’s about climate justice.”

Continue reading...
Match ID: 15 Score: 4.29 source: www.theguardian.com age: 7 days
qualifiers: 4.29 climate change

Climate change: Five key takeaways from COP27
Sun, 20 Nov 2022 11:04:11 GMT
The biggest win on climate since the Paris Agreement in 2015... or the biggest loss?
Match ID: 16 Score: 4.29 source: www.bbc.co.uk age: 7 days
qualifiers: 4.29 climate change

Solar-to-Jet-Fuel System Readies for Takeoff
Wed, 03 Aug 2022 17:00:00 +0000


As climate change edges from crisis to emergency, the aviation sector looks set to miss its 2050 goal of net-zero emissions. In the five years preceding the pandemic, the top four U.S. airlines—American, Delta, Southwest, and United—saw a 15 percent increase in the use of jet fuel. Despite continual improvements in engine efficiencies, that number is projected to keep rising.

A glimmer of hope, however, comes from solar fuels. For the first time, scientists and engineers at the Swiss Federal Institute of Technology (ETH) in Zurich have reported a successful demonstration of an integrated fuel-production plant for solar kerosene. Using concentrated solar energy, they were able to produce kerosene from water vapor and carbon dioxide drawn directly from air. Fuel thus produced is a drop-in alternative to fossil-derived fuels and can be used with existing storage and distribution infrastructures and engines.

Fuels derived from synthesis gas (or syngas)—an intermediate product that is a specific mixture of carbon monoxide and hydrogen—are a known alternative to conventional, fossil-derived fuels. Liquid fuel is produced from syngas by Fischer-Tropsch (FT) synthesis, in which chemical reactions convert the carbon monoxide and hydrogen into hydrocarbons. The team of researchers at ETH found that a solar-driven thermochemical method to split water and carbon dioxide using a metal oxide redox cycle can produce renewable syngas. They demonstrated the process in a rooftop solar refinery at the ETH Machine Laboratory in 2019.
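For reference, the overall FT reaction producing alkanes from syngas can be written in its generic textbook form (standard chemistry, not an equation quoted from the ETH paper); kerosene-range hydrocarbons correspond to roughly n = 8 to 16:

$$(2n+1)\,\mathrm{H_2} + n\,\mathrm{CO} \;\longrightarrow\; \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}$$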

Close-up of the reticulated porous structure, made of ceria, used in the solar reactor to thermochemically split CO2 and H2O and produce syngas, a specific mixture of H2 and CO. ETH Zurich

The current pilot-scale solar tower plant was set up at the IMDEA Energy Institute in Spain. It scales up the solar reactor of the 2019 experiment by a factor of 10, says Aldo Steinfeld, an engineering professor at ETH who led the study. The fuel plant brings together three subsystems—the solar tower concentrating facility, solar reactor, and gas-to-liquid unit.

First, a heliostat field made of mirrors that rotate to follow the sun concentrates solar irradiation into a reactor mounted on top of the tower. The reactor is a cavity receiver lined with reticulated porous ceramic structures made of ceria (or cerium(IV) oxide). Within the reactor, the concentrated sunlight creates a high-temperature environment of about 1,500 °C, hot enough to split the carbon dioxide and water captured from the atmosphere and produce syngas. Finally, the syngas is processed into kerosene in the gas-to-liquid unit. A centralized control room operates the whole system.
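The chemistry at the heart of the reactor is the two-step ceria redox cycle, shown here in its generic textbook form (the nonstoichiometry δ depends on operating conditions; these equations describe the general scheme rather than being taken from the paper). The endothermic reduction step is driven by the concentrated sunlight; the oxidation steps regenerate the ceria while yielding H2 and CO, the two components of syngas:

$$\mathrm{CeO_2} \;\longrightarrow\; \mathrm{CeO_{2-\delta}} + \tfrac{\delta}{2}\,\mathrm{O_2} \qquad (\text{solar reduction, about } 1{,}500\,^\circ\mathrm{C})$$

$$\mathrm{CeO_{2-\delta}} + \delta\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{CeO_2} + \delta\,\mathrm{H_2}, \qquad \mathrm{CeO_{2-\delta}} + \delta\,\mathrm{CO_2} \;\longrightarrow\; \mathrm{CeO_2} + \delta\,\mathrm{CO}$$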

Fuel produced using this method closes the fuel carbon cycle as it only produces as much carbon dioxide as has gone into its manufacture. “The present pilot fuel plant is still a demonstration facility for research purposes,” says Steinfeld, “but it is a fully integrated plant and uses a solar-tower configuration at a scale that is relevant for industrial implementation.”

“The solar reactor produced syngas with selectivity, purity, and quality suitable for FT synthesis,” the authors noted in their paper. They also reported good material stability for multiple consecutive cycles. They observed a value of 4.1 percent solar-to-syngas energy efficiency, which Steinfeld says is a record value for thermochemical fuel production, even though better efficiencies are required to make the technology economically competitive.

Schematic of the solar tower fuel plant. A heliostat field concentrates solar radiation onto a solar reactor mounted on top of the solar tower. The solar reactor co-splits water and carbon dioxide and produces a mixture of molecular hydrogen and carbon monoxide, which in turn is processed to drop-in fuels such as kerosene. ETH Zurich

“The measured value of energy conversion efficiency was obtained without any implementation of heat recovery,” he says. The heat rejected during the redox cycle of the reactor accounted for more than 50 percent of the solar-energy input. “This fraction can be partially recovered via thermocline heat storage. Thermodynamic analyses indicate that sensible heat recovery could potentially boost the energy efficiency to values exceeding 20 percent.”

To do so, more work is needed to optimize the ceramic structures lining the reactor, something the ETH team is actively working on, by looking at 3D-printed structures for improved volumetric radiative absorption. “In addition, alternative material compositions, that is, perovskites or aluminates, may yield improved redox capacity, and consequently higher specific fuel output per mass of redox material,” Steinfeld adds.

The next challenge for the researchers, he says, is the scale-up of their technology for higher solar-radiative power inputs, possibly using an array of solar cavity-receiver modules on top of the solar tower.

To bring solar kerosene into the market, Steinfeld envisages a quota-based system. “Airlines and airports would be required to have a minimum share of sustainable aviation fuels in the total volume of jet fuel that they put in their aircraft,” he says. This is possible as solar kerosene can be mixed with fossil-based kerosene. This would start out small, as little as 1 or 2 percent, which would raise the total fuel costs at first, though minimally—adding “only a few euros to the cost of a typical flight,” as Steinfeld puts it.

Meanwhile, rising quotas would lead to investment, and to falling costs, eventually replacing fossil-derived kerosene with solar kerosene. “By the time solar jet fuel reaches 10 to 15 percent of the total jet-fuel volume, we ought to see the costs for solar kerosene nearing those of fossil-derived kerosene,” he adds.
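To make the quota arithmetic concrete, here is a minimal sketch with purely hypothetical prices (the article gives no cost figures; both per-liter numbers below are assumptions, and since real solar-kerosene prices would fall as quotas rise, the later premiums here are overstated):

```python
# Blended jet-fuel cost under a solar-kerosene quota (illustrative only;
# both prices are assumed, not taken from the article or Synhelion).
fossil_cost = 0.80   # assumed $/liter, fossil kerosene
solar_cost = 4.00    # assumed $/liter, early solar kerosene

for quota in (0.01, 0.02, 0.10, 0.15):
    blended = quota * solar_cost + (1 - quota) * fossil_cost
    print(f"quota {quota:4.0%}: ${blended:.2f}/L "
          f"(+{blended / fossil_cost - 1:.1%} vs. fossil)")
# quota   1%: $0.83/L (+4.0% vs. fossil)
# quota  15%: $1.28/L (+60.0% vs. fossil)
```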

However, we may not have to wait too long for flights to operate solely on solar fuel. A commercial spin-off of Steinfeld’s laboratory, Synhelion, is working on commissioning the first industrial-scale solar fuel plant in 2023. The company has also collaborated with the airline SWISS to conduct a flight solely using its solar kerosene.


Match ID: 17 Score: 4.29 source: spectrum.ieee.org age: 116 days
qualifiers: 2.14 climate change, 2.14 carbon

The EV Transition Explained: Battery Challenges
Sat, 19 Nov 2022 19:30:00 +0000


“Energy and information are two basic currencies of organic and social systems,” the economics Nobelist Herb Simon once observed. “A new technology that alters the terms on which one or the other of these is available to a system can work on it the most profound changes.”

Electric vehicles at scale alter the terms of both basic currencies concurrently. Reliable, secure supplies of minerals and software are core elements for EVs, which represent a “shift from a fuel-intensive to a material-intensive energy system,” according to a report by the International Energy Agency (IEA). For example, the mineral requirements for an EV’s batteries and electric motors are six times those of an internal-combustion-engine (ICE) vehicle, which can increase the average weight of an EV by 340 kilograms (750 pounds). For something like the Ford Lightning, the added weight can be more than twice that amount.

EVs also create a shift from an electromechanical-intensive to an information-intensive vehicle. EVs offer a virtual clean slate from which to accelerate the design of safe, software-defined vehicles, with computing and supporting electronics being the prime enabler of a vehicle’s features, functions, and value. Software also allows for the decoupling of the internal mechanical connections needed in an ICE vehicle, permitting an EV to be controlled remotely or autonomously. An added benefit is that the loss of the ICE power train not only reduces the components a vehicle requires but also frees up space for increased passenger comfort and storage.

The effects of Simon’s profound changes are readily apparent, forcing a 120-year-old industry to fundamentally reinvent itself. EVs require automakers to design new manufacturing processes and build plants to make both EVs and their batteries. Ramping up the battery supply chain is the automakers’ current “most challenging topic,” according to VW chief financial officer Arno Antlitz.

It can take five or more years to get a lithium mine up and going, but operations can start only after it has secured the required permits, a process that itself can take years.

These plants are also very expensive. Ford and its Korean battery supplier SK Innovation are spending US $5.6 billion to produce F-Series EVs and batteries in Stanton, Tenn., for example, while GM is spending $2 billion to produce its new Cadillac Lyriq EVs in Spring Hill, Tenn. As automakers expand their lines of EVs, tens of billions more will need to be invested in both manufacturing and battery plants. It is little wonder that Tesla CEO Elon Musk calls EV factories “gigantic money furnaces.”

Furthermore, adds Kristin Dziczek, a policy analyst with the Federal Reserve Bank of Chicago, there are scores of new global EV competitors actively seeking to replace the legacy automakers. The “simplicity” of EVs in comparison with ICE vehicles allows these disruptors to compete virtually from scratch with legacy automakers, not only in the car market itself but for the material and labor inputs as well.

Batteries and the supply-chain challenge

Another critical question is whether all the planned battery-plant output will support expected EV production demands. For instance, the United States will require 8 million EV batteries annually by 2030 if its target to make EVs half of all new-vehicle sales is met, with that number rising each year after. As IEA executive director Fatih Birol observes, “Today, the data shows a looming mismatch between the world’s strengthened climate ambitions and the availability of critical minerals that are essential to realizing those ambitions.”

This mismatch worries automakers. GM, Ford, Tesla, and others have moved to secure batteries through 2025, but it could be tricky after that. Rivian Automotive chief executive RJ Scaringe was recently quoted in the Wall Street Journal as saying that “90 to 95 percent of the (battery) supply chain does not exist,” and that the current semiconductor chip shortage is “a small appetizer to what we are about to feel on battery cells over the next two decades.”

The competition for securing raw materials, along with the increased consumer demand, has caused EV prices to spike. Ford has raised the price of the Lightning $6,000 to $8,500, and CEO Jim Farley bluntly states that in regard to material shortages in the foreseeable future, “I don’t think we should be confident in any other outcomes than an increase in prices.”

Stiff Competition for Engineering Talent


One critical area of resource competition is over the limited supply of software and systems engineers with the mechatronics and robotics expertise needed for EVs. Major automakers have moved aggressively to bring more software and systems-engineering expertise on board, rather than have it reside at their suppliers, as they have traditionally done. Automakers feel that if they're not in control of the software, they're not in control of their product.

Volvo’s CEO Jim Rowan stated earlier this year that increasing the computing power in EVs will be harder, and will alter the automotive industry more, than the switch from ICE vehicles to EVs. This means that EV winners and losers will in great part be separated by their “relative strength in their cyberphysical systems engineering,” states Chris Paredis of Clemson University.

Even for the large auto suppliers, the transition to EVs will not be an easy road. For instance, automakers are demanding these suppliers absorb more cost cuts because automakers are finding EVs so expensive to build. Not only do automakers want to bring cutting-edge software expertise in-house, they want greater inside expertise in critical EV supply-chain components, especially batteries.

Automakers, including Tesla, are all scrambling for battery talent, with bidding wars reportedly breaking out to acquire top candidates. With automakers planning to spend more than $13 billion to build at least 13 new EV battery plants in North America within the next five to seven years, experienced management and production-line talent will likely be in extremely short supply. Tesla’s Texas Gigafactory needs some 10,000 workers alone, for example. With at least 60 new battery plants planned to be in operation globally by 2030, and scores needed soon afterward, major battery makers are already highlighting their expected skill shortages.


The underlying reason for the worry: Supplying sufficient raw materials to existing and planned battery plants as well as to the manufacturers of other renewable energy sources and military systems—who are competing for the same materials—has several complications to overcome. Among them is the need for more mines to provide the metals required, which have spiked in price as demand has increased. For example, while demand for lithium is growing rapidly, investment in mines has significantly lagged the investment that has been aimed toward EVs and battery plants. It can take five or more years to get a lithium mine up and going, but operations can start only after it has secured the required permits, a process that itself can take years.

Mining the raw materials, of course, assumes that there is sufficient refining capability to process them, which, outside of China, is limited. This is especially true in the United States, which, according to a Biden Administration special supply-chain investigative report, has “limited raw material production capacity and virtually no processing capacity.” Consequently, the report states, the United States “exports the limited raw materials produced today to foreign markets.” For example, output from the only nickel mine in the United States, the Eagle mine in Minnesota, is sent to Canada for smelting.

“Energy and information are two basic currencies of organic and social systems. A new technology that alters the terms on which one or the other of these is available to a system can work on it the most profound changes.” —Herb Simon

One possible solution is to move away from lithium-ion batteries and nickel metal hydride batteries to other battery chemistries such as lithium-iron phosphate, lithium-sulfur, lithium-metal, and sodium-ion, among many others, not to mention solid-state batteries, as a way to alleviate some of the material supply and cost problems. Tesla is moving toward the use of lithium-iron phosphate batteries, as is Ford for some of its vehicles. These batteries are cobalt free, which alleviates several sourcing issues.

Another solution may be recycling both EV batteries as well as the waste and rejects from battery manufacturing, which can run from 5 to 10 percent of production. Effective recycling of EV batteries “has the potential to reduce primary demand compared to total demand in 2040, by approximately 25 percent for lithium, 35 percent for cobalt and nickel, and 55 percent for copper,” according to a report by the University of Sydney’s Institute for Sustainable Futures.



While investments into creating EV battery recycling facilities have started, there is a looming question of whether there will be enough battery-factory scrap and other lithium-ion battery waste to keep these facilities operational until enough end-of-life batteries arrive to make them profitable. Lithium-ion battery-pack recycling is very time-consuming and expensive, often making it cheaper to mine lithium than to recycle it. Recycling low- or no-cobalt lithium batteries, which is the direction many automakers are taking, may also prove unprofitable.

An additional concern is that EV batteries, once no longer useful for propelling the EV, have years of life left in them. They can be refurbished, rebuilt, and reused in EVs, or repurposed into storage devices for homes, businesses, or the grid. Whether it will make economic sense to do either at scale versus recycling them remains to be seen.

As Howard Nusbaum, the administrator of the National Salvage Vehicle Reporting Program (NSVRP), succinctly puts it: “There is no recycling, and no EV-recycling industry, if there is no economic basis for one.”

In the next article in the series, we will look at whether the grid can handle tens of millions of EVs.


Match ID: 18 Score: 2.14 source: spectrum.ieee.org age: 8 days
qualifiers: 2.14 carbon

Why Your Organization Should Join the IEEE Standards Association
Fri, 18 Nov 2022 19:00:01 +0000


The global business landscape is constantly evolving. Digital transformation—compounded by the challenges of globalization, supply-chain stability, demographic shifts, and climate change—is pressuring companies and government agencies to innovate and safely deploy sustainable technologies.

As digital transformation continues, the pervasive growth of technology increasingly intersects with industry, government, and societal interests. Companies and organizations need access to technologies that can enhance efficiencies, productivity, and competitive advantage.


Governments seek influence over emerging technologies to preserve economic interests, advance global trade, and protect their citizens. Consumers are demanding more transparency regarding organizational motives, practices, and processes.

For those and other reasons, new types of stakeholders are seeking a voice in the technology standardization process.

How organizations benefit from developing standards

The need is evidenced in the membership gains at the IEEE Standards Association. IEEE SA membership for organizations, also known as entity membership, has increased by more than 150 percent in the past six years. Academic institutions, government agencies, and other types of organizations now account for more than 30 percent of the member base.

Entity membership offers the ability to help shape technology development and ensure your organization’s interests are represented in the standards development process. Other benefits include balloting privileges, leadership eligibility, and networking opportunities.

IEEE SA welcomes different types of organizations because they bring varied perspectives and they voice concerns that need to be addressed during the standards development process. Engaging diverse viewpoints from companies of all sizes and types also helps to identify and address changing market needs.

From a geographic standpoint, IEEE SA welcomes participation from all regions of the world. Diverse perspectives and contributions to the development cycle enable innovation to be shared and realized by all stakeholders.

Programs on blockchain, IoT, and other emerging technology

IEEE SA has introduced new industry-engagement programs such as open-source and industry-alliance offerings designed to speed innovation and adoption. In addition, industry participants have access to the full IEEE SA ecosystem of programs and services including technology incubation, pre-standardization work, standards development, and conformity assessment activities. Training and marketing tools support working groups at every stage of the process.

An increasing number of new standards projects from emerging technology areas have created a more robust and diversified portfolio of work. The technologies include artificial intelligence and machine learning, blockchain and distributed ledger technologies, quantum computing, cloud computing, the Internet of Things, smart cities, smart factories and online gaming. There is also more participation from the health care, automotive, and financial services sectors.

IEEE SA has grown and evolved its programs to address market needs, but its purpose has not changed. The organization is focused on empowering innovators to raise the world’s standards for the benefit of humanity.

Those innovators might be individuals or organizations looking to make a difference in the world, but it can be accomplished only when we all work together.

Learn more about IEEE SA membership for organizations and how your organization can play a key role in advancing future technologies.


Match ID: 19 Score: 2.14 source: spectrum.ieee.org age: 9 days
qualifiers: 2.14 climate change

India’s First Private Space Rocket Blasts Off
Fri, 18 Nov 2022 17:51:56 +0000


A rocket built by Indian startup Skyroot has become the country’s first privately developed launch vehicle to reach space, following a successful maiden flight earlier today. The suborbital mission is a major milestone for India’s private space industry, say experts, though more needs to be done to nurture the fledgling sector.

The Vikram-S rocket, named after the founder of the Indian space program, Vikram Sarabhai, lifted off from the Indian Space Research Organization’s (ISRO) Satish Dhawan Space Centre, on India’s east coast, at 11:30 a.m. local time (1 a.m. eastern time). It reached a peak altitude of 89.5 kilometers (55.6 miles), crossing the 80-km line that NASA counts as the boundary of space, but falling just short of the 100 km recognized by the Fédération Aéronautique Internationale.

In the longer run, India’s space industry has ambitions of capturing a significant chunk of the global launch market.

Pawan Kumar Chandana, cofounder of the Hyderabad-based startup, says the success of the launch is a major victory for India’s nascent space industry, but the buildup to the mission was nerve-racking. “We were pretty confident on the vehicle, but, as you know, rockets are very notorious for failure,” he says. “Especially in the last 10 seconds of countdown, the heartbeat was racing up. But once the vehicle had crossed the launcher and then went into the stable trajectory, I think that was the moment of celebration.”

At just 6 meters (20 feet) long and weighing only around 550 kilograms (0.6 tonnes), the Vikram-S is not designed for commercial use. Today’s mission, called Prarambh, which means “the beginning” in Sanskrit, was designed to test key technologies that will be used to build the startup’s first orbital rocket, the Vikram I. The rocket will reportedly be capable of lofting as much as 480 kg up to a 500-km altitude and is slated for a maiden launch next October.

Skyroot cofounder Pawan Kumar Chandana standing in front of the Vikram-S rocket at the Satish Dhawan Space Centre, on the east coast of India. Skyroot

In particular, the mission has validated Skyroot’s decision to go with a novel all-carbon fiber structure to cut down on weight, says Chandana. It also allowed the company to test 3D-printed thrusters, which were used for spin stabilization in Vikram-S but will power the upper stages of its later rockets. Perhaps the most valuable lesson, though, says Chandana, was the complexity of interfacing Skyroot's vehicle with ISRO’s launch infrastructure. “You can manufacture the rocket, but launching it is a different ball game,” he says. “That was a great learning experience for us and will really help us accelerate our orbital vehicle.”

Skyroot is one of several Indian space startups looking to capitalize on recent efforts by the Indian government to liberalize its highly regulated space sector. Due to the dual-use nature of space technology, ISRO has historically had a government-sanctioned monopoly on most space activities, says Rajeswari Pillai Rajagopalan, director of the Centre for Security, Strategy and Technology at the Observer Research Foundation think tank, in New Delhi. While major Indian engineering players like Larsen & Toubro and Godrej Aerospace have long supplied ISRO with components and even entire space systems, the relationship has been one of a supplier and vendor, she says.

But in 2020, Finance Minister Nirmala Sitharaman announced a series of reforms to allow private players to build satellites and launch vehicles, carry out launches, and provide space-based services. The government also created the Indian National Space Promotion and Authorisation Centre (InSpace), a new agency designed to act as a link between ISRO and the private sector, and affirmed that private companies would be able to take advantage of ISRO’s facilities.

The first launch of a private rocket from an ISRO spaceport is a major milestone for the Indian space industry, says Rajagopalan. “This step itself is pretty crucial, and it’s encouraging to other companies who are looking at this with a lot of enthusiasm and excitement,” she says. But more needs to be done to realize the government’s promised reforms, she adds. The Space Activities Bill that is designed to enshrine the country’s space policy in legislation has been languishing in draft form for years, and without regulatory clarity, it’s hard for the private sector to justify significant investments. “These are big, bold statements, but these need to be translated into actual policy and regulatory mechanisms,” says Rajagopalan.

Skyroot’s launch undoubtedly signals the growing maturity of India’s space industry, says Saurabh Kapil, associate director in PwC’s space practice. “It’s a critical message to the Indian space ecosystem, that we can do it, we have the necessary skill set, we have those engineering capabilities, we have those manufacturing or industrialization capabilities,” he says.

The Vikram-S rocket blasting off from the Satish Dhawan Space Centre, on the east coast of India. Skyroot

However, crossing this technical milestone is only part of the challenge, he says. The industry also needs to demonstrate a clear market for the kind of launch vehicles that companies like Skyroot are building. While private players are showing interest in launching small satellites for applications like agriculture and infrastructure monitoring, he says, these companies will be able to build sustainable businesses only if they are allowed to compete for more lucrative government and defense-sector contracts.

In the longer run, though, India’s space industry has ambitions of capturing a significant chunk of the global launch market, says Kapil. ISRO has already developed a reputation for both reliability and low cost—its 2014 mission to Mars cost just US $74 million, one-ninth the cost of a NASA Mars mission launched the same week. That is likely to translate to India’s private space industry, too, thanks to a considerably lower cost of skilled labor, land, and materials compared with those of other spacefaring nations, says Kapil. “The optimism is definitely there that because we are low on cost and high on reliability, whoever wants to build and launch small satellites is largely going to come to India,” he says.


Match ID: 20 Score: 2.14 source: spectrum.ieee.org age: 9 days
qualifiers: 2.14 carbon

COP27: Lack of women at negotiations raises concern
Wed, 16 Nov 2022 01:01:11 GMT
Women barely feature in negotiations in Egypt despite bearing the brunt of climate change.
Match ID: 21 Score: 2.14 source: www.bbc.co.uk age: 12 days
qualifiers: 2.14 climate change

The Infinite Cloud Is a Fantasy
Tue, 15 Nov 2022 14:00:00 +0000
It's all too easy to believe in the illusion of neverending data storage and streaming. But it's destroying the natural world.
Match ID: 22 Score: 2.14 source: www.wired.com age: 12 days
qualifiers: 2.14 climate change

The EV Transition Explained
Sun, 13 Nov 2022 14:17:59 +0000


From the outside, there is little to tell a basic Ford XL ICE F-150 from the electric Ford PRO F-150 Lightning. Exterior changes could pass for a typical model-year refresh. While there are LED headlight and rear-light improvements along with a more streamlined profile, the Lightning’s cargo box is identical to that of an ICE F-150, complete with tailgate access steps and a jobsite ruler. The Lightning’s interior also has a familiar feel.

But when you pop the Lightning’s hood, you find that the internal combustion engine has gone missing. In its place is a front trunk (“frunk”), while concealed beneath is the new skateboard frame with its dual electric motors (one for each axle) and a big 98-kilowatt-hour standard (and 131-kWh extended-range) battery pack. The combination permits the Lightning to travel 230 miles (370 kilometers) without recharging and go from 0 to 60 miles per hour in 4.5 seconds, making it the fastest F-150 available despite its much heavier weight.

Invisible, too, are the Lightning’s sophisticated computing and software systems. The 2016 ICE F-150 reportedly had about 150 million lines of code. The Lightning’s software suite may even be larger than its ICE counterpart (Ford will not confirm this). The Lightning replaces the Ford F-150 ICE-related software in the electronic control units (ECUs) with new “intelligent” software and systems that control the main motors, manage the battery system, and provide charging information to the driver.

The EV Transition Explained


This is the first in a series of articles presenting just some of the technological and social challenges in moving from vehicles with internal-combustion engines to electric vehicles, challenges that must be addressed at scale before the EV transition can succeed. Each challenge entails a multitude of interacting systems, subsystems, sub-subsystems, and so on. In reviewing each article, readers should bear in mind Nobel Prize–winning physicist Richard Feynman’s admonition: “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.”

Ford says the Lightning’s software will identify nearby public charging stations and tell drivers when to recharge. To increase the accuracy of the range calculation, the software will draw upon operational data communicated from other Lightnings, which Ford will dynamically capture, analyze, and feed back to the truck.
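Ford has not published how this calculation works, but a fleet-informed range estimate might blend the truck’s own recent consumption with the fleet average reported for similar conditions, roughly as in this sketch (all names, weights, and numbers are hypothetical):

```python
# Hypothetical sketch of a fleet-informed range estimate; not Ford's code.
def estimate_range_km(battery_kwh_remaining: float,
                      own_kwh_per_km: float,
                      fleet_kwh_per_km: float,
                      fleet_weight: float = 0.3) -> float:
    """Blend this truck's recent consumption with the fleet average
    reported for similar temperature, load, and route conditions."""
    blended = (1 - fleet_weight) * own_kwh_per_km + fleet_weight * fleet_kwh_per_km
    return battery_kwh_remaining / blended

# Example: 80 kWh left, truck averaging 0.50 kWh/km, fleet reporting 0.55.
print(f"{estimate_range_km(80, 0.50, 0.55):.0f} km")  # -> 155 km
```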

For executives, however, Lightning’s software is not only a big consumer draw but also among the biggest threats to its success. Ford CEO Jim Farley told the New York Times that software bugs worry him most. To mitigate the risk, Ford has incorporated an over-the-air (OTA) software-update capability for both bug fixes and feature upgrades. Yet with an incorrect setting in the Lightning’s tire pressure monitoring system requiring a software fix only a few weeks after its initial delivery, and with some new Ford Mustang Mach-Es recalled because of misconfigured software caused by a “service update or as an over-the-air update,” Farley’s worries probably won’t be soothed for some time.

Ford calls the Lightning a “Model T moment for the 21st century” and the company's US $50 billion investment in EVs is a bet-the-company proposition. Short-term success looks likely, as Ford closed Lightning preorders after reaching 200,000 and with sales expectations of 150,000 a year by 2024.

The F-150 Lightning’s front trunk (also known as a frunk) helps this light-duty electric pickup haul even more. Ford

However, long-term success is not guaranteed. “Ford is walking a tightrope, trying at the same time to convince everyone that EVs are the same as ICE vehicles yet different,” says University of Michigan professor emeritus John Leslie King, who has long studied the auto industry. Ford and other automakers will need to convince tens of millions of customers to switch to EVs to meet the Biden Administration’s decarbonization goal of making non-ICE vehicles 50 percent of new auto sales by 2030.

King points out that neither Ford nor other automakers can forever act like EVs are merely interchangeable with—but more ecofriendly than—their ICE counterparts. As EVs proliferate at scale, they operate in a vastly different technological, political, and social ecosystem than ICE vehicles. The core technologies and requisite expertise, supply-chain dependencies, and political alliances are different. The expectations of and about EV owners, and their agreement to change their lifestyles, also differ significantly.

Indeed, the challenges posed by the transition from ICE vehicles to EVs at scale are significantly larger in scope and more complex than the policymakers setting the regulatory timeline appreciate. The systems-engineering task alone is enormous, with countless interdependencies that are outside policymakers' control, and resting on optimistic assumptions about promising technologies and wished-for changes in human behavior. The risk of getting it wrong, and the negative environmental and economic consequences that would result, are high. In this series, we will break down the myriad infrastructure, policy, and social challenges involved, drawing on discussions with numerous industry insiders and industry watchers. Let's take a look at some of the elemental challenges blocking the road ahead for EVs.

The soft car

For Ford and the other automakers that have shaped the ICE vehicle ecosystem for more than a century, ultimate success is beyond the reach of the traditional political, financial, and technological levers they once controlled. Renault chief executive Luca de Meo, for example, is quoted in the Financial Times as saying that automakers must recognize that “the game has changed,” and they will “have to play by new rules” dictated by the likes of mining and energy companies.

One reason for the new rules, observes professor Deepak Divan, the director of the Center for Distributed Energy at Georgia Tech, is that the EV transition is “a subset of the energy transition” away from fossil fuels. On the other hand, futurist Peter Schwartz contends that the entire electric system is part of the EV supply chain. These alternative framings highlight the strong codependencies involved. Consequently, automakers will be competing against not only other EV manufacturers but also numerous players involved in the energy transition aiming to grab the same scarce resources and talent.

“Ford is walking a tightrope, trying at the same time to convince everyone that EVs are the same as ICE vehicles yet different.” —John Leslie King

EVs represent a new class of cyberphysical systems that unify the physical with information technology, allowing them to sense, process, act, and communicate in real time within a large transportation ecosystem, as I have noted in detail elsewhere. While computing in ICE vehicles typically optimizes a car’s performance at the time of sale, EV-based cyberphysical systems are designed to evolve as they are updated and upgraded, postponing their obsolescence.

“As an automotive company, we’ve been trained to put vehicles out when they’re perfect,” Ford’s Farley told the New York Times. “But with software, you can change it with over-the-air updates.” This allows new features to be introduced in existing models instead of waiting for next year’s model to appear. Farley sees Ford spending much less effort on changing vehicles’ physical properties and devoting more to upgrading their software capabilities in the future.
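To make the “soft car” idea concrete, here is a minimal sketch of the kind of check-verify-install logic an OTA client runs. It is an illustration only, not Ford's actual implementation: the manifest fields, endpoint URL, and function names are all hypothetical.

    import hashlib
    import json
    import urllib.request

    MANIFEST_URL = "https://updates.example-oem.com/truck/manifest.json"  # hypothetical endpoint

    def is_newer(offered: str, installed: str) -> bool:
        # Compare dotted version strings numerically, e.g. "3.10.2" > "3.9.8".
        return tuple(map(int, offered.split("."))) > tuple(map(int, installed.split(".")))

    def check_for_update(installed_version: str):
        # Fetch the manifest describing the latest available build.
        with urllib.request.urlopen(MANIFEST_URL) as resp:
            manifest = json.load(resp)
        return manifest if is_newer(manifest["version"], installed_version) else None

    def verify_payload(manifest: dict, payload: bytes) -> bool:
        # Refuse to stage any image whose digest doesn't match the manifest.
        return hashlib.sha256(payload).hexdigest() == manifest["sha256"]

Production OTA systems typically go further, staging the new image on a spare partition and rolling back automatically if the vehicle fails to boot—one reason a bad setting, like the Lightning's tire-pressure parameter, can be corrected remotely rather than at a dealership.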

Systems engineering for holistic solutions

EV success at scale depends as much, if not more, on political decisions as on technical ones. Government decision-makers in the United States at both the state and federal level, for instance, have created EV market incentives and set increasingly aggressive dates to sunset ICE vehicle sales, regardless of whether the technological infrastructure needed to support EVs at scale actually exists. While public policy can set a direction, it does not guarantee that engineering results will be available when needed.

“A systems-engineering approach towards managing the varied and often conflicting interests of the many stakeholders involved will be necessary to find a workable solution.” —Chris Paredis

Having committed $1.2 trillion through 2030 so far toward decarbonizing the planet, automakers are understandably wary not only of the fast reconfiguration of the auto industry but also of the concurrent changes required in the energy, telecom, mining, recycling, and transportation industries that must succeed for their investments to pay off.

The EV transition is part of an unprecedented, planetary-wide, cyberphysical systems-engineering project with massive potential benefits as well as costs. Considering the sheer magnitude, interconnectedness, and uncertainties presented by the concurrent technological, political, and social changes necessary, the EV transition will undoubtedly be messy.

This chart from the Global EV Outlook 2021, IEA, Paris shows 2020 EV sales in the first column; in the second column, projected sales under current climate-mitigation policies; in the third column, projected sales under accelerated climate-mitigation policies.

“There is a lot that has to go right. And it won’t all go right,” observes Kristin Dziczek, former vice president of research at the Center for Automotive Research and now a policy analyst with the Federal Reserve Bank of Chicago. “We will likely stumble forward in some fashion,” but, she stresses, “it’s not a reason not to move forward.”

How many stumbles and how long the transition will take depend on whether the multitude of challenges involved are fully recognized and realistically addressed.

“Everyone needs to stop thinking in silos. It is the adjacency interactions that are going to kill you.” —Deepak Divan

“A systems-engineering approach towards managing the varied and often conflicting interests of the many stakeholders involved will be necessary to find a workable solution,” says Chris Paredis, the BMW Endowed Chair in Automotive Systems Integration at Clemson University. The range of engineering-infrastructure improvements needed to support EVs, for instance, “will need to be coordinated at a national/international level beyond what can be achieved by individual companies,” he states.

If the nitty-gritty but hard-to-solve issues are glossed over or ignored, or if EV expectations are hyped beyond the market’s capability to deliver, no one should be surprised by a backlash against EVs, making the transition more difficult.

Even a decade ago, legacy automakers did not believe that EVs—especially battery EVs (BEVs)—were a viable, scalable approach to transport decarbonization; Tesla proved otherwise. Its success at producing more than 3 million vehicles to date has shown that EVs are both technologically and economically feasible, at least for the luxury EV niche.

What has not yet been proven, but is widely assumed, is that BEVs can rapidly replace the majority of the current 1.3 billion-plus light-duty ICE vehicles. The interrelated challenges involving EV engineering infrastructure, policy, and societal acceptance, however, will test how well this assumption holds true.

Therefore, the successful transition to EVs at scale demands a “holistic approach,” emphasizes Georgia Tech’s Deepak Divan. “Everyone needs to stop thinking in silos. It is the adjacency interactions that are going to kill you.”

These adjacency issues involve numerous social-infrastructure obstacles that need to be addressed comprehensively along with the engineering issues, including the interactions and contradictions among them: the value and impacts of government EV incentives, the EV transition’s effects on employment, and the public’s willingness to change its lifestyle once it realizes that converting to EVs alone will not be enough to reach future decarbonization goals.

“We cannot foresee all the details needed to make the EV transition successful,” John Leslie King says. “While there’s a reason to believe we will get there, there’s less reason to believe we know the way. It is going to be hard.”

In the next article in the series, we will look at the complexities introduced by trading our dependence on oil for our dependence on batteries.


Match ID: 23 Score: 2.14 source: spectrum.ieee.org age: 14 days
qualifiers: 2.14 carbon

Climate change: Dimming Earth, mustard shortages and other odd side-effects
Sat, 12 Nov 2022 00:51:32 GMT
Changes to our planet's shine are just one of the stranger side effects of rising temperatures.
Match ID: 24 Score: 2.14 source: www.bbc.co.uk age: 16 days
qualifiers: 2.14 climate change

COP27: What Can Business Leaders Do to Fight Climate Change Now?
2022-11-09T00:00:00Z
The US government plans to spend $370 billion to cut greenhouse gases and expand renewable energy—its biggest investment yet. In the wake of COP27, we asked Harvard Business School faculty members how executives could seize this moment.
Match ID: 25 Score: 2.14 source: hbswk.hbs.edu age: 19 days
qualifiers: 2.14 climate change

Robotic Falcon Keeps Birds Away From Airports
Sun, 06 Nov 2022 14:00:00 +0000


Collisions with birds are a serious problem for commercial aircraft, costing the industry billions of dollars and killing thousands of animals every year. New research shows that a robotic imitation of a peregrine falcon could be an effective way to keep them out of flight paths.

Worldwide, so-called birdstrikes are estimated to cost the civil aviation industry almost US $1.4 billion annually. Nearby habitats are often deliberately made unattractive to birds, but airports also rely on a variety of deterrents designed to scare them away, such as loud pyrotechnics or speakers that play distress calls from common species.

However, the effectiveness of these approaches tends to decrease over time, as the birds get desensitized by repeated exposure, says Charlotte Hemelrijk, a professor on the faculty of science and engineering at the University of Groningen, in the Netherlands. Live hawks or blinding lasers are also sometimes used to disperse flocks, she says, but this is controversial as it can harm the animals, and keeping and training falcons is not cheap.

“The birds don’t distinguish [RobotFalcon] from a real falcon, it seems.”
—Charlotte Hemelrijk, University of Groningen

In an effort to find a more practical and lasting solution, Hemelrijk and colleagues designed a robotic peregrine falcon that can be used to chase flocks away from airports. The device is the same size and shape as a real hawk, and its fiberglass and carbon-fiber body has been painted to mimic the markings of its real-life counterpart.

Rather than flapping like a bird, the RobotFalcon relies on two small battery-powered propellers on its wings, which allows it to travel at around 30 miles per hour for up to 15 minutes at a time. A human operator controls the machine remotely from a hawk’s-eye perspective via a camera perched above the robot’s head.

To see how effective the RobotFalcon was at scaring away birds, the researchers tested it against a conventional quadcopter drone over three months of field testing, near the Dutch city of Workum. They also compared their results to 15 years of data collected by the Royal Netherlands Air Force that assessed the effectiveness of conventional deterrence methods such as pyrotechnics and distress calls.

Flock-herding Falcon Drone Patrols Airport Flight Paths youtu.be

In a paper published in the Journal of the Royal Society Interface, the team showed that the RobotFalcon cleared fields of birds faster and more effectively than the drone. It also kept birds away from fields longer than distress calls, the most effective of the conventional approaches.

There was no evidence of birds getting habituated to the RobotFalcon over three months of testing, says Hemelrijk, and the researchers also found that the birds exhibited behavior patterns associated with escaping from predators much more frequently with the robot than with the drone. “The way of reacting to the RobotFalcon is very similar to the real falcon,” says Hemelrijk. “The birds don’t distinguish it from a real falcon, it seems.”

Other attempts to use hawk-imitating robots to disperse birds have had less promising results, though. Morgan Drabik-Hamshare, a research wildlife biologist at the U.S. Department of Agriculture, and her colleagues published a paper in Scientific Reports last year that described how they pitted a robotic peregrine falcon with flapping wings against a quadcopter and a fixed-wing remote-controlled aircraft.

They found the robotic falcon was the least effective of the three at scaring away turkey vultures, with the quadcopter scaring the most birds off and the remote-controlled plane eliciting the quickest response. “Despite the predator silhouette, the vultures did not perceive the predator UAS [unmanned aircraft system] as a threat,” Drabik-Hamshare wrote in an email.

Zihao Wang, an associate lecturer at the University of Sydney, in Australia, who develops UAS for bird deterrence, says the RobotFalcon does seem to be effective at dispersing flocks. But he points out that its wingspan is nearly twice the diagonal length of the quadcopter it was compared with, so it presents a much larger silhouette from the birds’ perspective. The birds could therefore be reacting more to its size than its shape, and he would like to see the RobotFalcon compared with a similarly sized drone in the future.

The unique design also means the robot requires an experienced and specially trained operator, Wang adds, which could make it difficult to roll out widely. A potential solution could be to make the system autonomous, he says, but it’s unclear how easy this would be.

Hemelrijk says automating the RobotFalcon is probably not feasible, both due to strict regulations around the use of autonomous drones near airports as well as the sheer technical complexity. Their current operator is a falconer with significant experience in how hawks target their prey, she says, and creating an autonomous system that could recognize and target bird flocks in a similar way would be highly challenging.

But while the need for skilled operators is a limitation, Hemelrijk points out that most airports already have full-time staff dedicated to bird deterrence, who could be trained. And given the apparent lack of habituation and the ability to chase birds in a specific direction—so that they head away from runways—she thinks the robotic falcon could be a useful addition to their arsenal.


Match ID: 26 Score: 2.14 source: spectrum.ieee.org age: 21 days
qualifiers: 2.14 carbon

NASA, USAID Partnership Strengthens Global Development
Fri, 04 Nov 2022 16:08 EDT
NASA and the U.S. Agency for International Development (USAID) signed an agreement Friday strengthening the collaboration between the two agencies, including efforts that advance the federal response to climate change.
Match ID: 27 Score: 2.14 source: www.nasa.gov age: 23 days
qualifiers: 2.14 climate change

NASA Leaders to Participate in Annual Global Climate Conference
Fri, 04 Nov 2022 14:52 EDT
NASA will participate in the 27th United Nations Climate Change Conference of the Parties (COP27) in Sharm El Sheikh, Egypt, which begins Sunday, Nov. 6, and runs through Friday, Nov. 18. The COP27 summit brings together countries from around the world to increase ambition by implementing existing goals and strengthening commitments to solutions.
Match ID: 28 Score: 2.14 source: www.nasa.gov age: 23 days
qualifiers: 2.14 climate change

Cop27: the climate carnage we've faced this year – video
Thu, 03 Nov 2022 11:07:52 GMT

One by one, the grim scenarios climate scientists had outlined for the near future have been overtaken by events: extreme storms, droughts, floods and ice-sheet collapses whose sudden appearances have outstripped researchers’ worst predictions. Catastrophic climate change is happening more rapidly and with greater intensity than their grimmest warnings, it transpires.

With the 2022 global climate summit Cop 27 upon us, the Guardian looks back at how the climate crisis has affected communities around the world since the last meeting in Glasgow in 2021

Continue reading...
Match ID: 29 Score: 2.14 source: www.theguardian.com age: 24 days
qualifiers: 2.14 climate change

How gas is being rebranded as green – video
Thu, 03 Nov 2022 09:05:21 GMT

Is natural gas renewable? Is it a fossil fuel? A casual Google search for natural gas gives the impression that these questions are somehow up for debate. And while natural gas has helped reduce carbon emissions as it was widely adopted as a replacement for coal, it is now up against zero-emission energy such as wind and solar. So how did natural gas end up in the same bracket as renewables? Josh Toussaint-Strauss explores the lengths fossil fuel companies have gone to in order to try to convince consumers, voters and lawmakers that natural gas is somehow a clean energy source.

Continue reading...
Match ID: 30 Score: 2.14 source: www.theguardian.com age: 24 days
qualifiers: 2.14 carbon

NASA Invites Media to Launch of Water-Monitoring Satellite
Thu, 13 Oct 2022 17:19 EDT
Media accreditation is now open for the upcoming launch of the Surface Water and Ocean Topography (SWOT) satellite, a pathfinder mission about Earth’s water that will use new technology to address climate change and its impact on our environment.
Match ID: 31 Score: 2.14 source: www.nasa.gov age: 45 days
qualifiers: 2.14 climate change

MOXIE Shows How to Make Oxygen on Mars
Thu, 08 Sep 2022 15:27:59 +0000


Planning for the return journey is an integral part of the preparations for a crewed Mars mission. Astronauts will require about 50 tonnes of rocket propellant for the ascent vehicle that will lift them off the planet’s surface, including approximately 31 tonnes of oxygen. The less popular option is for crewed missions to carry the required oxygen themselves. But scientists are optimistic that it could instead be produced from the carbon dioxide–rich Martian atmosphere itself, using a system called MOXIE.

The Mars Oxygen ISRU (In-Situ Resource Utilization) Experiment is an 18-kilogram unit housed within the Perseverance rover on Mars. The unit is “the size of a toaster,” says Jeffrey Hoffman, professor of aerospace engineering at MIT. Its job is to electrochemically break down carbon dioxide collected from the Martian atmosphere into oxygen and carbon monoxide. It also tests the purity of the oxygen.

Between February 2021, when it arrived on Mars aboard the Perseverance, and the end of the year, MOXIE had several successful test runs. According to a review of the system by Hoffman and colleagues, published in Science Advances, it has demonstrated its ability to produce oxygen during both night and day, when temperatures can vary by over 100 ºC. The oxygen generation rate and purity also meet the requirements for producing rocket propellant and for breathing. The authors assert that a scaled-up version of MOXIE could produce the oxygen required for lift-off as well as for the astronauts to breathe.

Next question: How to power any oxygen-producing factories that NASA can land on Mars? Perhaps via NASA’s Kilopower fission reactors?

MOXIE is a first step toward a much larger and more complex system to support the human exploration of Mars. The researchers estimate a required generation rate of 2 to 3 kilograms per hour, compared with the current MOXIE rate of 6 to 8 grams per hour, to produce enough oxygen for lift-off for a crew arriving 26 months later. “So we’re talking about a system that’s a couple of hundred times bigger than MOXIE,” Hoffman says.

They calculate this rate accounting for eight months to get to Mars, followed by some time to set up the system. “We figure you'd probably have maybe 14 months to make all the oxygen.” Further, he says, the produced oxygen would have to be liquefied to be used as a rocket propellant, something the current version of MOXIE doesn’t do.
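Hoffman’s numbers are easy to sanity-check. Here is a back-of-the-envelope calculation in Python using only the figures quoted above (roughly 31 tonnes of oxygen and a 14-month production window); it is a sketch, not mission-design math:

    oxygen_kg = 31_000                    # ~31 tonnes of oxygen for the ascent vehicle
    window_hours = 14 * 30 * 24           # ~14 months of round-the-clock production
    required_rate = oxygen_kg / window_hours
    print(f"{required_rate:.1f} kg/h")    # ~3.1 kg/h, matching the 2-3 kg/h estimate

    moxie_rate = 0.007                    # MOXIE's current 6-8 g/h output, taken as ~7 g/h
    print(f"{required_rate / moxie_rate:.0f}x")  # a several-hundredfold scale-up, Hoffman's order of magnitude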

MOXIE also currently faces several design constraints because, says Hoffman, a former astronaut, “our only ride to Mars was inside the Perseverance rover.” This limited the amount of power available to operate the unit, the amount of heat it could produce, and its volume and mass.

“MOXIE does not work nearly as efficiently as a stand-alone system that was specifically designed would,” says Hoffman. Most of the time, it’s turned off. “Every time we want to make oxygen, we have to heat it up to 800 ºC, so most of the energy goes into heating it up and running the compressor, whereas in a well-designed stand-alone system, most of the energy will go into the actual electrolysis, into actually producing the oxygen.”

However, there are still many kinks to iron out for the scaling-up process. To begin with, any oxygen-producing system will need lots of power. Hoffman thinks nuclear power is the most likely option, maybe NASA’s Kilopower fission reactors. The setup and the cabling would certainly be challenging, he says. “You’re going to have to launch all of these nuclear reactors, and of course, they’re not going to be in exactly the same place as the [other] units,” he says. “So, robotically, you’re going to have to connect the electrical cables to bring power to the oxygen-producing unit.”

Then there are the solid oxide electrolysis units, which Hoffman points out are carefully machined systems. Fortunately, the company that makes them, OxEon, has already designed, built, and tested a full-scale unit, a hundred times bigger than the one on MOXIE. “Several of those units would be required to produce oxygen at the quantities that we need,” Hoffman says.

He also adds that at present, there is no redundancy built into MOXIE. If any part fails, the whole system dies. “If you’re counting on a system to produce oxygen for rocket propellant and for breathing, you need very high reliability, which means you’re going to need quite a few redundant units.”

Moreover, the system has to be pretty much autonomous, Hoffman says. “It has to be able to monitor itself, run itself.” For testing purposes, every time MOXIE is powered up, there is plenty of time to plan. A full-scale MOXIE system, though, would have to run continuously, and for that it has to be able to adjust automatically to changes in the Martian atmosphere, which can vary by a factor of two over a year, as well as to the temperature differences between night and day.


Match ID: 32 Score: 2.14 source: spectrum.ieee.org age: 80 days
qualifiers: 2.14 carbon

Inside the Universe Machine: The Webb Space Telescope’s Staggering Vision
Wed, 06 Jul 2022 13:00:00 +0000


For a deep dive into the engineering behind the James Webb Space Telescope, see our collection of posts here.

“Build something that will absolutely, positively work.” This was the mandate from NASA for designing and building the James Webb Space Telescope—at 6.5 meters wide the largest space telescope in history. Last December, JWST launched famously and successfully to its observing station out beyond the moon. And now according to NASA, as soon as next week, the JWST will at long last begin releasing scientific images and data.

Mark Kahan, on JWST’s product integrity team, recalls NASA’s engineering challenge as a call to arms for a worldwide team of thousands that set out to create one of the most ambitious scientific instruments in human history. Kahan—chief electro-optical systems engineer at Mountain View, Calif.–based Synopsys—and many others in JWST’s “pit crew” (as he calls the team) drew hard lessons from three decades ago, having helped repair another world-class space telescope with a debilitating case of flawed optics. Of course the Hubble Space Telescope is in low Earth orbit, and so a special space-shuttle mission to install corrective optics (as happened in 1993) was entirely possible.

Not so with the JWST.

The meticulous care NASA demanded of JWST’s designers is all the more a necessity because Webb is well out of reach of repair crews. Its mission is to study the infrared universe, and that requires shielding the telescope and its sensors from both the heat of sunlight and the infrared glow of Earth. A good place to do that without getting too far from Earth is an empty patch of interplanetary space 1.5 million kilometers away (well beyond the moon’s orbit) near a spot physicists call the second Lagrange point, or L2.

The pit crew’s job was “down at the detail level, error checking every critical aspect of the optical design,” says Kahan. Having learned the hard way from Hubble, the crew insisted that every measurement on Webb’s optics be made in at least two different ways that could be checked and cross-checked. Diagnostics were built into the process, Kahan says, so that “you could look at them to see what to kick” to resolve any discrepancies. Their work had to be done on the ground, but their tests had to assess how the telescope would work in deep space at cryogenic temperatures.

Three New Technologies for the Main Mirror

Superficially, Webb follows the design of all large reflecting telescopes. A big mirror collects light from stars, galaxies, nebulae, planets, comets, and other astronomical objects—and then focuses those photons onto a smaller secondary mirror, which sends the light to a third mirror that ultimately directs it to instruments that record images and spectra.

Webb’s 6.5-meter primary mirror is the first segmented mirror to be launched into space. All the optics had to be made on the ground at room temperature but were deployed in space and operated at 30 to 55 degrees above absolute zero. “We had to develop three new technologies” to make it work, says Lee D. Feinberg of the NASA Goddard Space Flight Center, the optical telescope element manager for Webb for the past 20 years.

The longest wavelengths that Hubble has to contend with are 2.5 micrometers, whereas Webb is built to observe infrared light that stretches to 28 μm in wavelength. Compared with Hubble, whose primary mirror is a circle with an area of 4.5 square meters, “[Webb’s primary mirror] had to be 25 square meters,” says Feinberg. Webb also “needed segmented mirrors that were lightweight, and its mass was a huge consideration,” he adds. No single-component mirror that could provide the required resolution would have fit on the Ariane 5 rocket that launched JWST. That meant the mirror would have to be made in pieces, assembled, folded, secured to withstand the stress of launch, then unfolded and deployed in space to create a surface within tens of nanometers of the shape specified by the designers.

The James Webb Space Telescope [left] and the Hubble Space Telescope side by side—with Hubble’s 2.4-meter-diameter mirror versus Webb’s array of hexagonal mirrors making a 6.5-meter-diameter light-collecting area. NASA Goddard Space Flight Center

NASA and the U.S. Air Force, which has its own interests in large lightweight space mirrors for surveillance and focusing laser energy, teamed up to develop the technology. The two agencies narrowed eight submitted proposals down to two approaches for building JWST’s mirrors: one based on low-expansion glass made of a mixture of silicon and titanium dioxides similar to that used in Hubble and the other the light but highly toxic metal beryllium. The most crucial issue came down to how well the materials could withstand temperature changes from room temperature on the ground to around 50 K in space. Beryllium won because it could fully release stress after cooling without changing its shape, and it’s not vulnerable to the cracking that can occur in glass. The final beryllium mirror was a 6.5-meter array of 18 hexagonal beryllium mirrors, each weighing about 20 kilograms. The weight per unit area of JWST’s mirror was only 10 percent of that in Hubble. A 100-nanometer layer of pure gold makes the surface reflect 98 percent of incident light from JWST’s main observing band of 0.6 to 28.5 μm. “Pure silver has slightly higher reflectivity than pure gold, but gold is more robust,” says Feinberg. A thin layer of amorphous silica protects the metal film from surface damage.
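Those mass figures are mutually consistent, as a quick check shows. The Hubble primary-mirror mass used below (about 830 kg) is a commonly cited figure rather than one given in this article, so treat the result as approximate:

    webb_areal = 18 * 20 / 25            # 18 segments x ~20 kg over 25 m^2 -> ~14 kg/m^2
    hubble_areal = 830 / 4.5             # ~830 kg over 4.5 m^2 -> ~184 kg/m^2 (assumed mass)
    print(webb_areal / hubble_areal)     # ~0.08, in line with the "only 10 percent" quoted above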

In addition, a wavefront-sensing control system keeps mirror segment surfaces aligned to within tens of nanometers. Built on the ground, the system is expected to keep mirror alignment stabilized throughout the telescope’s operational life. A backplane kept at a temperature of 35 K holds all 2.4 tonnes of the telescope and instruments rock-steady to within 32 nm while maintaining them at cryogenic temperatures during observations.

The JWST backplane, the “spine” that supports the entire hexagonal mirror structure and carries more than 2,400 kg of hardware, is readied for assembly to the rest of the telescope. NASA/Chris Gunn

Hubble’s amazing, long-exposure images of distant galaxies are possible through the use of gyroscopes and reaction wheels. The gyroscopes are used to sense unwanted rotations, and reaction wheels are used to counteract them.

But the gyroscopes used on Hubble have had a bad track record and have had to be replaced repeatedly. Only three of Hubble’s six gyros remain operational today, and NASA has devised plans for operating with one or two gyros at reduced capability. Hubble also includes reaction wheels and magnetic torquers, used to maintain its orientation when needed or to point at different parts of the sky.

Webb uses reaction wheels similarly to turn across the sky, but instead of using mechanical gyros to sense direction, it uses hemispherical resonator gyroscopes, which have no moving parts. Webb also has a small fine-steering mirror in the optical path, which can tilt over an angle of just 5 arc seconds. Those very fine adjustments of the light path into the instruments keep the telescope on target. “It’s a really wonderful way to go,” says Feinberg, adding that it compensates for small amounts of jitter without having to move the whole 6-tonne observatory.

Instruments

Other optics distribute light from the fine-steering mirror among four instruments, two of which can observe simultaneously. Three instruments have sensors that observe wavelengths of 0.6 to 5 μm, which astronomers call the near-infrared. The fourth, called the Mid-InfraRed Instrument (MIRI), observes what astronomers call the mid-infrared spectrum, from 5 to 28.5 μm. Different instruments are needed because sensors and optics have limited wavelength ranges. (Optical engineers may blanch slightly at astronomers’ definitions of what constitutes the near- and mid-infrared wavelength ranges. These two groups simply have differing conventions for labeling the various regimes of the infrared spectrum.)

Mid-infrared wavelengths are crucial for observing young stars and planetary systems and the earliest galaxies, but they also pose some of the biggest engineering challenges. Namely, everything on Earth and planets out to Jupiter glow in the mid-infrared. So for JWST to observe distant astronomical objects, it must avoid recording extraneous mid-infrared noise from all the various sources inside the solar system. “I have spent my whole career building instruments for wavelengths of 5 μm and longer,” says MIRI instrument scientist Alistair Glasse of the Royal Observatory, in Edinburgh. “We’re always struggling against thermal background.”

Mountaintop telescopes can see the near-infrared, but observing the mid-infrared sky requires telescopes in space. However, the thermal radiation from Earth and its atmosphere can cloud their view, and so can the telescopes themselves unless they are cooled far below room temperature. An ample supply of liquid helium and an orbit far from Earth allowed the Spitzer Space Telescope’s primary observing mission to last for five years, but once the last of the cryogenic fluid evaporated in 2009, its observations were limited to wavelengths shorter than 5 μm.

Webb has an elaborate solar shield to block sunlight, and an orbit 1.5 million km from Earth that can keep the telescope to below 55 K, but that’s not good enough for low-noise observations at wavelengths longer than 5 μm. The near-infrared instruments operate at 40 K to minimize thermal noise. But for observations out to 28.5 μm, MIRI uses a specially developed closed-cycle, helium cryocooler to keep MIRI cooled below 7 K. “We want to have sensitivity limited by the shot noise of astronomical sources,” says Glasse. (Shot noise occurs when optical or electrical signals are so feeble that each photon or electron constitutes a detectable peak.) That will make MIRI 1,000 times as sensitive in the mid-infrared as Spitzer.
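The shot-noise limit follows directly from Poisson counting statistics; as a brief aside, for a source delivering an average of $N$ photons during an exposure, the fluctuation is $\sqrt{N}$, so the best achievable signal-to-noise ratio is

$$\mathrm{SNR} = \frac{N}{\sqrt{N}} = \sqrt{N}.$$

An instrument is shot-noise limited when this unavoidable term, rather than thermal background or detector noise, dominates—which is why MIRI must be cooled so aggressively.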

Another challenge is the limited transparency of optical materials in the mid-infrared. “We use reflective optics wherever possible,” says Glasse, but they also pose problems, he adds. “Thermal contraction is a big deal,” he says, because the instrument was made at room temperature but is used at 7 K. To keep thermal changes uniform throughout MIRI, they made the whole structure of gold-coated aluminum lest other metals cause warping.

Detectors are another problem. Webb’s near-infrared sensors use mercury cadmium telluride photodetectors with a resolution of 2,048 x 2,048 pixels. This resolution is widely used at wavelengths below 5 μm, but sensing at MIRI’s longer wavelengths required exotic detectors that are limited to offering only 1,024 x 1,024 pixels.

Glasse says commissioning “has gone incredibly well.” Although some stray light has been detected, he says, “we are fully expecting to meet all our science goals.”

NIRCam Aligns the Whole Telescope

The near-infrared detectors and optical materials used for observing at wavelengths shorter than 5 μm are much more mature than those for the mid-infrared, so the Near-Infrared Camera (NIRCam) does double duty by both recording images and aligning all the optics in the whole telescope. That alignment was the trickiest part of building the instrument, says NIRCam principal investigator Marcia Rieke of the University of Arizona.

Alignment means ensuring that all the light collected by the primary mirror ends up at the right place in the final image. That’s crucial for Webb, because it has 18 separate segments whose images have to overlay perfectly in the final image, and because all those segments were built on the ground at room temperature but operate at cryogenic temperatures in space at zero gravity. When NASA recorded a test image of a single star after Webb first opened its primary mirror, it showed 18 separate bright spots, one from each segment. When alignment was completed on 11 March, the image from NIRCam showed a single star with six spikes caused by diffraction.

Even when performing instrumental calibration tasks, JWST couldn’t help but showcase its stunning sensitivity to the infrared sky. The central star is what telescope technicians used to align JWST’s mirrors. But notice the distant galaxies and stars that photobombed the image too! NASA/STScI

Building a separate alignment system would have added to both the weight and cost of Webb, Rieke realized, and in the original 1995 plan for the telescope she proposed designing NIRCam so it could align the telescope optics once it was up in space as well as record images. “The only real compromise was that it required NIRCam to have exquisite image quality,” says Rieke, wryly. From a scientific standpoint, she adds, using the instrument to align the telescope optics “is great because you know you’re going to have good image quality and it’s going to be aligned with you.” Alignment might be just a tiny bit off for other instruments. In the end, it took a team at Lockheed Martin to develop the computational tools to account for all the elements of thermal expansion.

Escalating costs and delays had troubled Webb for years. But for Feinberg, “commissioning has been a magical five months.” It began with the sight of sunlight hitting the mirrors. The segmented mirror deployed smoothly, and after the near-infrared cameras cooled, the mirrors focused one star into 18 spots, then aligned them to put the spots on top of each other. “Everything had to work to get it to [focus] that well,” he says. It’s been an intense time, but for Feinberg, a veteran of the Hubble repair mission, commissioning Webb was “a piece of cake.”

NASA announced that between May 23rd and 25th, one segment of the primary mirror had been dinged by a micrometeorite bigger than the agency had expected when it analyzed the potential results of such impacts. “Things do degrade over time,” Feinberg said. But he added that Webb had been engineered to minimize damage, and NASA said the event had not affected Webb’s operation schedule.

Corrections 26-28 July 2022: The story was updated a) to reflect the fact that the Lagrange point L2 where Webb now orbits is not that of the "Earth-moon system" (as the story had originally reported) but rather the Earth-sun system
and b) to correct misstatements in the original posting about Webb's hardware for controlling its orientation.

Corrections 12 Aug. 2022: Alistair Glasse's name was incorrectly spelled in a previous version of this story, as was NIRCam (which we'd spelled as NIRcam); Webb's tertiary mirror (we'd originally reported only its primary and secondary mirrors) was also called out in this version.

This article appears in the September 2022 print issue as “Inside the Universe Machine.”


Match ID: 33 Score: 2.14 source: spectrum.ieee.org age: 144 days
qualifiers: 2.14 toxic

NASA to Industry: Let’s Develop Flight Tech to Reduce Carbon Emissions
Wed, 29 Jun 2022 14:25 EDT
NASA announced Wednesday the agency is seeking partners to develop technologies needed to shape a new generation of lower-emission, single-aisle airliners that passengers could see in airports in the 2030s.
Match ID: 34 Score: 2.14 source: www.nasa.gov age: 151 days
qualifiers: 2.14 carbon

U.N. Kills Any Plans to Use Mercury as a Rocket Propellant
Tue, 19 Apr 2022 18:00:01 +0000


A recent United Nations provision has banned the use of mercury in spacecraft propellant. Although no private company has actually used mercury propellant in a launched spacecraft, the possibility was alarming enough—and the dangers extreme enough—that the ban was enacted just a few years after one U.S.-based startup began toying with the idea. Had the company gone through with its intention to sell mercury propellant thrusters to some of the companies building massive satellite constellations over the coming decade, it would have resulted in Earth’s upper atmosphere being laced with mercury.

Mercury is a neurotoxin. It’s also bio-accumulative, which means it’s absorbed by the body at a faster rate than the body can remove it. The most common way to get mercury poisoning is through eating contaminated seafood. “It’s pretty nasty,” says Michael Bender, the international coordinator of the Zero Mercury Working Group (ZMWG). “Which is why this is one of the very few instances where the governments of the world came together pretty much unanimously and ratified a treaty.”

Bender is referring to the 2013 Minamata Convention on Mercury, a U.N. treaty named for a city in Japan whose residents suffered from mercury poisoning from a nearby chemical factory for decades. Because mercury pollutants easily find their way into the oceans and the atmosphere, it’s virtually impossible for one country to prevent mercury poisoning within its borders. “Mercury—it’s an intercontinental pollutant,” Bender says. “So it required a global treaty.”

Today, the only remaining permitted uses for mercury are in fluorescent lighting and dental amalgams, and even those are being phased out. Mercury is otherwise found as a by-product of other processes, such as the burning of coal. But then a company hit on the idea to use it as a spacecraft propellant.

In 2018, an employee at Apollo Fusion approached the Public Employees for Environmental Responsibility (PEER), a nonprofit that investigates environmental misconduct in the United States. The employee—who has remained anonymous—alleged that the Mountain View, Calif.–based space startup was planning to build and sell thrusters that used mercury propellant to multiple companies building low Earth orbit (LEO) satellite constellations.

Four industry insiders ultimately confirmed that Apollo Fusion was building thrusters that utilized mercury propellant. Apollo Fusion, which was acquired by rocket manufacturing startup Astra in June 2021, insisted that the composition of its propellant mixture should be considered confidential information. The company withdrew its plans for a mercury propellant in April 2021. Astra declined to respond to a request for comment for this story.

Apollo Fusion wasn’t the first to consider using mercury as a propellant. NASA originally tested it in the 1960s and 1970s with two Space Electric Propulsion Tests (SERT), one of which was sent into orbit in 1970. Although the tests demonstrated mercury’s effectiveness as a propellant, the same concerns over the element’s toxicity that have seen it banned in many other industries halted its use by the space agency as well.

“I think it just sort of fell off a lot of folks’ radars,” says Kevin Bell, the staff counsel for PEER. “And then somebody just resurrected the research on it and said, ‘Hey, other than the environmental impact, this was a pretty good idea.’ It would give you a competitive advantage in what I imagine is a pretty tight, competitive market.”

That’s presumably why Apollo Fusion was keen on using it in their thrusters. Apollo Fusion as a startup emerged more or less simultaneously with the rise of massive LEO constellations that use hundreds or thousands of satellites in orbits below 2,000 kilometers to provide continual low-latency coverage. Finding a slightly cheaper, more efficient propellant for one large geostationary satellite doesn’t move the needle much. But doing the same for thousands of satellites that need to be replaced every several years? That’s a much more noticeable discount.

Were it not for mercury’s extreme toxicity, it would actually make an extremely attractive propellant. Apollo Fusion wanted to use a type of ion thruster called a Hall-effect thruster. Ion thrusters strip electrons from the atoms that make up a liquid or gaseous propellant, and then an electric field pushes the resultant ions away from the spacecraft, generating a modest thrust in the opposite direction. The physics of rocket engines means that the performance of these engines increases with the mass of the ion that you can accelerate.

Mercury is heavier than either xenon or krypton, the most commonly used propellants, meaning more thrust per expelled ion. It’s also liquid at room temperature, making it efficient to store and use. And it’s cheap—there’s not a lot of competition with anyone looking to buy mercury.
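The advantage of a heavy ion falls out of basic electrostatics. An ion of mass $m$ and charge $q$ accelerated through the thruster’s potential difference $V$ leaves with

$$qV = \tfrac{1}{2}mv^2 \quad\Rightarrow\quad v = \sqrt{2qV/m}, \qquad p = mv = \sqrt{2qVm}.$$

At a fixed beam current $I$, the thruster expels $I/q$ ions per second, so thrust $F = (I/q)\sqrt{2qVm}$ grows as $\sqrt{m}$: mercury (atomic mass ≈ 200.6) therefore beats xenon (≈ 131.3) and krypton (≈ 83.8) at the same current and voltage.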

Bender says that ZMWG, alongside PEER, caught wind of Apollo Fusion marketing its mercury-based thrusters to at least three companies deploying LEO constellations—OneWeb, Planet Labs, and SpaceX. Planet Labs, an Earth-imaging company, has at least 200 CubeSats in low Earth orbit. OneWeb and SpaceX, both wireless-communication providers, have many more. OneWeb plans to have nearly 650 satellites in orbit by the end of 2022. SpaceX already has nearly 1,500 active satellites aloft in its Starlink constellation, with an eye toward deploying as many as 30,000 satellites before its constellation is complete. Other operators, like Amazon with its Kuiper constellation, also plan to deploy thousands of satellites.

In 2019, a group of researchers in Italy and the United States estimated how much of the mercury used in spacecraft propellant might find its way back into Earth’s atmosphere. They figured that a hypothetical LEO constellation of 2,000 satellites, each carrying 100 kilograms of propellant, would emit 20 tonnes of mercury every year over the course of a 10-year life span. Three quarters of that mercury, the researchers suggested, would eventually wind up in the oceans.

That amounts to 1 percent of global mercury emissions from a constellation only a fraction of the size of the one planned by SpaceX alone. And if multiple constellations adopted the technology, they would represent a significant percentage of global mercury emissions—especially, the researchers warned, as other uses of mercury are phased out as planned in the years ahead.
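The arithmetic behind that estimate is straightforward; a quick check of the scenario’s numbers:

    sats = 2_000                      # hypothetical LEO constellation
    propellant_kg = 100               # mercury propellant per satellite
    lifespan_years = 10
    per_year = sats * propellant_kg / 1_000 / lifespan_years
    print(per_year)                   # 20.0 tonnes of mercury emitted per year
    print(0.75 * per_year)            # ~15 tonnes/year eventually reaching the oceans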

Fortunately, it’s unlikely that any mercury propellant thrusters will even get off the ground. Prior to the fourth meeting of the Minamata Convention, Canada, the European Union, and Norway highlighted the dangers of mercury propellant, alongside ZMWG. The provision to ban mercury usage in satellites was passed on 26 March 2022.

The question now is enforcement. “Obviously, there aren’t any U.N. peacekeepers going into space to shoot down” mercury-based satellites, says Bell. But the 137 countries, including the United States, who are party to the convention have pledged to adhere to its provisions—including the propellant ban.

The United States is notable in that list because as Bender explains, it did not ratify the Minamata Convention via the U.S. Senate but instead deposited with the U.N. an instrument of acceptance. In a 7 November 2013 statement (about one month after the original Minamata Convention was adopted), the U.S. State Department said the country would be able to fulfill its obligations “under existing legislative and regulatory authority.”

Bender says the difference is “weedy” but that this appears to mean that the U.S. government has agreed to adhere to the Minamata Convention’s provisions because it already has similar laws on the books. Except there is still no existing U.S. law or regulation banning mercury propellant. For Bender, that creates some uncertainty around compliance when the provision goes into force in 2025.

Still, with a U.S. company being the first startup to toy with mercury propellant, it might be ideal to have a stronger U.S. ratification of the Minamata Convention before another company hits on the same idea. “There will always be market incentives to cut corners and do something more dangerously,” Bell says.

Update 19 April 2022: In an email, a spokesperson for Astra stated that the company's propulsion system, the Astra Spacecraft Engine, does not use mercury. The spokesperson also stated that Astra has no plans to use mercury propellant and that the company does not have anything in orbit that uses mercury.

Updated 20 April 2022 to clarify that Apollo Fusion was building thrusters that used mercury, not that they had actually used them.


Match ID: 35 Score: 2.14 source: spectrum.ieee.org age: 222 days
qualifiers: 2.14 toxic

Ahrefs vs SEMrush: Which SEO Tool Should You Use?
Tue, 01 Mar 2022 12:16:00 +0000


SEMrush and Ahrefs are among the most popular tools in the SEO industry. Both companies have been in business for years and serve thousands of customers each month.

If you're a professional SEO or trying to do digital marketing on your own, at some point you'll likely consider using a tool to help with your efforts. Ahrefs and SEMrush are two names that will likely appear on your shortlist.

In this guide, I'm going to help you learn more about these SEO tools and how to choose the one that's best for your purposes.

What is SEMrush?


SEMrush is a popular SEO tool with a wide range of features—it's the leading competitor-research service for online marketers. SEMrush's Keyword Magic tool offers over 20 billion keywords for Google, constantly updated, making it the largest keyword database of its kind.

The program began in 2007 as SeoQuake, a small Firefox extension.

Features

  • Most accurate keyword data: Accurate keyword search volume data is crucial for SEO and PPC campaigns by allowing you to identify what keywords are most likely to bring in big sales from ad clicks. SEMrush constantly updates its databases and provides the most accurate data.
  • Largest Keyword database: SEMrush's Keyword Magic Tool now features 20-billion keywords, providing marketers and SEO professionals the largest database of keywords.

  • All SEMrush users receive daily ranking data and mobile volume information by default, with no additional payment or add-ons needed, plus the option to buy additional keywords
  • Most accurate position tracking tool: This tool provides all subscribers with basic tracking capabilities, making it suitable for SEO professionals. Plus, the Position Tracking tool provides local-level data to everyone who uses the tool.
  • SEO Data Management: SEMrush makes managing your online data easy by allowing you to create visually appealing custom PDF reports, including Branded and White Label reports, report scheduling, and integration with GA, GMB, and GSC.
  • Toxic link monitoring and penalty recovery: With SEMrush, you can analyze toxic backlinks, toxic scores, and toxic markers in detail, and reach out to the offending sites.
  • Content Optimization and Creation Tools: SEMrush offers content optimization and creation tools that let you create SEO-friendly content. Some features include the SEO Writing Assistant, On-Page SEO Checker, SEO Content Template, Content Audit, Post Tracking, and Brand Monitoring.

Ahrefs



Ahrefs is a leading SEO platform that offers a set of tools to grow your search traffic, research your competitors, and monitor your niche. The company was founded in 2010, and it has become a popular choice among SEO tools. Ahrefs has a keyword index of over 10.3 billion keywords and offers accurate, extensive backlink data updated every 15-30 minutes—the world's most extensive backlink index.

Features

  • Backlink alerts and new keywords: Get an alert when your site is linked to or discussed in blogs, forums, or comments, or when new keywords are added to a blog post about you.
  • Intuitive interface: The intuitive design of the widget helps you see the overall health of your website and search engine ranking at a glance.
  • Site Explorer: The Site Explorer will give you an in-depth look at your site's search traffic.
  • Domain Comparison
  • Reports with charts and graphs
  • JavaScript rendering and a site audit can identify SEO issues.
  • A question explorer that provides well-crafted topic suggestions

Direct Comparisons: Ahrefs vs SEMrush

Now that you know a little more about each tool, let's take a look at how they compare. I'll analyze each tool to see how they differ in interfaces, keyword research resources, rank tracking, and competitor analysis.

User Interface

Ahrefs and SEMrush both offer comprehensive information and quick metrics regarding your website's SEO performance. However, Ahrefs takes a bit more of a hands-on approach to getting your account fully set up, whereas SEMrush's simpler dashboard can give you access to the data you need quickly.

In this section, we provide a brief overview of the elements found on each dashboard and highlight the ease with which you can complete tasks.

AHREFS



The Ahrefs dashboard is less cluttered than that of SEMrush, and its primary menu is at the very top of the page, with a search bar designed only for entering URLs.

Additional features of the Ahrefs platform include:

  • You can see analytics from the dashboard, from search engine rankings to domain ratings, referring domains, and backlinks
  • Jumping from one tool to another is easy. You can use the Keyword Explorer to find a keyword to target and then directly track your ranking with one click.
  • The website offers a tooltip helper tool that allows you to hover your mouse over something that isn't clear and get an in-depth explanation.

SEMRUSH



When you log into the SEMrush Tool, you will find four main modules. These include information about your domains, organic keyword analysis, ad keyword, and site traffic.

You'll also find some other options like

  • A search bar allows you to enter a domain, keyword, or anything else you wish to explore.
  • A menu on the left side of the page provides quick links to relevant information, including marketing insights, projects, keyword analytics, and more.
  • The customer support resources located directly within the dashboard can be used to communicate with the support team or to learn about other resources such as webinars and blogs.
  • Detailed descriptions of every resource offered. This detail is beneficial for new marketers who are just starting out.

WHO WINS?

Both Ahrefs and SEMrush have user-friendly dashboards, but Ahrefs is less cluttered and easier to navigate. On the other hand, SEMrush offers dozens of extra tools, including access to customer support resources.

When deciding on which dashboard to use, consider what you value in the user interface, and test out both.

Rank Tracking

If you're looking to track your website's search engine ranking, rank tracking features can help. You can also use them to monitor your competitors.

Let's take a look at Ahrefs vs. SEMrush to see which tool does a better job.

Ahrefs



The Ahrefs Rank Tracker is simpler to use. Just type in the domain name and keywords you want to analyze, and it spits out a report showing you the search engine results page (SERP) ranking for each keyword you enter.

Rank Tracker looks at the ranking performance of keywords and compares them with the top rankings for those keywords. Ahrefs also offers:

  • Metrics that help you understand your visibility, traffic, average position, and keyword difficulty.
  • An indication of whether a keyword would be profitable to target.

SEMRUSH



SEMrush offers a tool called Position Tracking. This tool is a project tool—you must set it up as a new project. Below are a few of the most popular features of the SEMrush Position Tracking tool:

  • All subscribers receive regular data updates and mobile search rankings upon subscribing.
  • The platform can track several SERP features, including local rankings.
  • Intuitive reports let you track statistics for the pages on your website, as well as the keywords used on those pages.
  • The Cannibalization report identifies pages that may be competing with each other.

WHO WINS?

Ahrefs is a more user-friendly option. It takes seconds to enter a domain name and keywords. From there, you can quickly decide whether to proceed with that keyword or figure out how to rank better for other keywords.

SEMrush allows you to check your mobile rankings and ranking updates daily, which is something Ahrefs does not offer. SEMrush also offers social media rankings, a tool you won't find within the Ahrefs platform. Both are good—which one do you like? Let me know in the comments.

Keyword Research

Keyword research is closely related to rank tracking, but it's used for deciding which keywords you plan on using for future content rather than those you use now.

When it comes to SEO, keyword research is the most important thing to consider when comparing the two platforms.

AHREFS



The Ahrefs Keyword Explorer provides you with thousands of keyword ideas and filters search results based on the chosen search engine.

Ahrefs supports several features, including:

  • It can search multiple keywords in a single search and analyze them together. (SEMrush also offers this feature, in its Keyword Overview.)
  • Ahrefs has a variety of keywords for different search engines, including Google, YouTube, Amazon, Bing, Yahoo, Yandex, and other search engines.
  • When you click on a keyword, you can see its search volume and keyword difficulty, as well as related keywords you might not have considered.

SEMRUSH



SEMrush's Keyword Magic Tool has over 20 billion keywords for Google. You can type in any keyword you want, and a list of suggested keywords will appear.

The Keyword Magic Tool also lets you:

  • Show performance metrics by keyword
  • Filter search results by both broad and exact keyword matches.
  • Show data like search volume, trends, keyword difficulty, and CPC.
  • Show the first 100 Google search results for any keyword.
  • Identify SERP Features and Questions related to each keyword
  • SEMrush has released a new Keyword Gap Tool that uncovers potentially useful keyword opportunities for you, including both paid and organic keywords.

WHO WINS?

Both of these tools offer keyword research features and allow users to break down complicated tasks into something that can be understood by beginners and advanced users alike.

If you're interested in keyword suggestions, SEMrush appears to have more keyword suggestions than Ahrefs does. It also continues to add new features, like the Keyword Gap tool and SERP Questions recommendations.

Competitor Analysis

Both platforms offer competitor analysis tools, eliminating the need to come up with keywords off the top of your head. Each tool helps you find the keywords your competitors rank for, so you know which ones will be valuable to you.

AHREFS



Ahrefs' domain comparison tool lets you compare up to five websites (your website and four competitors) side by side. It also shows you how your site ranks against the others with metrics such as backlinks, domain ratings, and more.

Use the Competing Domains section to see a list of your most direct competitors, and explore how many keyword matches your competitors have.

To find more information about your competitor, you can look at the Site Explorer and Content Explorer tools and type in their URL instead of yours.

SEMRUSH



SEMrush provides a variety of insights into your competitors' marketing tactics. The platform enables you to research your competitors effectively. It also offers several resources for competitor analysis including:

Traffic Analytics helps you identify where your audience comes from, how they engage with your site, what devices visitors use to view your site, and how your audiences overlap with other websites.

SEMrush's Organic Research examines your website's major competitors and shows their organic search rankings, the keywords they are ranking for, whether they rank for any SERP features, and more.

The Market Explorer search field allows you to type in a domain and lists websites or articles similar to what you entered. Market Explorer also allows users to perform in-depth data analytics on these companies and markets.

WHO WINS?

SEMrush wins here because it has more tools dedicated to competitor analysis than Ahrefs. However, Ahrefs offers a lot of functionality in this area, too. It takes a combination of both tools to gain an advantage over your competition.

Pricing

AHREFS

  • Lite: $99/month
  • Standard: $179/month
  • Lite (annual): $990/year
  • Standard (annual): $1,790/year

SEMRUSH

  • Pro Plan: $119.95/month
  • Guru Plan: $229.95/month
  • Business Plan: $449.95/month

Which SEO tool should you choose for digital marketing?

When it comes to keyword research, it can be hard to decide between the two.

Consider choosing Ahrefs if you:

  • Like a friendly, clean interface
  • Are looking for simple keyword suggestions
  • Want keyword data for more search engines, such as Amazon, Bing, Yahoo, Yandex, and Baidu

Consider SEMrush if you:

  • Want more marketing and SEO features
  • Need competitor analysis tools
  • Need to keep your backlink profile clean
  • Want more keyword suggestions for Google

Both tools are great. Choose the one that meets your requirements, and if you have experience with either Ahrefs or SEMrush, let me know in the comment section which works well for you.



Match ID: 36 Score: 2.14 source: www.crunchhype.com age: 271 days
qualifiers: 2.14 toxic

Filter efficiency 95.195 (37 matches/770 results)


********** UNIVERSITY **********
return to top



How the Graphical User Interface Was Invented
Sun, 20 Nov 2022 20:00:00 +0000


Mice, windows, icons, and menus: these are the ingredients of computer interfaces designed to be easy to grasp, simplicity itself to use, and straightforward to describe. The mouse is a pointer. Windows divide up the screen. Icons symbolize application programs and data. Menus list choices of action.

But the development of today’s graphical user interface was anything but simple. It took some 30 years of effort by engineers and computer scientists in universities, government laboratories, and corporate research groups, piggybacking on each other’s work, trying new ideas, repeating each other’s mistakes.


This article was first published as “Of Mice and menus: designing the user-friendly interface.” It appeared in the September 1989 issue of IEEE Spectrum. A PDF version is available on IEEE Xplore. The photographs and diagrams appeared in the original print version.


Throughout the 1970s and early 1980s, many of the early concepts for windows, menus, icons, and mice were arduously researched at Xerox Corp.’s Palo Alto Research Center (PARC), Palo Alto, Calif. In 1973, PARC developed the prototype Alto, the first of two computers that would prove seminal in this area. More than 1200 Altos were built and tested. From the Alto’s concepts, starting in 1975, Xerox’s System Development Department then developed the Star and introduced it in 1981—the first such user-friendly machine sold to the public.

In 1984, the low-cost Macintosh from Apple Computer Inc., Cupertino, Calif., brought the friendly interface to thousands of personal computer users. During the next five years, the price of RAM chips fell enough to accommodate the huge memory demands of bit-mapped graphics, and the Mac was followed by dozens of similar interfaces for PCs and workstations of all kinds. By now, application programmers are becoming familiar with the idea of manipulating graphic objects.

The Mac’s success during the 1980s spurred Apple Computer to pursue legal action over ownership of many features of the graphical user interface. Suits now being litigated could assign those innovations not to the designers and their companies, but to those who first filed for legal protection on them.

The GUI started with Sketchpad


The grandfather of the graphical user interface was Sketchpad [see photograph]. Massachusetts Institute of Technology student Ivan E. Sutherland built it in 1962 as a Ph.D. thesis at MIT’s Lincoln Laboratory in Lexington, Mass. Sketchpad users could not only draw points, line segments, and circular arcs on a cathode ray tube (CRT) with a light pen—they could also assign constraints to, and relationships among, whatever they drew.

Arcs could have a specified diameter, lines could be horizontal or vertical, and figures could be built up from combinations of elements and shapes. Figures could be moved, copied, shrunk, expanded, and rotated, with their constraints (shown as onscreen icons) dynamically preserved. At a time when a CRT monitor was a novelty in itself, the idea that users could interactively create objects by drawing on a computer was revolutionary.


[Photo: A man seated at a round cathode-ray display showing a white square and triangle on a black background.]

Moreover, to zoom in on objects, Sutherland wrote the first window-drawing program, which required him to come up with the first clipping algorithm. Clipping is a software routine that calculates which part of a graphic object is to be displayed and displays only that part on the screen. The program must calculate where a line is to be drawn, compare that position to the coordinates of the window in use, and prevent the display of any line segment whose coordinates fall outside the window.
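
In modern terms, the routine Sutherland describes can be sketched as follows. This is a minimal illustration in Python using the parametric (Liang-Barsky-style) formulation, not a reconstruction of Sketchpad's actual TX-2 code; the function name and interface are assumptions of this sketch.

    def clip_segment(p0, p1, xmin, ymin, xmax, ymax):
        """Clip segment p0-p1 to an axis-aligned window; None if invisible."""
        (x0, y0), (x1, y1) = p0, p1
        dx, dy = x1 - x0, y1 - y0
        t_enter, t_exit = 0.0, 1.0
        # Each (p, q) pair tests the segment against one window edge.
        for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                     (-dy, y0 - ymin), (dy, ymax - y0)):
            if p == 0:
                if q < 0:
                    return None                # parallel to this edge, outside it
            else:
                t = q / p
                if p < 0:
                    t_enter = max(t_enter, t)  # crossing into the window
                else:
                    t_exit = min(t_exit, t)    # crossing out of the window
        if t_enter > t_exit:
            return None                        # no part falls inside the window
        return ((x0 + t_enter * dx, y0 + t_enter * dy),
                (x0 + t_exit * dx, y0 + t_exit * dy))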

Though films of Sketchpad in operation were widely shown in the computer research community, Sutherland says today that there was little immediate fallout from the project. Running on MIT’s TX-2 mainframe, it demanded too much computing power to be practical for individual use. Many other engineers, however, see Sketchpad’s design and algorithms as a primary influence on an entire generation of research into user interfaces.

The origin of the computer mouse


The light pens used to select areas of the screen by interactive computer systems of the 1950s and 1960s—including Sketchpad—had drawbacks. To do the pointing, the user’s arm had to be lifted up from the table, and after a while that got tiring. Picking up the pen required fumbling around on the table or, if it had a holder, taking the time after making a selection to put it back.

Sensing an object with a light pen was straightforward: the computer displayed spots of light on the screen and interrogated the pen as to whether it sensed a spot, so the program always knew just what was being displayed. Locating the position of the pen on the screen required more sophisticated techniques—like displaying a cross pattern of nine points on the screen, then moving the cross until it centered on the light pen.

In 1964, Douglas Engelbart, a research project leader at SRI International in Menlo Park, Calif., tested all the commercially available pointing devices, from the still-popular light pen to a joystick and a Graphicon (a curve-tracing device that used a pen mounted on the arm of a potentiometer). But he felt the selection failed to cover the full spectrum of possible pointing devices, and somehow he should fill in the blanks.

Then he remembered a 1940s college class he had taken that covered the use of a planimeter to calculate area. (A planimeter has two arms, with a wheel on each. The wheels can roll only along their axes; when one of them rolls, the other must slide.)

If a potentiometer were attached to each wheel to monitor its rotation, he thought, a planimeter could be used as a pointing device. Engelbart explained his roughly sketched idea to engineer William English, who with the help of the SRI machine shop built what they quickly dubbed “the mouse.”



This first mouse was big because it used single-turn potentiometers: one rotation of the wheels had to be scaled to move a cursor from one side of the screen to the other. But it was simple to interface with the computer: the processor just read frequent samples of the potentiometer positioning signals through analog-to-digital converters.
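
A rough sketch of that interface, with assumed values (the converter resolution and screen size below are illustrative, not SRI's actual figures):

    # Hypothetical sketch: the processor samples two potentiometers
    # through ADCs and scales each reading to a screen coordinate.
    ADC_MAX = 1023                   # assumed 10-bit converter
    SCREEN_W, SCREEN_H = 1024, 808   # assumed display size in pixels

    def cursor_position(sample_x, sample_y):
        # One full turn of a single-turn pot spans the whole screen,
        # which is why this first mouse had to be big.
        x = sample_x * (SCREEN_W - 1) // ADC_MAX
        y = sample_y * (SCREEN_H - 1) // ADC_MAX
        return x, y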

The cursor moved by the mouse was easy to locate, since readings from the potentiometers determined the position of the cursor on the screen, unlike the light pen. But programmers for later windowing systems found that the software necessary to determine which object the mouse had selected was more complex than that for the light pen: they had to compare the mouse's position with that of all the objects displayed onscreen.
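
That comparison amounts to a hit test over every displayed object. A minimal sketch, assuming for illustration that each object carries a rectangular bounding box:

    # Hypothetical hit test. A light pen reports directly whether it saw
    # the spot being displayed; mouse software must instead search every
    # on-screen object for one that contains the cursor.
    def pick_object(cursor, objects):
        cx, cy = cursor
        for obj in reversed(objects):      # topmost object was drawn last
            x, y, w, h = obj["bbox"]       # assumed bounding-box field
            if x <= cx < x + w and y <= cy < y + h:
                return obj
        return None                        # cursor is over empty background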

The computer mouse gets redesigned—and redesigned again

Engelbart’s group at SRI ran controlled experiments with mice and other pointing devices, and the mouse won hands down. People adapted to it quickly, it was easy to grab, and it stayed where they put it. Still, Engelbart wanted to tinker with it. After experimenting, his group had concluded that the proper ratio of cursor movement to mouse movement was about 2:1, but he wanted to try varying that ratio—decreasing it at slow speeds and raising it at fast speeds—to improve user control of fine movements and speed up larger movements. Some modern mouse-control software incorporates this idea, including that of the Macintosh.
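
In modern terms this speed-dependent ratio is pointer acceleration. Here is a minimal sketch; only the roughly 2:1 baseline comes from the SRI experiments, while the speed thresholds below are invented for illustration:

    def cursor_delta(mouse_dx, mouse_dy):
        # Scale mouse movement to cursor movement, speed-dependently:
        # a lower ratio for fine positioning, a higher one for sweeps.
        speed = (mouse_dx ** 2 + mouse_dy ** 2) ** 0.5
        if speed < 2:
            gain = 1.0    # slow, precise movements
        elif speed < 10:
            gain = 2.0    # the baseline 2:1 cursor-to-mouse ratio
        else:
            gain = 4.0    # fast sweeps across the screen
        return mouse_dx * gain, mouse_dy * gain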

The mouse, still experimental at this stage, did not change until 1971. Several members of Engelbart’s group had moved to the newly established PARC, where many other researchers had seen the SRI mouse and the test report. They decided there was no need to repeat the tests; any experimental systems they designed would use mice.

Said English, “This was my second chance to build a mouse; it was obvious that it should be a lot smaller, and that it should be digital.” Chuck Thacker, then a member of the research staff, advised PARC to hire inventor Jack Hawley to build it.

Hawley decided the mouse should use shaft encoders, which measure position by a series of pulses, instead of potentiometers (both were covered in Engelbart’s 1970 patent), to eliminate the expensive analog-to-digital converters. The basic principle, of one wheel rolling while the other slid, was licensed from SRI.

The ball mouse was the “easiest patent I ever got. It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”
—Ron Rider

In 1972, the mouse changed again. Ron Rider, now vice president of systems architecture at PARC but then a new arrival, said he was using the wheel mouse while an engineer made excuses for its asymmetric operation (one wheel dragging while one turned). “I suggested that they turn a trackball upside down, make it small, and use it as a mouse instead,” Rider told IEEE Spectrum. This device came to be known as the ball mouse. “Easiest patent I ever got,” Rider said. “It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”

Defining terms


Bit map

The pixel pattern that makes up the graphic display on a computer screen.

Clicking

The motion of pressing a mouse button to initiate an action by software; some actions require double-clicking.

Graphical user interface (GUI)

The combination of windowing displays, menus, icons, and a mouse that is increasingly used on personal computers and workstations.

Icon

An onscreen drawing that represents programs or data.

Menu

A list of command options currently available to the computer user; some stay onscreen, while pop-up or pull-down menus are requested by the user.

Mouse

A device whose motion across a desktop or other surface causes an on-screen cursor to move commensurately; today’s mice move on a ball and have one, two, or three buttons.

Raster display

A cathode ray tube on which images are displayed as patterns of dots, scanned onto the screen sequentially in a predetermined pattern of lines.

Vector display

A cathode ray tube whose gun scans lines, or vectors, onto the screen phosphor.

Window

An area of a computer display, usually one of several, in which a particular program is executing.


In the PARC ball mouse design, the weight of the mouse is transferred to the ball by a swivel device and on one or two casters at the end of the mouse farthest from the wire “tail.” A prototype was built by Xerox’s Electronics Division in El Segundo, Calif., then redesigned by Hawley. The rolling ball turned two perpendicular shafts, with a drum on the end of each that was coated with alternating stripes of conductive and nonconductive material. As the drum turned, the stripes transmitted electrical impulses through metal wipers.

When Apple Computer decided in 1979 to design a mouse for its Lisa computer, the design mutated yet again. Instead of a metal ball held against the substrate by a swivel, Apple used a rubber ball whose traction depended on the friction of the rubber and the weight of the ball itself. Simple pads on the bottom of the case carried the weight, and optical scanners detected the motion of the internal wheels. The device had loose tolerances and few moving parts, so that it cost perhaps a quarter as much to build as previous ball mice.

How the computer mouse gained and lost buttons

The first, wooden, SRI mouse had only one button, to test the concept. The plastic batch of SRI mice had three side-by-side buttons—all there was room for, Engelbart said. The first PARC mouse had a column of three buttons, again because that best fit the mechanical design. Today, the Apple mouse has one button, while the rest have two or three. The issue is no longer space—a standard 6-by-10-cm mouse could now have dozens of buttons—but human factors, and the experts have strong opinions.

Said English, now director of internationalization at Sun Microsystems Inc., Mountain View, Calif.: “Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.” He sees two buttons as the minimum because two functions are basic to selecting an object: pointing to its start, then extending the motion to the end of the object.

William Verplank, a human factors specialist in the group that tested the graphical interface at Xerox from 1978 into the early 1980s, concurred. He told Spectrum that with three buttons, Alto users forgot which button did what. The group’s tests showed that one button was also confusing, because it required actions such as double-clicking to select and then open a file.

“We have agonizing videos of naive users struggling” with these problems, Verplank said. They concluded that for most users, two buttons (as used on the Star) are optimal, if a button means the same thing in every application. English experimented with one-button mice at PARC before concluding they were a bad idea.


“Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.”
—William English


[Photo: A computer monitor with a chunky white keyboard sitting on a desk.]

But many interface designers dislike multiple buttons, saying that double-clicking a single button to select an item is easier than remembering which button points and which extends. Larry Tesler, formerly a computer scientist at PARC, brought the one-button mouse to Apple, where he is now vice president of advanced technology. The company's rationale is that, to attract novices to its computers, one button was as simple as it could get.

More than two million one-button Apple mice are now in use. The Xerox and Microsoft two-button mice are less common than either Apple’s ubiquitous one-button model or the three-button mice found on technical workstations. Dozens of companies manufacture mice today; most are slightly smaller than a pack of cigarettes, with minor variations in shape.

How windows first came to the computer screen


In 1962, Sketchpad could split its screen horizontally into two independent sections. One section could, for example, give a close-up view of the object in the other section. Researchers call Sketchpad the first example of tiled windows, which are laid out side by side. They differ from overlapping windows, which can be stacked on top of each other, or overlaid, obscuring all or part of the lower layers.

Windows were an obvious means of adding functionality to a small screen. In 1969, Engelbart equipped NLS (as the On-Line System he invented at SRI during the 1960s was known, to distinguish it from the Off-Line System known as FLS) with windows. They split the screen into multiple parts horizontally or vertically, and introduced cross-window editing with a mouse.

By 1972, led by researcher Alan Kay, the Smalltalk programming language group at Xerox PARC had implemented their version of windows. They were working with far different technology from Sutherland or Engelbart: by deciding that their images had to be displayed as dots on the screen, they led a move from vector to raster displays, to make it simple to map the assigned memory location of each of those spots. This was the bit map invented at PARC, and made viable during the 1980s by continual performance improvements in processor logic and memory speed.

Experimenting with bit-map manipulation, Smalltalk researcher Dan Ingalls developed the bit-block transfer procedure, known as BitBlt. The BitBlt software enabled application programs to mix and manipulate rectangular arrays of pixel values in on-screen or off-screen memory, or between the two, combining the pixel values and storing the result in the appropriate bit-map location.
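
In outline, BitBlt behaves like the sketch below. This is not Ingalls's Smalltalk code: a real implementation works on packed words of the bit map and handles clipping and overlapping source and destination regions, all omitted here for brevity.

    # Minimal BitBlt sketch: combine a w-by-h rectangle of source pixels
    # into a destination bit map under a combination rule, storing the
    # result at the destination location.
    def bitblt(src, sx, sy, dst, dx, dy, w, h, rule=lambda s, d: s):
        for row in range(h):
            for col in range(w):
                s = src[sy + row][sx + col]
                d = dst[dy + row][dx + col]
                dst[dy + row][dx + col] = rule(s, d)

    # Example rules: straight copy (the default), OR to paint, and
    # XOR to flash a cursor without destroying what lies under it:
    # bitblt(glyph, 0, 0, screen, 100, 40, 16, 16, rule=lambda s, d: s ^ d)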

BitBlt made it much easier to write programs to scroll a window (move an image through it), resize (enlarge or contract) it, and drag windows (move them from one location to another on screen). It led Kay to create overlapping windows. They were soon implemented by the Smalltalk group, but made clipping harder.

Some researchers question whether overlapping windows offer more benefits than tiled on the grounds that screens with overlapping windows become so messy the user gets lost.

In a tiling system, explained researcher Peter Deutsch, who worked with the Smalltalk group, the clipping borders are simply horizontal or vertical lines from one screen border to another, and software just tracks the location of those lines. But overlapping windows may appear anywhere on the screen, randomly obscuring bits and pieces of other windows, so that quite irregular regions must be clipped. Thus application software must constantly track which portions of their windows remain visible.
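
The bookkeeping Deutsch describes is region arithmetic. One common approach, assumed here for illustration rather than taken from the Smalltalk implementation, keeps each window's visible area as a list of rectangles and subtracts away every overlapping window stacked above it:

    def subtract(rect, hole):
        # Subtract 'hole' from 'rect'; return the up-to-four leftover
        # rectangles. Rectangles are (x0, y0, x1, y1) with x0 < x1, y0 < y1.
        x0, y0, x1, y1 = rect
        hx0, hy0, hx1, hy1 = hole
        if hx0 >= x1 or hx1 <= x0 or hy0 >= y1 or hy1 <= y0:
            return [rect]                       # no overlap: rect survives
        pieces = []
        if hy0 > y0:
            pieces.append((x0, y0, x1, hy0))    # strip above the hole
        if hy1 < y1:
            pieces.append((x0, hy1, x1, y1))    # strip below the hole
        mid_top, mid_bot = max(y0, hy0), min(y1, hy1)
        if hx0 > x0:
            pieces.append((x0, mid_top, hx0, mid_bot))  # strip to the left
        if hx1 < x1:
            pieces.append((hx1, mid_top, x1, mid_bot))  # strip to the right
        return pieces

With tiled windows, by contrast, each window's visible region stays a single rectangle, which is why that case remains simple.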

Some researchers still question whether overlapping windows offer more benefits than tiled, at least above a certain screen size, on the grounds that screens with overlapping windows become so messy the user gets lost. Others argue that overlapping windows more closely match users’ work patterns, since no one arranges the papers on their physical desktop in neat horizontal and vertical rows. Among software engineers, however, overlapping windows seem to have won for the user interface world.

So has the cut-and-paste editing model that Larry Tesler developed, first for the Gypsy text editor he wrote at PARC and later for Apple. Charles Irby—who worked on Xerox’s windows and is now vice president of development at Metaphor Computer Systems Inc., Mountain View, Calif.—noted, however, that cut-and-paste worked better for pure text-editing than for moving graphic objects from one application to another.

The origin of the computer menu bar


Menus—functions continuously listed onscreen that could be called into action with key combinations—were commonly used in defense computing by the 1960s. But it was only with the advent of BitBlt and windows that menus could be made to appear as needed and to disappear after use. Combined with a pointing device to indicate a user’s selection, they are now an integral part of the user-friendly interface: users no longer need to refer to manuals or memorize available options.

Instead, the choices can be called up at a moment’s notice whenever needed. And menu design has evolved. Some new systems use nested hierarchies of menus; others offer different menu versions—one with the most commonly used commands for novices, another with all available commands for the experienced user.

Among the first to test menus on demand was PARC researcher William Newman, in a program called Markup. Hard on his heels, the Smalltalk group built in pop-up menus that appeared on screen at the cursor site when the user pressed one of the mouse buttons.

Implementation was on the whole straightforward, recalled Deutsch. The one exception was determining whether the menu or the application should keep track of the information temporarily obscured by the menu. In the Smalltalk 76 version, the popup menu saved and restored the screen bits it overwrote. But in today’s multitasking systems, that would not work, because an application may change those bits without the menu’s knowledge. Such systems add another layer to the operating system: a display manager that tracks what is written where.
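
A sketch of the Smalltalk-76 scheme just described (illustrative only: the real system operated on packed bit maps rather than Python lists, and track_mouse_until_release is a hypothetical helper standing in for the menu's interaction loop):

    # Pop-up menu in the Smalltalk-76 style: save the screen bits the
    # menu will cover, draw the menu, then put the saved bits back.
    def with_popup_menu(screen, menu_bitmap, x, y):
        h, w = len(menu_bitmap), len(menu_bitmap[0])
        saved = [row[x:x + w] for row in screen[y:y + h]]  # bits underneath
        for r in range(h):
            screen[y + r][x:x + w] = menu_bitmap[r]        # draw the menu
        choice = track_mouse_until_release()               # hypothetical helper
        for r in range(h):
            screen[y + r][x:x + w] = saved[r]              # restore the screen
        return choice

As the article notes, this breaks down under multitasking, since another program may repaint the saved region while the menu is up.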

The production Xerox Star, in 1981, featured a further advance: a menu bar, essentially a row of words indicating available menus that could be popped up for each window. Human factors engineer Verplank recalled that the bar was at first located at the bottom of its window. But the Star team found users were more likely to associate a bar with the window below it, so it was moved to the top of its window.

Apple simplified things in its Lisa and Macintosh with a single bar placed at the top of the screen. This menu bar relates only to the window in use: the menus could be “pulled down” from the bar, to appear below it. Designer William D. Atkinson received a patent (assigned to Apple Computer) in August 1984 for this innovation.

One new addition that most user interface pioneers consider an advantage is the tear-off menu, which the user can move to a convenient spot on the screen and “pin” there, always visible for ready access.

Many windowing interfaces now offer command-key or keyboard alternatives for many commands as well. This return to the earliest of user interfaces—key combinations—neatly supplements menus, providing both ease of use for novices and for the less experienced, and speed for those who can type faster than they can point to a menu and click on a selection.

How the computer “icon” got its name


Sketchpad had on-screen graphic objects that represented constraints (for example, a rule that lines be the same length), and the Flex machine built in 1967 at the University of Utah by students Alan Kay and Ed Cheadle had squares that represented programs and data (like today’s computer “folders”). Early work on icons was also done by Bell Northern Research, Ottawa, Canada, stemming from efforts to replace the recently legislated bilingual signs with graphic symbols.

But the concept of the computer “icon” was not formalized until 1975. David Canfield Smith, a computer science graduate student at Stanford University in California, began work on his Ph.D. thesis in 1973. His advisor was PARC’s Kay, who suggested that he look at using the graphics power of the experimental Alto not just to display text, but rather to help people program.

David Canfield Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents.

Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents: a Russian icon of a saint is holy and is to be venerated. Smith’s computer icons contained all the properties of the programs and data represented, and therefore could be linked or acted on as if they were the real thing.

After receiving his Ph.D. in 1975, Smith joined Xerox in 1976 to work on Star development. The first thing he did, he said, was to recast his concept of icons in office terms. “I looked around my office and saw papers, folders, file cabinets, a telephone, and bookshelves, and it was an easy translation to icons,” he said.

Xerox researchers developed, tested, and revised icons for the Star interface for three years before the first version was complete. At first they attempted to make the icons look like a detailed photographic rendering of the object, recalled Irby, who worked on testing and refining the Xerox windows. Trading off label space, legibility, and the number of icons that fit on the screen, they decided to constrain icons to a 1-inch (2.5-centimeter) square of 64 by 64 pixels, or 512 eight-bit bytes.
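
The arithmetic behind that figure, assuming the Star's one bit per pixel: 64 × 64 = 4,096 pixels, or 4,096 bits, and 4,096 ÷ 8 = 512 eight-bit bytes.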

Then, Verplank recalls, they discovered that because of a background pattern based on two-pixel dots, the right-hand side of the icons appeared jagged. So they increased the width of the icons to 65 pixels, despite an outcry from programmers who liked the neat 16-bit breakdown. But the increase stuck, Verplank said, because they had already decided to store 72 bits per side to allow for white space around each icon.

After settling on a size for the icons, the Star developers tested four sets developed by two graphic designers and two software engineers. They discovered that, for example, resizing may cause problems. They shrunk the icon for a person—a head and shoulders—in order to use several of them to represent a group, only to hear one test subject say the screen resolution made the reduced icon look like a cross above a tombstone. Computer graphics artist Norm Cox, now of Cox & Hall, Dallas, Texas, was finally hired to redesign the icons.

Icon designers today still wrestle with the need to make icons adaptable to the many different system configurations offered by computer makers. Artist Karen Elliott, who has designed icons for Microsoft, Apple, Hewlett-Packard Co., and others, noted that on different systems an icon may be displayed in different colors, several resolutions, and a variety of gray shades, and it may also be inverted (light and dark areas reversed).

In the past few years, another concern has been added to icon designers’ tasks: internationalization. Icons designed in the United States often lack space for translations into languages other than English. Elliott therefore tries to leave space for both the longer words and the vertical orientation of some languages.


[Photos: A square white Macintosh computer with a white keyboard; in a separate image below, computer icons and the text “address book,” “address,” “addresses.”]

The main rule is to make icons simple, clean, and easily recognizable. Discarded objects are placed in a trash can on the Macintosh. On the NeXT Computer System, from NeXT Inc., Palo Alto, Calif.—the company formed by Apple cofounder Steven Jobs after he left Apple—they are dumped into a Black Hole. Elliott sees NeXT’s black hole as one of the best icons ever designed: “It is distinct; its roundness stands out from the other, square icons, and this is important on a crowded display. It fits my image of information being sucked away, and it makes it clear that dumping something is serious.”

English disagrees vehemently. The black hole “is fundamentally wrong,” he said. “You can dig paper out of a wastebasket, but you can’t dig it out of a black hole.” Another critic called the black hole familiar only to “computer nerds who read mostly science fiction and comics,” not to general users.

With the introduction of the Xerox Star in June 1981, the graphical user interface, as it is known today, arrived on the market. Though not a commercial triumph, the Star generated great interest among computer users, as the Alto before it had within the universe of computer designers.

Even before the Star was introduced, Jobs, then still at Apple, had visited Xerox PARC in November 1979 and asked the Smalltalk researchers dozens of questions about the Alto’s internal design. He later recruited Larry Tesler from Xerox to design the user interface of the Apple Lisa.

With the Lisa and then the Macintosh, introduced in January 1983 and January 1984 respectively, the graphical user interface reached the low-cost, high-volume computer market.

At almost $10,000, the Lisa was deemed too expensive for the office market. But aided by prizewinning advertising and its lower price, the Macintosh took the world by storm. Early Macs had only 128K bytes of RAM, which made them slow to respond because it was too little memory for heavy graphic manipulation. Also, the time needed for programmers to learn its Toolbox of graphics routines delayed application packages until well into 1985. But the Mac’s ease of use was indisputable, and it generated interest that spilled over into the MS-DOS world of IBM PCs and clones, as well as Unix-based workstations.

Who owns the graphical user interface?


The widespread acceptance of such interfaces, however, has led to bitter lawsuits to establish exactly who owns what. So far, none of several litigious companies has definitively established that it owns the software that implements windows, icons, or early versions of menus. But the suits continue.

Virtually all the companies that make and sell either wheel or ball mice paid license fees to SRI or to Xerox for their patents. Engelbart recalled that SRI patent attorneys inspected all the early work on the interface, but understood only hardware. After looking at developments like the implementation of windows, they told him that none of it was patentable.

At Xerox, the Star development team proposed 12 patents having to do with the user interface. The company’s patent committee rejected all but two on hardware—one on BitBlt, the other on the Star architecture. At the time, Charles Irby said, it was a good decision. Patenting required full disclosure, and no precedents then existed for winning software patent suits.


[Photo: A computer screen in blue and white with multiple open windows.]

[Photo: Three computer windows with grayscale images on a dark gray background.]

[Photo: Computer windows tinted blue on a black background, partially obscuring a planet and starfield.]


The most recent and most publicized suit was filed in March 1988, by Apple, against both Microsoft and Hewlett-Packard Co., Palo Alto, Calif. Apple alleges that HP’s New Wave interface, requiring version 2.03 of Microsoft’s Windows program, embodies the copyrighted “audio visual computer display” of the Macintosh without permission; that the displays of Windows 2.03 are illegal copies of the Mac’s audiovisual works; and that Windows 2.03 also exceeds the rights granted in a November 1985 agreement in which Microsoft acknowledged that the displays in Windows 1.0 were derivatives of those in Apple’s Lisa and Mac.

In March 1989, U.S. District Judge William W. Schwarzer ruled Microsoft had exceeded the bounds of its license in creating Windows 2.03. Then in July 1989 Schwarzer ruled that all but 11 of the 260 items that Apple cited in its suit were, in fact, acceptable under the 1985 agreement. The larger issue—whether Apple’s copyrights are valid, and whether Microsoft and HP infringed on them—will not now be examined until 1990.

Among those 11 are overlapping windows and movable icons. According to Pamela Samuelson, a noted software intellectual property expert and visiting professor at Emory University Law School, Atlanta, Ga., many experts would regard both as functional features of an interface that cannot be copyrighted, rather than “expressions” of an idea protectable by copyright.

But lawyers for Apple—and for other companies that have filed lawsuits to protect the “look and feel” of their screen displays—maintain that if such protection is not granted, companies will lose the economic incentive to market technological innovations. How is Apple to protect its investment in developing the Lisa and Macintosh, they argue, if it cannot license its innovations to companies that want to take advantage of them?

If the Apple-Microsoft case does go to trial on the copyright issues, Samuelson said, the court may have to consider whether Apple can assert copyright protection for overlapping windows, an interface feature on which patents have also been granted. In April 1989, for example, Quarterdeck Office Systems Inc., Santa Monica, Calif., received a patent for a multiple windowing system in its Desq system software, introduced in 1984.

Adding fuel to the legal fire, Xerox said in May 1989 it would ask for license fees from companies that use the graphical user interface. But it is unclear whether Xerox has an adequate claim to either copyright or patent protection for the early graphical interface work done at PARC. Xerox did obtain design patents on later icons, noted human factors engineer Verplank. Meanwhile, both Metaphor and Sun Microsystems have negotiated licenses with Xerox for their own interfaces.

To Probe Further

The September 1989 IEEE Computer contains an article, “The Xerox ‘Star’: A Retrospective,” by Jeff Johnson et al., covering development of the Star. “Designing the Star User Interface” [PDF], by David C. Smith et al., appeared in the April 1982 issue of Byte.

The Sept. 12, 1989, PC Magazine contains six articles on graphical user interfaces for personal computers and workstations. The July 1989 Byte includes “A Guide to [Graphical User Interfaces],” by Frank Hayes and Nick Baran, which describes 12 current interfaces for workstations and personal computers. “The Interface of Tomorrow, Today,” by Howard Rheingold, in the July 10, 1989, InfoWorld does the same. “The interface that launched a thousand imitations,” by Richard Rawles, in the March 21, 1989, MacWeek covers the Macintosh interface.

The human factors of user interface design are discussed in The Psychology of Everyday Things, by Donald A. Norman (Basic Books Inc., New York, 1988). The January 1989 IEEE Software contains several articles on methods, techniques, and tools for designing and implementing graphical interfaces. The Way Things Work, by David Macaulay (Houghton Mifflin Co., Boston, 1988), contains a detailed drawing of a ball mouse.

The October 1985 IEEE Spectrum covered Xerox PARC’s history in “Research at Xerox PARC: a founder’s assessment,” by George Pake (pp. 54-61) and “Inside the PARC: the ‘information architects,’” by Tekla Perry and Paul Wallich (pp. 62-75).

William Atkinson received patent no. 4,464,652 for the pulldown menu system on Aug. 8, 1984, and assigned it to Apple. Gary Pope received patent no. 4,823,108, for an improved system for displaying images in “windows” on a computer screen, on April 18, 1989, and assigned it to Quarterdeck Office Systems.

The wheel mouse patent, no. 3,541,541, “X-Y position indicator for a display system,” was issued to Douglas Engelbart on Nov. 17, 1970, and assigned to SRI International. The ball mouse patent, no. 3,835,464, was issued to Ronald Rider on Sept. 10, 1974, and assigned to Xerox.

The first selection device tests to include a mouse are covered in “Display-Selection Techniques for Text Manipulation,” by William English, Douglas Engelbart, and Melvyn Berman, in IEEE Transactions on Human Factors in Electronics, March 1967.

Sketchpad: A Man-Machine Graphical Communication System, by Ivan E. Sutherland (Garland Publishing Inc., New York City and London, 1980), reprints his 1963 Ph.D. thesis.










Match ID: 0 Score: 2.86 source: spectrum.ieee.org age: 7 days
qualifiers: 2.86 school

Video Friday: Little Robot, Big Stairs
Fri, 18 Nov 2022 16:43:36 +0000


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

Researchers at Carnegie Mellon University’s School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its height; traverse rocky, slippery, uneven, steep and varied terrain; walk across gaps; scale rocks and curbs, and even operate in the dark.

[ CMU ]

This robot is designed as a preliminary platform for humanoid robot research. The platform will be further extended with soles as well as upper limbs. In this video, the current lower limb version of the platform shows its capability in traversing uneven terrains without an active or passive ankle joint. The underactuation nature of the robot system has been well addressed with our locomotion-control framework, which also provides a new perspective on the leg design of bipedal robots.

[ CLEAR Lab ]

Thanks, Zejun!

Inbiodroid is a startup “dedicated to the development of fully immersive telepresence technologies that create a deeper connection between people and their environment.” Hot off the ANA Avatar XPrize competition, they’re doing a Kickstarter to fund the next generation of telepresence robots.

[ Kickstarter ] via [ Inbiodroid ]

Thanks, Alejandro!

A robot that can feel what a therapist feels when treating a patient, that can adjust the intensity of rehabilitation exercises at any time according to the patient's abilities and needs, and that can thus go on for hours without getting tired: It seems like fiction, and yet researchers from the Vrije Universiteit Brussel and Imec have now finished a prototype that unites all these skills in one robot.

[ VUB ]

Thanks, Bram!

Self-driving bikes present some special challenges, as this excellent video graphically demonstrates.

[ Paper ]

Pickle robots unload trucks. This is a short overview of the Pickle Robot Unload System in action at the end of October 2022—autonomously picking floor-loaded freight to unload a trailer. As a robotic system built on AI and advanced sensors, the system gets better and faster all the time.

[ Pickle ]

Learning agile skills can be challenging with reward shaping. Imitation learning provides an alternative solution by assuming access to decent expert references. However, such experts are not always available. We propose Wasserstein Adversarial Skill Imitation (WASABI), which acquires agile behaviors from partial and potentially physically incompatible demonstrations. In our work, Solo, a quadruped robot, learns highly dynamic skills (for example, backflips) from only handheld human demonstrations.

WASABI!

[ WASABI ]

NASA and the European Space Agency are developing plans for one of the most ambitious campaigns ever attempted in space: bringing the first samples of Mars material safely back to Earth for detailed study. The diverse set of scientifically curated samples now being collected by NASA’s Mars Perseverance rover could help scientists answer the question of whether ancient life ever arose on the Red Planet.

I thought I was promised some helicopters?

[ NASA ]

A Sanctuary general-purpose robot picks up and sorts medicine pills.

Remotely controlled, if that wasn’t clear.

[ Sanctuary ]

I don’t know what’s going on here, but it scares me.

[ KIMLAB ]

The Canadian Space Agency plans to send a rover to the moon as early as 2026 to explore a polar region. The mission will demonstrate key technologies and accomplish meaningful science. Its objectives are to gather imagery, measurements, and data on the surface of the moon, as well as to have the rover survive an entire night on the moon. Lunar nights, which last about 14 Earth days, are extremely cold and dark, posing a significant technological challenge.

[ CSA ]

Covariant Robotic Induction automates previously manual induction processes. This video shows the Covariant Robotic Induction solution picking a wide range of item types from totes, scanning bar codes, and inducting items onto a unit sorter. Note the robot’s ability to effectively handle items that are traditionally difficult to pick, such as transparent polybagged apparel and small, oddly shaped health and beauty items, and place them precisely onto individual trays.

[ Covariant ]

The solution will integrate Boston Dynamics’ Spot robot; the ExynPak, powered by ExynAI; and the Trimble X7 total station. It will enable fully autonomous missions inside complex and dynamic construction environments, which can result in consistent and precise reality capture for production and quality-control workflows.

[ Exyn ]

Our most advanced programmable robot yet is back and better than ever. Sphero RVR+ includes an advanced gearbox to improve torque and payload capacity; enhanced sensors, including an improved color sensor; and an improved rechargeable and swappable battery.

$279.

[ Sphero ]

I’m glad Starship is taking this seriously, although it’s hard to know from this video how well the robots behave when conditions are less favorable.

[ Starship ]

Complexity, cost, and power requirements for the actuation of individual robots can play a large factor in limiting the size of robotic swarms. Here we present PCBot, a minimalist robot that can precisely move on an orbital shake table using a bi-stable solenoid actuator built directly into its PCB. This allows the actuator to be built as part of the automated PCB manufacturing process, greatly reducing the impact it has on manual assembly.

[ Paper ]

Drone-racing world champion Thomas Bitmatta designed an indoor drone-racing track for ETH Zurich’s autonomous high-speed racing drones, and in something like half an hour, the autonomous drones were able to master the track at superhuman speeds (with the aid of a motion-capture system).

[ ETH RSL ] via [ BMS Racing ]

Thanks, Paul!

Moravec’s paradox is the observation that many things that are difficult for robots to do come easily to humans, and vice versa. Stanford University professor Chelsea Finn has been tasked to explain this concept to 5 different people: a child, a teen, a college student, a grad student, and an expert.

[ Wired ]

Roberto Calandra from Meta AI gives a talk about “Perceiving, Understanding, and Interacting Through Touch.”

[ UPenn ]

AI advancements have been motivated and inspired by human intelligence for decades. How can we use AI to expand our knowledge and understanding of the world and ourselves? How can we leverage AI to enrich our lives? In his Tanner Lecture, Eric Horvitz, chief science officer at Microsoft, will explore these questions and more, tracing the arc of intelligence from its origins and evolution in humans to its manifestations and prospects in the tools we create and use.

[ UMich ]


Match ID: 1 Score: 1.43 source: spectrum.ieee.org age: 9 days
qualifiers: 1.43 school

Filter efficiency 99.740 (2 matches/770 results)


********** ENTERTAINMENT **********
return to top



The Wonderful Wizard of Oz review – a joyful bad-taste romp
Sun, 27 Nov 2022 19:00:03 GMT

Tron theatre, Glasgow
Johnny McKnight’s irreverent and very funny panto is full of local colour and packed with put-downs – though never at the expense of the original film

Hollywood repeats itself, first as musical, second as panto. If you have never imagined Dorothy in drag, well, you have never reckoned on Johnny McKnight, the spangly gold standard of pantomime dames, swapping denim for gingham and following the yellow brick road out of Tronsis. It stretches only as far as the West End of Glasgow but packs in a dust-storm of diversions en route.

Somehow the writer, director and star takes one of cinema’s most sensitive tales and makes it raucous, irreverent and very, very funny. And he does so without mocking the original. In fact the movie is about the only thing that doesn’t get mocked in a show packed with put-downs of everything from rival Christmas shows to Alan Cumming in Burn, not forgetting the dazzling cast themselves.

Continue reading...
Match ID: 0 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 movie

Irene Cara, Oscar-winning singer of hits including ‘Fame,’ dies at 63
Sat, 26 Nov 2022 11:45:55 EST
The Oscar- and Grammy-winning singer and actress was best known for the title tracks of the movies "Fame" and "Flashdance."
Match ID: 1 Score: 20.00 source: www.washingtonpost.com age: 1 day
qualifiers: 20.00 movie

A Criminal Ratted Out His Friend to the FBI. Now He's Trying to Make Amends.
Sat, 26 Nov 2022 12:00:23 +0000

The FBI paid a convicted sex offender $90,000 to set up his friend and his friend’s mentally ill buddy in a terrorism sting.

The post A Criminal Ratted Out His Friend to the FBI. Now He’s Trying to Make Amends. appeared first on The Intercept.


Match ID: 2 Score: 20.00 source: theintercept.com age: 1 day
qualifiers: 20.00 movie

The 15 Best Holiday Movies to Stream This Season: Netflix, Hulu, Amazon, Disney+
Fri, 25 Nov 2022 14:00:00 +0000
Need help navigating all the services to find something good? Here’s what to watch to pass the time—and get into the spirit.
Match ID: 3 Score: 20.00 source: www.wired.com age: 2 days
qualifiers: 20.00 movie

The Revelatory Portraits of Paolo Di Paolo
Fri, 25 Nov 2022 11:00:00 +0000
The new documentary “The Treasure of His Youth,” directed by Bruce Weber, features interviews with the Italian photographer, whose subjects include Anna Magnani.
Match ID: 4 Score: 20.00 source: www.newyorker.com age: 2 days
qualifiers: 20.00 movie

Life in the Movies Versus Real Life
Fri, 25 Nov 2022 11:00:00 +0000
In the movies, Prince Charming will sweep you off your feet. In real life, podcasts have made it so that every man is unbearable.
Match ID: 5 Score: 20.00 source: www.newyorker.com age: 2 days
qualifiers: 20.00 movie

"Tantura" Exposes the Lie at the Heart of Israel's Founding Myth
Fri, 25 Nov 2022 11:00:32 +0000

A new documentary challenges Israel’s narrative about 1948 and the forced displacement of Palestinians.

The post “Tantura” Exposes the Lie at the Heart of Israel’s Founding Myth appeared first on The Intercept.


Match ID: 6 Score: 20.00 source: theintercept.com age: 2 days
qualifiers: 20.00 movie

Thanksgiving Movies Leave a Lot to Be Desired. America Needs a New One—Let's Make It 'Coco'
Thu, 24 Nov 2022 14:00:00 +0000
It's the best all-ages movie in years—and it's ready to stream. Watch it this week with your own cartoonish family.
Match ID: 7 Score: 17.14 source: www.wired.com age: 3 days
qualifiers: 17.14 movie

What “Tár” Knows About the Artist as Abuser
Thu, 24 Nov 2022 11:00:00 +0000
Todd Field’s film about the downfall of a world-famous conductor shows the toll that untouchability takes even on the person it supposedly benefits.
Match ID: 8 Score: 17.14 source: www.newyorker.com age: 3 days
qualifiers: 17.14 movie

Amazon plans to pour $1 billion annually on theater movies
Wed, 23 Nov 2022 19:07:50 GMT

Amazon.com Inc. will spend more than $1 billion annually on 12 to 15 movies that initially play in theaters, according to a Bloomberg report Wednesday. The ambitious outlay, believed to be the most by an internet company on theater-first movies, comes as Amazon dukes it out with video-streaming rivals Walt Disney Co., Apple Inc., and Netflix Inc., which are plowing billions of dollars a year into creating movie and episodic content. Amazon had no immediate comment.

Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.


Match ID: 9 Score: 14.29 source: www.marketwatch.com age: 4 days
qualifiers: 14.29 movie

The Stubborn, Enduring Vision of Jean-Marie Straub
Wed, 23 Nov 2022 11:00:00 +0000
He was one of the least known of great filmmakers, and among the most original of his time.
Match ID: 10 Score: 14.29 source: www.newyorker.com age: 4 days
qualifiers: 14.29 movie

10 Things for Americans to Be Grateful for at Thanksgiving 2022
Wed, 23 Nov 2022 11:00:34 +0000

Elon Musk, functioning elections, “Andor,” and death — Americans can genuinely give thanks for all these things and more this holiday.

The post 10 Things for Americans to Be Grateful for at Thanksgiving 2022 appeared first on The Intercept.


Match ID: 11 Score: 14.29 source: theintercept.com age: 4 days
qualifiers: 14.29 movie

How the First Transistor Worked
Sun, 20 Nov 2022 16:00:00 +0000


The vacuum-tube triode wasn’t quite 20 years old when physicists began trying to create its successor, and the stakes were huge. Not only had the triode made long-distance telephony and movie sound possible, it was driving the entire enterprise of commercial radio, an industry worth more than a billion dollars in 1929. But vacuum tubes were power-hungry and fragile. If a more rugged, reliable, and efficient alternative to the triode could be found, the rewards would be immense.

The goal was a three-terminal device made out of semiconductors that would accept a low-current signal into an input terminal and use it to control the flow of a larger current flowing between two other terminals, thereby amplifying the original signal. The underlying principle of such a device would be something called the field effect—the ability of electric fields to modulate the electrical conductivity of semiconductor materials. The field effect was already well known in those days, thanks to diodes and related research on semiconductors.


[Photo: In this cutaway of a point-contact transistor, two thin conductors are visible; these connect to the points that make contact with a tiny slab of germanium. One of these points is the emitter and the other is the collector. A third contact, the base, is attached to the reverse side of the germanium. AT&T ARCHIVES AND HISTORY CENTER]

But building such a device had proved an insurmountable challenge to some of the world’s top physicists for more than two decades. Patents for transistor-like devices had been filed starting in 1925, but the first recorded instance of a working transistor was the legendary point-contact device built at AT&T Bell Telephone Laboratories in the fall of 1947.

Though the point-contact transistor was the most important invention of the 20th century, there exists, surprisingly, no clear, complete, and authoritative account of how the thing actually worked. Modern, more robust junction and planar transistors rely on the physics in the bulk of a semiconductor, rather than the surface effects exploited in the first transistor. And relatively little attention has been paid to this gap in scholarship.

It was an ungainly looking assemblage of germanium, plastic, and gold foil, all topped by a squiggly spring. Its inventors were a soft-spoken Midwestern theoretician, John Bardeen, and a voluble and “somewhat volatile” experimentalist, Walter Brattain. Both were working under William Shockley, a relationship that would later prove contentious. In November 1947, Bardeen and Brattain were stymied by a simple problem. In the germanium semiconductor they were using, a surface layer of electrons seemed to be blocking an applied electric field, preventing it from penetrating the semiconductor and modulating the flow of current. No modulation, no signal amplification.


Sometime late in 1947 they hit on a solution. It featured two pieces of barely separated gold foil gently pushed by that squiggly spring into the surface of a small slab of germanium.

Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate. Indeed, the current edition of that bible of undergraduate EEs, The Art of Electronics by Horowitz and Hill, makes no mention of the point-contact transistor at all, glossing over its existence by erroneously stating that the junction transistor was a “Nobel Prize-winning invention in 1947.” But the transistor that was invented in 1947 was the point-contact; the junction transistor was invented by Shockley in 1948.

So it seems appropriate somehow that the most comprehensive explanation of the point-contact transistor is contained within John Bardeen’s lecture for that Nobel Prize, in 1956. Even so, reading it gives you the sense that a few fine details probably eluded even the inventors themselves. “A lot of people were confused by the point-contact transistor,” says Thomas Misa, former director of the Charles Babbage Institute for the History of Science and Technology, at the University of Minnesota.

Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate.

A year after Bardeen’s lecture, R. D. Middlebrook, a professor of electrical engineering at Caltech who would go on to do pioneering work in power electronics, wrote: “Because of the three-dimensional nature of the device, theoretical analysis is difficult and the internal operation is, in fact, not yet completely understood.”

Nevertheless, and with the benefit of 75 years of semiconductor theory, here we go. The point-contact transistor was built around a thumb-size slab of n-type germanium, which has an excess of negatively charged electrons. This slab was treated to produce a very thin surface layer that was p-type, meaning it had an excess of positive charges. These positive charges are known as holes. They are actually localized deficiencies of electrons that move among the atoms of the semiconductor very much as a real particle would. An electrically grounded electrode was attached to the bottom of this slab, creating the base of the transistor. The two strips of gold foil touching the surface formed two more electrodes, known as the emitter and the collector.

That’s the setup. In operation, a small positive voltage—just a fraction of a volt—is applied to the emitter, while a much larger negative voltage—4 to 40 volts—is applied to the collector, all with reference to the grounded base. The interface between the p-type layer and the n-type slab created a junction just like the one found in a diode: Essentially, the junction is a barrier that allows current to flow easily in only one direction, toward lower voltage. So current could flow from the positive emitter across the barrier, while no current could flow across that barrier into the collector.
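
The one-way behavior of such a p-n barrier is conventionally captured by the ideal-diode relation, a standard textbook formula rather than one given in this article:

    I = I_s * (exp(qV / kT) - 1)

Here I_s is the tiny reverse saturation current, q the electron charge, k Boltzmann's constant, and T the absolute temperature (kT/q is about 26 millivolts at room temperature). Under forward bias the current grows exponentially with voltage; under reverse bias it saturates near -I_s, which is why the emitter could inject current across the barrier while the collector, on its own, could not.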

[Photo: The Western Electric Type-2 point-contact transistor was the first transistor to be manufactured in large quantities, in 1951, at Western Electric’s plant in Allentown, Pa. By 1960, when this photo was taken, the plant had switched to producing junction transistors. AT&T ARCHIVES AND HISTORY CENTER]

Now, let’s look at what happens down among the atoms. First, we’ll disconnect the collector and see what happens around the emitter without it. The emitter injects positive charges—holes—into the p-type layer, and they begin moving toward the base. But they don’t make a beeline toward it. The thin layer forces them to spread out laterally for some distance before passing through the barrier into the n-type slab. Think about slowly pouring a small amount of fine powder onto the surface of water. The powder eventually sinks, but first it spreads out in a rough circle.

Now we connect the collector. Even though it can’t draw current by itself through the barrier of the p-n junction, its large negative voltage and pointed shape do result in a concentrated electric field that penetrates the germanium. Because the collector is so close to the emitter, and is also negatively charged, it begins sucking up many of the holes that are spreading out from the emitter. This charge flow results in a concentration of holes near the p-n barrier underneath the collector. This concentration effectively lowers the “height” of the barrier that would otherwise prevent current from flowing between the collector and the base. With the barrier lowered, current starts flowing from the base into the collector—much more current than what the emitter is putting into the transistor.

The amount of current depends on the height of the barrier. Small decreases or increases in the emitter’s voltage cause the barrier to fluctuate up and down, respectively. Thus very small changes in the emitter current control very large changes at the collector, so voilà! Amplification. (EEs will notice that the functions of base and emitter are reversed compared with those in later transistors, where the base, not the emitter, controls the response of the transistor.)
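
A back-of-the-envelope view of why this yields power gain, with hypothetical numbers chosen only to show the shape of the calculation (the article gives no figures for the 1947 device): the input circuit runs at a fraction of a volt while the output circuit runs at 4 to 40 volts, so

    power gain = (ΔI_collector × ΔV_collector) / (ΔI_emitter × ΔV_emitter)

If the collector current swing is comparable to the emitter's, but the signal develops, say, 1 volt across the collector load for every 10 millivolts of swing at the emitter, the power gain is on the order of 100.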

Ungainly and fragile though it was, it was a semiconductor amplifier, and its progeny would change the world. And its inventors knew it. The fateful day was 16 December 1947, when Brattain hit on the idea of using a plastic triangle belted by a strip of gold foil, with that tiny slit separating the emitter and collector contacts. This configuration gave reliable power gain, and the duo knew then that they had succeeded. In his carpool home that night, Brattain told his companions he’d just done “the most important experiment that I’d ever do in my life” and swore them to secrecy. The taciturn Bardeen, too, couldn’t resist sharing the news. As his wife, Jane, prepared dinner that night, he reportedly said, simply, “We discovered something today.” With their children scampering around the kitchen, she responded, “That’s nice, dear.”

It was a transistor, at last, but it was pretty rickety. The inventors later hit on the idea of electrically forming the collector by passing large currents through it during the transistor’s manufacturing. This technique enabled them to get somewhat larger current flows that weren’t so tightly confined within the surface layer. The electrical forming was a bit hit-or-miss, though. “They would just throw out the ones that didn’t work,” Misa notes.

Nevertheless, point-contact transistors went into production at many companies, under license to AT&T, and, in 1951, at AT&T’s own manufacturing arm, Western Electric. They were used in hearing aids, oscillators, and telephone-routing gear; in an experimental TV receiver built at RCA; and in the Tradic, the first airborne digital computer, among other systems. In fact, point-contact transistors remained in production until 1966, in part because of their superior speed compared with the alternatives.

The Bell Labs group wasn’t alone in its successful pursuit of a transistor. In Aulnay-sous-Bois, a suburb northeast of Paris, two German physicists, Herbert Mataré and Heinrich Welker, were also trying to build a three-terminal semiconductor amplifier. Working for a French subsidiary of Westinghouse, they were following up on very intriguing observations Mataré had made while developing germanium and silicon rectifiers for the German military in 1944. The two succeeded in creating a reliable point-contact transistor in June 1948.

They were astounded, a week or so later, when Bell Labs finally revealed the news of its own transistor, at a press conference on 30 June 1948. Though they were developed completely independently, and in secret, the two devices were more or less identical.

Here the story of the transistor takes a weird turn, breathtaking in its brilliance and also disturbing in its details. Bardeen’s and Brattain’s boss, William Shockley, was furious that his name was not included with Bardeen’s and Brattain’s on the original patent application for the transistor. He was convinced that Bardeen and Brattain had merely spun his theories about using fields in semiconductors into their working device, and had failed to give him sufficient credit. Yet in 1945, Shockley had built a transistor based on those very theories, and it hadn’t worked.

In 1953, RCA engineer Gerald Herzog led a team that designed and built the first “all-transistor” television (although, yes, it had a cathode-ray tube). The team used point-contact transistors produced by RCA under a license from Bell Labs. TRANSISTOR MUSEUM JERRY HERZOG ORAL HISTORY

At the end of December, barely two weeks after the initial success of the point-contact transistor, Shockley traveled to Chicago for the annual meeting of the American Physical Society. On New Year’s Eve, holed up in his hotel room and fueled by a potent mix of jealousy and indignation, he began designing a transistor of his own. In three days he scribbled some 30 pages of notes. By the end of the month, he had the basic design for what would become known as the bipolar junction transistor, or BJT, which would eventually supersede the point-contact transistor and reign as the dominant transistor until the late 1970s.

With insights gleaned from the Bell Labs work, RCA began developing its own point-contact transistors in 1948. The group included the seven shown here—four of which were used in RCA’s experimental, 22-transistor television set built in 1953. These four were the TA153 [top row, second from left], the TA165 [top, far right], the TA156 [bottom row, middle], and the TA172 [bottom, right]. TRANSISTOR MUSEUM JONATHAN HOPPE COLLECTION

The BJT was based on Shockley’s conviction that charges could, and should, flow through the bulk of the semiconductor rather than through a thin layer on its surface. The device consisted of three semiconductor layers, like a sandwich: an emitter, a base in the middle, and a collector. The layers were alternately doped, so there were two versions: n-type/p-type/n-type, called “NPN,” and p-type/n-type/p-type, called “PNP.”

The BJT relies on essentially the same principles as the point-contact transistor, but it uses two p-n junctions instead of one. When the device is used as an amplifier, a positive voltage applied to the base allows a small current to flow between it and the emitter, which in turn controls a large current between the collector and emitter.

Consider an NPN device. The base is p-type, so it has excess holes. But it is very thin and lightly doped, so there are relatively few holes. A tiny fraction of the electrons flowing in from the emitter combine with these holes and are removed from circulation, while the vast majority (more than 97 percent) keep flowing through the thin base and into the collector, setting up a strong current flow.

But those few electrons that do combine with holes must be drained from the base in order to maintain the p-type nature of the base and the strong flow of current through it. That removal of the “trapped” electrons is accomplished by a relatively small flow of current through the base. That trickle of current enables the much stronger flow of current into the collector, and then out of the collector and into the collector circuit. So, in effect, the small base current is controlling the larger collector circuit.
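
That ratio is worth making concrete. Taking the article’s figure that more than 97 percent of injected electrons survive the trip across the base, the current gain follows from simple bookkeeping. This is a back-of-the-envelope sketch, not a calculation from the article:

    # If alpha is the fraction of emitter electrons that reach the collector,
    # the remaining (1 - alpha) must be drained out through the base.
    alpha = 0.97                 # from the article's "more than 97 percent"
    beta = alpha / (1 - alpha)   # collector current divided by base current
    print(round(beta, 1))        # ~32.3: the base current steers a flow ~32x larger

Real devices push the surviving fraction much closer to 1, which is why practical gains of 100 or more became routine.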

Electric fields come into play, but they do not modulate the current flow, which the early theoreticians thought would have to happen for such a device to function. Here’s the gist: Both of the p-n junctions in a BJT are straddled by depletion regions, in which electrons and holes combine and there are relatively few mobile charge carriers. Voltage applied across the junctions sets up electric fields at each, which push charges across those regions. These fields enable electrons to flow all the way from the emitter, across the base, and into the collector.

In the BJT, “the applied electric fields affect the carrier density, but because that effect is exponential, it only takes a little bit to create a lot of diffusion current,” explains Ioannis “John” Kymissis, chair of the department of electrical engineering at Columbia University.

The very first transistors were a type known as point contact, because they relied on metal contacts touching the surface of a semiconductor. They ramped up output current—labeled “Collector current” in the top diagram—by using an applied voltage to overcome a barrier to charge flow. Small changes to the input, or “emitter,” current modulate this barrier, thus controlling the output current.

The bipolar junction transistor accomplishes amplification using much the same principles but with two semiconductor interfaces, or junctions, rather than one. As with the point-contact transistor, an applied voltage overcomes a barrier and enables current flow that is modulated by a smaller input current. In particular, the semiconductor junctions are straddled by depletion regions, across which the charge carriers diffuse under the influence of an electric field. Chris Philpot

The BJT was more rugged and reliable than the point-contact transistor, and those features primed it for greatness. But it took a while for that to become obvious. The BJT was the technology used to make integrated circuits, from the first ones in the early 1960s all the way until the late 1970s, when metal-oxide-semiconductor field-effect transistors (MOSFETs) took over. In fact, it was these field-effect transistors, first the junction field-effect transistor and then MOSFETs, that finally realized the decades-old dream of a three-terminal semiconductor device whose operation was based on the field effect—Shockley’s original ambition.

Such a glorious future could scarcely be imagined in the early 1950s, when AT&T and others were struggling to come up with practical and efficient ways to manufacture the new BJTs. Shockley himself went on to literally put the silicon into Silicon Valley. He moved to Palo Alto and in 1956 founded a company that led the switch from germanium to silicon as the electronic semiconductor of choice. Employees from his company would go on to found Fairchild Semiconductor, and then Intel.

Later in his life, after losing his company because of his terrible management, he became a professor at Stanford and began promulgating ungrounded and unhinged theories about race, genetics, and intelligence. In 1951 Bardeen left Bell Labs to become a professor at the University of Illinois at Urbana-Champaign, where he won a second Nobel Prize for physics, for a theory of superconductivity. (He is the only person to have won two Nobel Prizes in physics.) Brattain stayed at Bell Labs until 1967, when he joined the faculty at Whitman College, in Walla Walla, Wash.

Shockley died a largely friendless pariah in 1989. But his transistor would change the world, though it was still not clear as late as 1953 that the BJT would be the future. In an interview that year, Donald G. Fink, who would go on to help establish the IEEE a decade later, mused, “Is it a pimpled adolescent, now awkward, but promising future vigor? Or has it arrived at maturity, full of languor, surrounded by disappointments?”

It was the former, and all of our lives are so much the better because of it.

This article appears in the December 2022 print issue as “The First Transistor and How It Worked.”


Match ID: 12 Score: 5.71 source: spectrum.ieee.org age: 7 days
qualifiers: 5.71 movie

The Stereotype of the Woke Teen Is 'Tárring' Art
Wed, 16 Nov 2022 14:00:00 +0000
An archetype culled from the depths of social media seems to be short-circuiting screenwriters’ creativity.
Match ID: 13 Score: 2.86 source: www.wired.com age: 11 days
qualifiers: 2.86 movie

Celebrating the Art of Code
Tue, 15 Nov 2022 17:32:36 +0000


We use software as an engine to power and move our digital world. We create software from code—that is, special languages that we can read and write, and that determine what computers and digital technologies actually do. Writing code is an art, a creative human activity undertaken both by individuals and by teams using sophisticated tools. The code that people create can be art in another sense as well: those who are familiar with these special languages can see beauty within them, much as we all can appreciate the design of a stunning garden or public park.


The Computer History Museum invites you to accompany us for a yearlong tour through our “Art of Code” exhibition, which will provide many opportunities to explore remarkable stories, events, and historical source-code releases. We will hear from experts and luminaries about how software is created and the important consequences it has for society. We will explore firsthand the source code behind major developments in computing. And we will engage in discussions about critical issues today and their relationship to code.

This cassette, labeled “apple computer inc.” with “DOS 13 source” and “5/30/78 3:44pm Woz” written by hand, contains the Apple II’s DOS source code, which CHM released in 2013. Computer History Museum

What’s in store during CHM’s Art of Code

In September, we kicked off the Art of Code by marking the 50th anniversary of the breakthrough software language and environment Smalltalk. Smalltalk embraced a fresh, modular, and dynamic approach to the art of code called object-oriented programming. It was also a major step in the use of computers by children. The reverberations of each are still felt today. You can watch the full program of our Smalltalk event here and read a blog recap here.

Children animating horses in Smalltalk-72 on an Alto computer. In 2020, CHM began hosting the Smalltalk Zoo, a collection of historical versions of Smalltalk from 1972 to 1995 in an in-browser emulation. PARC Library

In October, we celebrated this year’s new CHM Fellows, who all have made remarkable contributions to the art of code, including Smalltalk, the pathbreaking Plato system of online communities, learning, and collaboration, and the development of the Internet itself. Watch the full program here and read a blog recap here.

We will announce other events for the Art of Code throughout the coming year.

Ken Thompson [seated] and Dennis Ritchie work with a DEC PDP-11, to which they migrated the UNIX software in 1971. In 2019, CHM released the source code for the earliest version of UNIX. Computer History Museum

Upcoming source code releases: PostScript, Apple Lisa

For programmers, developers, coders, and other students of code, Art of Code will have a remarkable series of historical source-code releases over the coming year. We will begin with the public release of the source code for PostScript, the innovative software behind printing as we know it, the rise of Adobe, and PDF. Closely following will be CHM’s public release of the source code for the Apple Lisa computer, a remarkably influential project by Apple that did so much to bring the graphical user interface and “What You See Is What You Get” approaches into personal computing.

This set of floppy disks, in sleeves labeled “Microsoft,” contains Microsoft’s MS-DOS 2.0. In 2010, CHM released the source code for MS-DOS 1.1 and 2.0. Computer History Museum

Later in the year, look for events and offerings marking the 50th anniversaries of some of the pathbreaking developments in networked personal computing at Xerox PARC: the revolutionary Xerox Alto system and the era-defining network technology of Ethernet. CHM will also be releasing a digital archive of PARC’s extraordinary historical contributions, including source code, documents, images, and more.

Join us!

Please join the Computer History Museum for this Art of Code journey by signing up here for news and updates, and begin your own journey through our Art of Code resources by watching this movie, exploring this playlist of CHM’s historical source-code releases, and visiting Make Software, Change the World! in person and online.

Editor's note: This post originally appeared on the blog of the Computer History Museum.


Match ID: 14 Score: 2.86 source: spectrum.ieee.org age: 12 days
qualifiers: 2.86 movie

NASA’s DART Mission Aims to Save the World
Fri, 23 Sep 2022 15:52:53 +0000


Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon.

Armageddon is big and noisy and stupid and shameless, and it’s going to be huge at the box office,” wrote Jay Carr of the Boston Globe.

Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live.

DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kilograms, hitting at 22,000 km/h, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs.

“Maybe once a century or so, there’ll be an asteroid sizeable enough that we’d like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of planetary defense officer at NASA.

“If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.”

So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone.

The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—near-Earth objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium.

The DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIACube cubesat will fly in formation to take images of the impact. Johns Hopkins APL/NASA

But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test.

NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’s speed by perhaps a few centimeters per second.
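
A crude momentum-conservation estimate shows why the expected change is so small. The spacecraft’s mass and speed below come from this article; Dimorphos’s mass and the ejecta “momentum enhancement” factor are assumptions of mine, so read the outputs as order-of-magnitude only.

    # Sketch: dv ~ beta * m * v / M for a momentum-conserving impact,
    # where beta > 1 if ejecta blasted off the crater adds extra recoil.
    m = 600.0              # spacecraft mass, kg (from the article)
    v = 22_000 / 3.6       # impact speed in m/s (22,000 km/h)
    for M in (1e9, 5e9):           # assumed asteroid masses, kg
        for beta in (1.0, 3.0):    # assumed momentum-enhancement factors
            dv_mm_s = beta * m * v / M * 1000
            print(f"M = {M:.0e} kg, beta = {beta}: dv ~ {dv_mm_s:.1f} mm/s")

For these guesses the nudge lands in the millimeters-per-second range; the true value depends strongly on the asteroid’s actual mass and on how much the ejecta recoil helps. As Johnson notes above, even a tiny trim applied years in advance turns a hit into a miss.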

The DART spacecraft, a 1-meter cube with two long solar panels, is elegantly simple, equipped with a telescope called DRACO, hydrazine maneuvering thrusters, a xenon-fueled ion engine and a navigation system called SMART Nav. It was launched by a SpaceX rocket in November. About 4 hours and 90,000 km before the hoped-for impact, SMART Nav will take over control of the spacecraft, using optical images from the telescope. Didymos, the larger object, should be a point of light by then; Dimorphos, the intended target, will probably not appear as more than one pixel until about 50 minutes before impact. DART will send one image per second back to Earth, but the spacecraft is autonomous; signals from the ground, 38 light-seconds away, would be useless for steering as the ship races in.
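
The 38-light-second figure, and with it the case for autonomy, is easy to sanity-check from the distances quoted earlier (straight-line geometry assumed):

    # Light-time check using the article's numbers.
    c_km_per_s = 299_792          # speed of light
    distance_km = 11_000_000      # Earth-to-Dimorphos distance
    one_way_s = distance_km / c_km_per_s
    print(one_way_s)              # ~36.7 s one way, i.e. the quoted ~38 light-seconds

    # During one command-and-response round trip, a craft closing at
    # 22,000 km/h covers roughly:
    print(22_000 / 3600 * 2 * one_way_s)  # ~450 km traveled before any reply arrives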

The DART spacecraft separated from its SpaceX Falcon 9 launch vehicle 55 minutes after liftoff from Vandenberg Space Force Base, in California, on 24 November 2021. In this image from the rocket, the spacecraft had not yet unfurled its solar panels. NASA

What’s more, nobody knows the shape or consistency of little Dimorphos. Is it a solid boulder or a loose cluster of rubble? Is it smooth or craggy, round or elongated? “We’re trying to hit the center,” says Evan Smith, the deputy mission systems engineer at the Johns Hopkins Applied Physics Laboratory, which is running DART. “We don’t want to overcorrect for some mountain or crater on one side that’s throwing an odd shadow or something.”

So on final approach, DART will cover 800 km without any steering. Thruster firings could blur the last images of Dimorphos’s surface, which scientists want to study. Impact should be imaged from about 50 km away by an Italian-made minisatellite, called LICIACube, which DART released two weeks ago.

“In the minutes following impact, I know everybody is going to be high-fiving on the engineering side,” said Tom Statler, DART’s program scientist at NASA, “but I’m going to be imagining all the cool stuff that is actually going on on the asteroid, with a crater being dug and ejecta being blasted off.”

There is, of course, a possibility that DART will miss, in which case there should be enough fuel on board to allow engineers to go after a backup target. But an advantage of the Didymos-Dimorphos pair is that it should help in calculating how much effect the impact had. Telescopes on Earth (plus the Hubble and Webb space telescopes) may struggle to measure infinitesimal changes in the orbit of Dimorphos around the sun; it should be easier to see how much its orbit around Didymos is affected. The simplest measurement may be of the changing brightness of the double asteroid, as Dimorphos moves in front of or behind its partner, perhaps more quickly or slowly than it did before impact.

“We are moving an asteroid,” said Statler. “We are changing the motion of a natural celestial body in space. Humanity’s never done that before.”


Match ID: 15 Score: 2.86 source: spectrum.ieee.org age: 65 days
qualifiers: 2.86 movie

Most Frequently Asked Questions About NFTs (Non-Fungible Tokens)
Sun, 06 Feb 2022 10:04:00 +0000

 

NFTs

Non-fungible tokens (NFTs) are among the most popular digital assets today, capturing the attention of cryptocurrency investors, whales, and people around the world. Many find it amazing that some users spend thousands or even millions of dollars on a single NFT-based image of a monkey or another token when anyone can simply take a screenshot for free. So here we answer some frequently asked questions about NFTs.

1) What is an NFT?

NFT stands for non-fungible token: a cryptographic token on a blockchain with unique identification codes that distinguish it from every other token. NFTs are unique and not interchangeable, which means no two NFTs are the same. An NFT can be a unique artwork, GIF, image, video, audio album, in-game item, collectible, and so on.

2) What is Blockchain?

A blockchain is a distributed digital ledger that allows for the secure storage of data. By recording any kind of information—such as bank account transactions, the ownership of Non-Fungible Tokens (NFTs), or Decentralized Finance (DeFi) smart contracts—in one place, and distributing it to many different computers, blockchains ensure that data can’t be manipulated without everyone in the system being aware.
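
As a loose illustration of that tamper-evidence, consider a toy ledger in which every block stores the hash of the block before it. This is nothing like a production blockchain (no consensus, mining, or signatures), just the chaining idea:

    # Minimal hash-chained ledger sketch (illustrative assumptions only).
    import hashlib, json

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    chain, prev = [], "0" * 64
    for record in ["alice mints token #1", "alice -> bob: token #1"]:
        block = {"prev": prev, "record": record}
        prev = block_hash(block)
        chain.append(block)

    # Tampering with the first record invalidates the second block's back-link:
    chain[0]["record"] = "mallory mints token #1"
    print(block_hash(chain[0]) == chain[1]["prev"])  # False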

3) What makes an NFT valuable?


The value of an NFT comes from its ability to be traded freely and securely on the blockchain, which is not possible with other current digital ownership solutions. The NFT points to its location on the blockchain, but doesn’t necessarily contain the digital property itself. “Fungible” means interchangeable: if you replace one bitcoin with another, you still have the same thing. A non-fungible item, such as a movie ticket, is impossible to replace with just any other movie ticket, because each ticket is unique to a specific time and place.

4) How do NFTs work?

One of the unique characteristics of non-fungible tokens (NFTs) is that an asset can be tokenised to create a digital certificate of ownership that can be bought, sold, and traded on the blockchain.

As with cryptocurrency, records of who owns what are stored on a ledger maintained by thousands of computers around the world. These records can’t be forged, because the whole system operates on an open, widely replicated network.

NFTs also contain smart contracts—small computer programs that run on the blockchain—that give the artist, for example, a cut of any future sale of the token.
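
The royalty idea can be pictured with a few lines of arithmetic. The 10 percent rate and the price here are invented for illustration; real rates are set by each contract:

    # Sketch of a resale royalty like those encoded in NFT smart contracts.
    def settle_sale(price_eth, royalty_rate=0.10):
        royalty = price_eth * royalty_rate       # artist's cut of this resale
        return royalty, price_eth - royalty      # (artist, seller) proceeds

    print(settle_sale(2.5))   # a 2.5 ETH resale -> (0.25, 2.25)

On Ethereum this logic would live in the token’s smart contract; the sketch only shows the arithmetic.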

5) What’s the connection between NFTs and cryptocurrency?

Non-fungible tokens (NFTs) aren't cryptocurrencies, but they do use blockchain technology. Many NFTs are based on Ethereum, where the blockchain serves as a ledger for all the transactions related to a given NFT and the properties it represents.

6) How to make an NFT?

Anyone can create an NFT. All you need is a digital wallet, some ether to cover fees, and a connection to an NFT marketplace where you’ll be able to upload and sell your creations.

7) How to validate the authenticity of an NFT?

When you purchase an NFT, that purchase is recorded on the blockchain—the ledger of transactions—and that entry acts as your proof of ownership.

8) How is an NFT valued? What are the most expensive NFTs?

The value of an NFT varies a lot based on the digital asset up for grabs. People use NFTs to trade and sell digital art, so when creating an NFT, you should consider the popularity of your digital artwork along with historical statistics.

In 2021, a digital artist called Pak created an artwork called The Merge. It was sold on the Nifty Gateway NFT marketplace for $91.8 million.

9) Can NFTs be used as an investment?

Non-fungible tokens can be used as investments. One can purchase an NFT and resell it at a profit. Certain NFT marketplaces also let creators keep a percentage of the proceeds from later sales of the assets they create.

10) Will NFTs be the future of art and collectibles?

Many people want to buy NFTs because it lets them support the arts and own something cool from their favorite musicians, brands, and celebrities. NFTs also give artists an opportunity to program in continual royalties if someone buys their work. Galleries see this as a way to reach new buyers interested in art.

11) How do we buy NFTs?

There are many places to buy digital assets, like OpenSea, and their policies vary. On NBA Top Shot, for instance, you sign up for a waitlist that can be thousands of people long. When a digital asset goes on sale, you are occasionally chosen to purchase it.

12) Can I mint an NFT for free?

To mint an NFT, you must pay a gas fee to process the transaction on the Ethereum blockchain, but you can mint your NFT on a different blockchain, called Polygon, to avoid paying gas fees. This option is available on OpenSea; it simply means that your NFT will trade on Polygon’s blockchain rather than Ethereum’s. Mintable also allows you to mint NFTs for free, without paying any gas fees.

13) Do I own an NFT if I screenshot it?

The answer is no. Non-fungible tokens are minted on blockchains such as Ethereum, Solana, and Polygon. Once a non-fungible token is minted, the transaction is recorded on the blockchain, and the contract or license is awarded to whoever holds that token in their wallet.

14) Why are people investing so much in NFTs?


Non-fungible tokens have won the hearts of people around the world, and they have given digital creators the recognition they deserve. One of the remarkable things about non-fungible tokens is that you can take a screenshot of one, but you don’t own it. That’s because when a non-fungible token is created, the transaction is stored on the blockchain, and the license or contract to hold the token is awarded to the person who has it in their digital wallet.

You can sell your work and creations by attaching a license on the blockchain, where ownership can be transferred. This lets you get exposure without losing full ownership of your work. Some of the most successful projects include CryptoPunks, Bored Ape Yacht Club, The Sandbox, and World of Women. These NFT projects have gained popularity globally and are owned by celebrities and other successful entrepreneurs. Owning one of these NFTs gives you an automatic ticket to exclusive business meetings and life-changing connections.

Final Thoughts

That’s a wrap. I hope you found this article enlightening; I’ve answered these questions to the best of my limited knowledge of NFTs. If you have any questions or suggestions, feel free to drop them in the comment section below. I also have a question for you: Is bitcoin an NFT? Let me know in the comments below.






Match ID: 16 Score: 2.86 source: www.crunchhype.com age: 294 days
qualifiers: 2.86 movie

Filter efficiency 97.792 (17 matches/770 results)


********** XKCD **********
return to top



What If 2 Gift Guide
BABIES OR LITERATURE BUT NOT BOTH: Baby shoes
Match ID: 0 Score: 1000.00 source: xkcd.com
qualifiers: 1000.00 xkcd

Paper Title
CONFLICT OF INTEREST STATEMENT: The authors hope these results are correct because we all want to be cool people who are good at science.
Match ID: 1 Score: 1000.00 source: xkcd.com
qualifiers: 1000.00 xkcd

Change in Slope
Squinting at a graph is fine for getting a rough idea of the answer, but if you want to pretend to know it exactly, you need statistics.
Match ID: 2 Score: 1000.00 source: xkcd.com
qualifiers: 1000.00 xkcd

Account Problems
My password is just every Unicode codepoint concatenated into a single UTF-8 string.
Match ID: 3 Score: 1000.00 source: xkcd.com
qualifiers: 1000.00 xkcd

Robotic Falcon Keeps Birds Away From Airports
Sun, 06 Nov 2022 14:00:00 +0000


Collisions with birds are a serious problem for commercial aircraft, costing the industry billions of dollars and killing thousands of animals every year. New research shows that a robotic imitation of a peregrine falcon could be an effective way to keep them out of flight paths.

Worldwide, so-called birdstrikes are estimated to cost the civil aviation industry almost US $1.4 billion annually. Nearby habitats are often deliberately made unattractive to birds, but airports also rely on a variety of deterrents designed to scare them away, such as loud pyrotechnics or speakers that play distress calls from common species.

However, the effectiveness of these approaches tends to decrease over time, as the birds get desensitized by repeated exposure, says Charlotte Hemelrijk, a professor on the faculty of science and engineering at the University of Groningen, in the Netherlands. Live hawks or blinding lasers are also sometimes used to disperse flocks, she says, but this is controversial as it can harm the animals, and keeping and training falcons is not cheap.

“The birds don’t distinguish [RobotFalcon] from a real falcon, it seems.”
—Charlotte Hemelrijk, University of Groningen

In an effort to find a more practical and lasting solution, Hemelrijk and colleagues designed a robotic peregrine falcon that can be used to chase flocks away from airports. The device is the same size and shape as a real hawk, and its fiberglass and carbon-fiber body has been painted to mimic the markings of its real-life counterpart.

Rather than flapping like a bird, the RobotFalcon relies on two small battery-powered propellers on its wings, which allows it to travel at around 30 miles per hour for up to 15 minutes at a time. A human operator controls the machine remotely from a hawk’s-eye perspective via a camera perched above the robot’s head.
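
Those two specs imply a modest patrol envelope. As a quick check (my arithmetic, using the article’s figures and assuming full-speed flight):

    # Rough per-sortie range from the article's specs (assumed constant speed).
    speed_km_h = 30 * 1.609           # ~48 km/h
    print(speed_km_h * 15 / 60)       # ~12 km of flight per 15-minute battery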

To see how effective the RobotFalcon was at scaring away birds, the researchers tested it against a conventional quadcopter drone over three months of field testing, near the Dutch city of Workum. They also compared their results to 15 years of data collected by the Royal Netherlands Air Force that assessed the effectiveness of conventional deterrence methods such as pyrotechnics and distress calls.

Video: Flock-herding Falcon Drone Patrols Airport Flight Paths (youtu.be)

In a paper published in the Journal of the Royal Society Interface, the team showed that the RobotFalcon cleared fields of birds faster and more effectively than the drone. It also kept birds away from fields longer than distress calls, the most effective of the conventional approaches.

There was no evidence of birds getting habituated to the RobotFalcon over three months of testing, says Hemelrijk, and the researchers also found that the birds exhibited behavior patterns associated with escaping from predators much more frequently with the robot than with the drone. “The way of reacting to the RobotFalcon is very similar to the real falcon,” says Hemelrijk. “The birds don’t distinguish it from a real falcon, it seems.”

Other attempts to use hawk-imitating robots to disperse birds have had less promising results, though. Morgan Drabik-Hamshare, a research wildlife biologist at the U.S. Department of Agriculture, and her colleagues published a paper in Scientific Reports last year describing how they pitted a robotic peregrine falcon with flapping wings against a quadcopter and a fixed-wing remote-controlled aircraft.

They found the robotic falcon was the least effective of the three at scaring away turkey vultures, with the quadcopter scaring the most birds off and the remote-controlled plane eliciting the quickest response. “Despite the predator silhouette, the vultures did not perceive the predator UAS [unmanned aircraft system] as a threat,” Drabik-Hamshare wrote in an email.

Zihao Wang, an associate lecturer at the University of Sydney, in Australia, who develops UAS for bird deterrence, says the RobotFalcon does seem to be effective at dispersing flocks. But he points out that its wingspan is nearly twice the diagonal length of the quadcopter it was compared with, which means it creates a much larger silhouette when viewed from the birds’ perspective. This means the birds could be reacting more to its size than its shape, and he would like to see the RobotFalcon compared with a similar size drone in the future.

The unique design also means the robot requires an experienced and specially trained operator, Wang adds, which could make it difficult to roll out widely. A potential solution could be to make the system autonomous, he says, but it’s unclear how easy this would be.

Hemelrijk says automating the RobotFalcon is probably not feasible, both due to strict regulations around the use of autonomous drones near airports as well as the sheer technical complexity. Their current operator is a falconer with significant experience in how hawks target their prey, she says, and creating an autonomous system that could recognize and target bird flocks in a similar way would be highly challenging.

But while the need for skilled operators is a limitation, Hemelrijk points out that most airports already have full-time staff dedicated to bird deterrence, who could be trained. And given the apparent lack of habituation and the ability to chase birds in a specific direction—so that they head away from runways—she thinks the robotic falcon could be a useful addition to their arsenal.


Match ID: 4 Score: 142.86 source: spectrum.ieee.org age: 21 days
qualifiers: 142.86 xkcd

Filter efficiency 99.351 (5 matches/770 results)

ABOUT THE PROJECT

RSS Rabbit links users to publicly available RSS entries.
Vet every link before clicking! The creators accept no responsibility for the contents of these entries.

Relevant

Fresh

Convenient

Agile

CONTACT

We're not prepared to take user feedback yet. Check back soon!

rssRabbit quadric