Date/Time of Last Update: Sun Jun 26 00:00:31 2022 UTC
********** CLIMATE **********
Foetus fronts legal challenge over emissions in South Korea
Fri, 24 Jun 2022 10:21:03 GMT
Lawyers representing 20-week-old foetus allege state is breaching rights of future generations
A 20-week-old foetus is fronting a legal challenge in South Korea that argues the state is breaching the rights of future generations by not doing enough to cut national emissions.
Parents and lawyers representing the foetus, as well as 61 babies and children under 11, claim national carbon targets do not go far enough to stop runaway climate change and that this is unconstitutional.
Match ID: 0 Score: 30.00 source: www.theguardian.com age: 1 day
qualifiers: 15.00 climate change, 15.00 carbon
The Nightmare Politics and Sticky Science of Hacking the Climate
Wed, 22 Jun 2022 14:36:14 +0000
Spraying aerosols and sucking carbon out of the air would bring down temperatures, yes. But the unintended consequences of geoengineering could be enormous.
Match ID: 1 Score: 25.71 source: www.wired.com age: 3 days
qualifiers: 12.86 climate change, 12.86 carbon
AI Can Help Make Recycling Better
Sat, 25 Jun 2022 15:02:00 +0000
Garbage is a global problem that each of us contributes to. Since the 1970s, we've all been told we can help fix that problem by assiduously recycling bottles and cans, boxes and newspapers.
So far, though, we haven't been up to the task. Only 16 percent of the 2.1 billion tonnes of solid waste that the world produces every year gets recycled. The U.S. Environmental Protection Agency estimates that the United States recycled only about 32 percent of its garbage in 2018, putting the country in the middle of the pack worldwide. Germany, on the high end, captures about 65 percent, while Chile and Turkey barely do anything, recycling a mere 1 percent of their trash, according to a 2015 report by the Organisation for Economic Co-operation and Development (OECD).
Here in the United States, of the 32 percent of the trash that we try to recycle, about 80 to 95 percent actually gets recycled, as Jason Calaiaro of AMP Robotics points out in “AI Takes a Dumpster Dive.” The technology that Calaiaro’s company is developing could move us closer to 100 percent. But it would have no effect on the two-thirds of the waste stream that never makes it to recyclers.
Certainly, the marginal gains realized by AI and robotics will help the bottom lines of recycling companies, making it profitable for them to recover more useful materials from waste. But to make a bigger difference, we need to address the problem at the beginning of the process: Manufacturers and packaging companies must shift to more sustainable designs that use less material or more recyclable ones.
According to the Joint Research Centre of the European Commission, more than “80 percent of all product-related environmental impacts are determined during the design phase of a product.” One company that applies AI at the start of the design process is Digimind GmbH based in Berlin. As CEO Katharina Eissing told Packaging Europe last year, Digimind’s AI-aided platform lets package designers quickly assess the outcome of changes they make to designs. In one case, Digimind reduced the weight of a company’s 1.5-liter plastic bottles by 13.7 percent, a seemingly small improvement that becomes more impressive when you consider that the company produces 1 billion of these bottles every year.
That’s still just a drop in the polyethylene terephthalate bucket: The world produced an estimated 583 billion PET bottles last year, according to Statista. To truly address our global garbage problem, our consumption patterns must change: canteens instead of single-use plastic bottles, compostable paper boxes instead of plastic clamshell containers, reusable shopping bags instead of “disposable” plastic ones. And engineers involved in product design need to develop packaging free of PET, polystyrene, and polycarbonate, which break down into tiny particles called microplastics that researchers are now finding in human blood and feces.
As much as we may hope that AI can solve our problems for us, that’s wishful thinking. Human ingenuity got us into this mess and humans will have to regulate, legislate, and otherwise incentivize the private sector to get us out of it.
Match ID: 2 Score: 15.00 source: spectrum.ieee.org age: 0 days
qualifiers: 15.00 carbon
A Warming Climate Takes a Toll on the Vanishing Rio Grande
Sat, 25 Jun 2022 12:00:00 +0000
Rising temperatures and an unprecedented drought pose a grave and growing peril to the river and its ecosystems.
Match ID: 3 Score: 15.00 source: www.wired.com age: 0 days
qualifiers: 15.00 climate change
Australia’s first fixed pill testing site to launch in Canberra with hopes of sparking a national initiative
Sat, 25 Jun 2022 08:02:30 GMT
Organisers say the centre is a ‘real watershed’ moment and will open two nights a week from 19 July
Australia’s first fixed pill testing site will be up and running within weeks, and the organisers hope it could be the start of a national program.
A pill testing trial at Canberra’s Groovin the Moo festival in 2019 potentially saved seven lives after the program detected “toxic” chemicals mixed into drugs.
Match ID: 4 Score: 15.00 source: www.theguardian.com age: 0 days
qualifiers: 15.00 toxic
How to Hasten India’s Transition Away From Coal
Sat, 25 Jun 2022 03:00:00 +0000
The rising threat of global warming requires that every country act now. The question is how much any one country should do.
India is 126th in the world in per capita carbon dioxide emissions, according to a 2020 European Union report. One might argue that the onus of reversing global warming should fall on the developed world, which on a per capita basis consumes much more energy and emits significantly more greenhouse gases. However, India ranks third in the world in total greenhouse gas emissions—the result of having the second-largest population and being third largest in energy consumption.
As India’s GDP and per capita income continue to climb, so too will its energy consumption. For instance, just 8 percent of Indian homes had air-conditioning in 2018, but that share is likely to rise to 50 percent by 2050. The country’s electricity consumption in 2019 was nearly six times as great as in 1990. Greenhouse gas emissions will certainly grow too, because India’s energy generation is dominated by fossil fuels—coal-fired power plants for electricity, coal- and gas-fired furnaces for industrial heating, liquid petroleum gas for cooking, and gasoline and diesel for transportation.
Fossil fuels dominate even though renewable energy generation in many parts of the world now costs less than fossil-fuel-based electricity. While electricity from older coal plants in India costs 2.7 U.S. cents per kilowatt-hour and 5.5 cents from newer plants that have additional pollution-control equipment, the cost of solar energy has dropped to 2.7 cents per kilowatt-hour, and wind power to 3.4 cents per kilowatt-hour. As renewable energy has steadily gotten cheaper, the installed capacity has grown to 110 gigawatts. That amounts to 27 percent of capacity, compared to coal’s share, which is 52 percent. The government of India has set a target of 450 GW of renewable energy capacity by 2030.
Yet in terms of energy generated, renewable energy in India still falls short. In 2021, about 73 percent of the country’s electricity was produced from coal, and only 9.6 percent from solar and wind power. That’s because solar and wind power aren’t available around the clock, so the proportion of the installed capacity that gets used is just 20 to 30 percent. For coal, the capacity utilization rate can go as high as 90 percent.
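The gap between capacity share and generation share follows directly from those capacity factors. Here is a minimal sketch of the arithmetic, using the article's 20-to-30-percent utilization range for solar and wind; the coal capacity and its 60 percent capacity factor are illustrative assumptions, not figures from the article:

```python
HOURS_PER_YEAR = 8760

def annual_generation_twh(capacity_gw, capacity_factor):
    """Annual energy in terawatt-hours from capacity (GW) and capacity factor."""
    return capacity_gw * capacity_factor * HOURS_PER_YEAR / 1000

renewables = annual_generation_twh(110, 0.25)  # 110 GW solar/wind at ~25% CF (article's range)
coal = annual_generation_twh(210, 0.60)        # ~52% of capacity; 60% CF is an assumption

share = renewables / (renewables + coal)
print(f"renewables: {renewables:.0f} TWh, coal: {coal:.0f} TWh, share: {share:.0%}")
```

With these rough inputs, the renewable generation share lands well below the 27 percent capacity share, illustrating the mechanism; the actual 9.6 percent figure also reflects hydro, nuclear, and gas in the generation mix.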
As renewable energy capacity grows, the only way to drastically reduce coal in the electricity mix is by adding energy storage. Although some of the newer solar plants and wind farms are being set up with large amounts of battery storage, it could be decades before such investments have a significant impact. But there is another way for India to move faster toward its decarbonization goal: by focusing the renewable-energy push in India’s commercial and industrial sectors.
India has some 40,000 commercial complexes, which house offices and research centers as well as shopping centers and restaurants. Together they consume about 8 percent of the country’s electricity. The total footprint of such complexes is expected to triple by 2030, compared to 2010. To attract tenants, the managers of these complexes like to project their properties as users of renewable energy.
A 2-megawatt solar plant located 500 kilometers from IITM Research Park provides dedicated electricity to the complex. A 2.1-MW wind farm now under construction will feed IITMRP through a similar arrangement.
IIT Madras
India’s industrial sector, meanwhile, consumes about 40 percent of the country’s electricity, and many industrial operators would also be happy to adopt a greater share of renewable energy if they can see a clear return on investment.
Right now, many of these complexes use rooftop solar, but limited space means they can only get a small share of their energy that way. These same complexes can, however, leverage a special power-transmission and “wheeling” policy that’s offered in India. Under this arrangement, an independent power-generation company sets up solar- or wind-power plants for multiple customers, with each customer investing in the amount of capacity it needs. In India, this approach is known as a group-captive model. The generating station injects the electricity onto the grid, and the same amount is immediately delivered, or wheeled in, to the customer, using the utility’s existing transmission and distribution network. A complex can add energy storage to save any excess electricity for later use. If enough commercial, industrial, and residential complexes adopt this approach, India could rapidly move away from coal-based electricity and meet a greater share of its energy needs with renewable energy. Our group at the Indian Institute of Technology Madras has been developing a pilot to showcase how a commercial complex can benefit from this approach.
The commercial complex known as the IITM Research Park, or IITMRP, in Chennai, is a 110,000-square-meter facility that houses R&D facilities for more than 250 companies, including about 150 startups, and employs about 5,000 workers. It uses an average of 40 megawatt-hours of electricity per weekday, or about 12 gigawatt-hours per year. Within the campus, there is 1 megawatt of rooftop solar, which provides about 10 percent of IITMRP’s energy. The complex is also investing in 2 MW of captive solar and 2.1 MW of captive wind power off-site, the electricity from which will be wheeled in. This will boost the renewable-energy usage to nearly 90 percent in about three years. Should the local power grid fail, the complex has backup diesel generators.
Of course, the generation of solar and wind energy varies from minute to minute, day to day, and season to season. The total generated energy will rarely meet IITMRP’s demand exactly; it will usually either exceed demand or fall short.
To get closer to 100 percent renewable energy, the complex needs to store some of its wind and solar power. To that end, the complex is building two complementary kinds of energy storage. The first is a 2-MWh, 750-volt direct-current lithium-ion battery facility. The second is a chilled-water storage system with a capacity equivalent to about 2.45 MWh. Both systems were designed and fabricated at IITMRP.
The battery system’s stored electricity can be used wherever it’s needed. The chilled-water system serves a specific, yet crucial function: It helps cool the buildings. For commercial complexes in tropical climates like Chennai’s, nearly 40 percent of the energy goes toward air-conditioning, which can be costly. In the IITMRP system, a central heating, ventilation, and air-conditioning (HVAC) system chills water to about 6 °C, which is then circulated to each office. A 300-cubic-meter underground tank stores the chilled water for use within about 6 to 8 hours. That relatively short duration is because the temperature of the chilled water in the tank rises about 1 °C every 2 hours.
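The tank's storage capacity can be cross-checked from first principles. This is a sketch, assuming water's standard specific heat and a usable temperature swing of about 14 °C (chilled to 6 °C, returned near 20 °C); the swing is an assumption, not a figure stated in the article:

```python
volume_m3 = 300         # underground tank volume, from the article
density = 1000          # kg per cubic meter of water
specific_heat = 4186    # J/(kg*K), standard value for water
delta_t = 14            # K, assumed usable temperature swing (~6 to ~20 degC)

energy_mj = volume_m3 * density * specific_heat * delta_t / 1e6
print(f"stored cooling capacity: about {energy_mj:,.0f} MJ")
```

The result, roughly 17,600 MJ, is consistent with the heat-transfer capacity the article quotes for the system.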
The IITMRP’s chilled-water system provides air-conditioning to the complex. Water is chilled to about 6 °C and then stored in this 300-cubic-meter underground tank for later circulation to the offices.
IIT Madras
The heat transfer capacity of the chilled-water system is 17,500 megajoules, which as mentioned is equivalent to 2.45 MWh of battery storage. The end-to-end round-trip energy loss is about 5 percent. And unlike with a battery system, you can “charge” and “discharge” the chilled-water tank several times a day without diminishing its life span.
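The 17,500 MJ of cooling corresponds to about 4.9 MWh of thermal energy, so the 2.45-MWh battery equivalence reflects the chiller electricity that the stored cooling displaces. A sketch of the conversion, where the coefficient of performance (COP) of roughly 2 is an assumption that makes the two figures consistent:

```python
MJ_PER_MWH = 3600   # 1 MWh = 3,600 MJ

thermal_mj = 17_500      # heat-transfer capacity, from the article
cop = 2.0                # assumed chiller coefficient of performance

thermal_mwh = thermal_mj / MJ_PER_MWH   # ~4.86 MWh of stored cooling
electrical_mwh = thermal_mwh / cop      # electricity a chiller would need
print(f"battery-equivalent storage: about {electrical_mwh:.2f} MWh")
```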
Although energy storage adds to the complex’s capital costs, our calculations show that it ultimately reduces the cost of power. The off-site solar and wind farms are located, respectively, 500 and 600 kilometers from IITMRP. The cost of the power delivered to the complex includes generation (including transmission losses) of 5.14 cents/kWh as well as transmission and distribution charges of 0.89 cents/kWh. In addition, the utilities that supply the solar and wind power impose a charge to cover electricity drawn during times of peak demand. On average, this demand charge is about 1.37 cents/kWh. Thus, the total generation cost for the solar and wind power delivered to IITMRP is about 7.4 cents/kWh.
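The delivered-cost figure is simply the sum of the three components the article lists:

```python
generation = 5.14      # cents/kWh, generation including transmission losses
wheeling = 0.89        # cents/kWh, transmission and distribution charges
demand_charge = 1.37   # cents/kWh, average peak-demand charge

total = generation + wheeling + demand_charge
print(f"delivered cost: {total:.1f} cents/kWh")
```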
There’s also a cost associated with energy storage. Because most of the renewable energy coming into the complex will be used immediately, only the excess needs to be stored—about 30 percent of the total, according to our estimate.
So the average cost of round-the-clock renewable energy works out to 9.3 cents/kWh, taking into account the depreciation, financing, and operation costs over the lifetime of the storage. In the future, as the cost of energy storage continues to decline, the average cost will remain close to 9 cents/kWh, even if half of the energy generated goes to storage. And the total energy cost could drop further with declines in interest rates, the cost of solar and wind energy, or transmission and demand charges.
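The article does not state the per-kWh cost of storage directly, but it can be backed out from the numbers given. A sketch, ignoring round-trip losses:

```python
delivered = 7.4        # cents/kWh before storage (sum of the charges above)
blended = 9.3          # cents/kWh for round-the-clock supply with storage
stored_fraction = 0.30 # share of energy routed through storage

# The blended rate implies each stored kWh adds this much levelized cost:
storage_adder = (blended - delivered) / stored_fraction
print(f"implied storage cost: about {storage_adder:.1f} cents per stored kWh")
```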
For now, the rate of 9.3 cents/kWh compares quite favorably to what IITMRP pays for regular grid power—about 15 cents/kWh. That means, with careful design, the complex can approach 100 percent renewable energy and still save about a third on the energy costs that it pays today. Keep in mind that grid power in India primarily comes from coal-based generation, so for IITMRP and other commercial complexes, using renewable energy plus storage has a big environmental upside.
IITM Research Park’s lithium-ion battery facility stores excess electricity for use after the sun goes down or when there’s a dip in wind power.
IIT Madras
Electricity tariffs are lower for India’s industrial and residential complexes, so the cost advantage of this approach may not be as pronounced in those settings. But renewable energy can also be a selling point for the owners of such complexes—they know many tenants like having their business or home located in a complex that’s green.
Although IITMRP’s annual consumption is about 12 GWh, the energy usage, or load, varies slightly from month to month, from 970 to 1,100 MWh. Meanwhile, the energy generated from the captive off-site solar and wind plants and the rooftop solar plant will vary quite a bit more. The top chart ("Monthly Load and Available Renewable Energy at IITMRP") shows the estimated monthly energy generated and the monthly load.
As is apparent, there is some excess energy available in May and July, and an overall energy deficit at other times. In October, November, and December, the deficit is substantial, because wind-power generation tends to be lowest during those months. Averaged over a year, the deficit works out to be 11 percent; the arrangement we’ve described, in other words, will allow IITMRP to obtain 89 percent of its energy from renewables.
For the complex to reach 100 percent renewable energy, it’s imperative that any excess energy be stored and then used later to make up for the renewable energy deficits. When the energy deficits are particularly high, the only way to boost renewable energy usage further will be to add another source of generation, or else add long-term energy storage that’s capable of storing energy over months. Researchers at IITMRP are working on additional sources of renewable energy generation, including ocean, wave, and tidal energy, along with long-term energy storage, such as zinc-air batteries.
IITM Research Park’s electricity load and available renewable energy vary across months [top] and over the course of a single day [bottom]. To make up for the deficit in renewable energy, especially during October, November, and December, additional renewable generation or long-term energy storage will be needed. At other times of the year, the available renewable energy tends to track the load closely throughout the day, with any excess energy sent to storage.
For other times of the year, the complex can get by on a smaller amount of shorter-term storage. How much storage? If we look at the energy generated and the load on an hourly basis over a typical weekday, we see that the total daily generation generally matches the total daily load, but with small fluctuations in surplus and deficit. Those fluctuations represent the amount of energy that has to move in and out of storage. In the bottom chart ("Daily Load and Available Renewable Energy at IITMRP"), the cumulative deficit peaks at 1.15 MWh, and the surplus peaks at 1.47 MWh. Thus, for much of the year, a storage size of 2.62 MWh should ensure that no energy is wasted.
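The 2.62-MWh sizing is the span between the deepest cumulative deficit and the highest cumulative surplus over the day. A minimal sketch of that logic; the hourly profile below is invented for illustration (the article's real data give the 1.15- and 1.47-MWh peaks):

```python
def required_storage_mwh(net_hourly):
    """Size storage as the span between the highest cumulative surplus and
    the deepest cumulative deficit of a net (generation minus load) series."""
    balance = 0.0
    peak_surplus = 0.0
    peak_deficit = 0.0
    for net in net_hourly:
        balance += net
        peak_surplus = max(peak_surplus, balance)
        peak_deficit = min(peak_deficit, balance)
    return peak_surplus - peak_deficit

# Invented example day (MWh per hour, generation minus load):
example = [-0.2, -0.3, 0.1, 0.4, 0.5, 0.3, -0.1, -0.4, -0.3, 0.2, 0.1, -0.3]
print(f"storage needed: {required_storage_mwh(example):.2f} MWh")
```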
This is a surprisingly modest amount of storage for a complex as large as IITMRP. It’s possible because for much of the year, the load follows a pattern similar to the renewable energy generated. That is, the load peaks during the hours when the sun is out, so most of the solar energy is used directly, with a small amount of excess being stored for use after the sun goes down. The load drops during the evening and at night, when the wind power is enough to meet most of the complex’s demand, with the surplus again going into storage to be used the next day, when demand picks up.
On weekends, the demand is, of course, much less, so more of the excess energy can be stored for later use on weekdays. Eventually, the complex’s lithium-ion battery storage will be expanded to 5 MWh, to take advantage of that energy surplus. The batteries plus the chilled-water system will ensure that enough storage is available to take care of weekday deficits and surpluses most of the time.
As mentioned earlier, India has some 40,000 commercial complexes like IITMRP, and that number is expected to grow rapidly. Deploying energy storage for each complex and wheeling in solar and wind energy make sense both financially and environmentally. Meanwhile, as the cost of energy storage continues to fall, industrial complexes and large residential complexes could be enticed to adopt a similar approach. In a relatively short amount of time—a matter of years, rather than decades—renewable energy usage in India could rise to about 50 percent.
On the way to that admittedly ambitious goal, the country’s power grids will also benefit from the decentralized energy management within these complexes. The complexes will generally meet their own supply and demand, enabling the grid to remain balanced. And, with thousands of complexes each deploying megawatts’ worth of stationary batteries and chilled-water storage, the country’s energy-storage industry will get a big boost. Given the government’s commitment to expanding India’s renewable capacity and usage, the approach we’re piloting at IITMRP will help accelerate the push toward cleaner and greener power for all.
This article appears in the July 2022 print issue as “Weaning India from Coal.”
Match ID: 5 Score: 15.00 source: spectrum.ieee.org age: 0 days
qualifiers: 15.00 carbon
Weather tracker: Europe’s heatwave sends temperature records tumbling
Fri, 24 Jun 2022 12:54:17 GMT
Countries across the continent have experienced all-time highs, raising fears of wildfires
A hot topic over the past couple of weeks has been the heatwave that has been scorching large parts of Europe. This week has been no different, with more than 200 monthly temperature records broken across France, and countries including Switzerland, Austria, Germany and Spain recording all-time highs. For example, Cazaux and Bordeaux set all-time records for the month, at 41.9C and 40.5C respectively.
One consequence of this prolonged heat is the drying out of soil and vegetation, permitting the development of wildfires across Spain, with tens of thousands of acres of land likely to be affected. According to scientists at the University of Lleida in Spain, climate change will extend the duration of fire seasons across many regions of Europe.
Match ID: 6 Score: 15.00 source: www.theguardian.com age: 1 day
qualifiers: 15.00 climate change
Aphorisms for the Anthropocene
Fri, 24 Jun 2022 10:00:00 +0000
Don’t count your chickens before they’ve been inoculated with ten different antibiotics that allow them to grow to unnatural proportions.
Match ID: 7 Score: 15.00 source: www.newyorker.com age: 1 day
qualifiers: 15.00 climate change
Megatruck Runs on the Lightest Gas
Thu, 23 Jun 2022 15:00:00 +0000
The Big Picture features technology through the lens of photographers.
Every month, IEEE Spectrum selects the most stunning technology images recently captured by photographers around the world. We choose images that reflect an important advance, or a trend, or that are just mesmerizing to look at. We feature all images on our site, and one also appears on our monthly print edition.
Enjoy the latest images, and if you have suggestions, leave a comment below.
Megatruck Runs on the Lightest Gas
Big things are happening in the world of hydrogen-powered vehicles. One of the latest monumental happenings is the debut of Anglo American’s 510-ton hydrogen-powered mining truck. The behemoth, which will put in work at a South African platinum mine, will replace an entire 40-truck fleet that services the mine. Together, those trucks consume about one million liters of diesel fuel each year. The new truck, whose power plant features eight 100-kilowatt hydrogen fuel cells and a 1.2-megawatt battery pack, is just the first earth-moving step in Anglo American’s NuGen project aimed at replacing its global fleet of 400 diesel mining trucks with hydrogen-powered versions. According to the company’s estimates, the switch will be the equivalent of taking half a million diesel-fueled passenger cars off the road.
Waldo Swiegers/Bloomberg/Getty Images
Snooping on penguins for clues regarding how they relate to their polar environment is a job for machines and not men. That is the conclusion reached by a team of researchers that is studying how climate change is threatening penguins’ icy Antarctic habitat and puzzling out how to protect the species that are native to both polar regions. Rather than subjecting members of the team to the bitter cold weather in penguins’ neighborhoods, they’re studying these ecosystems using hybrid autonomous and remote-controlled Husky UGV robots. Four-wheeled robots like the one pictured here are equipped with arrays of sensors such as cameras and RFID scanners that read ID tags in chipped penguins. These allow the research team, representing several American and European research institutes, to track individual penguins, assess how successfully they are breeding, and get a picture of overall penguin population dynamics, all from their labs and offices in more temperate climates.
This is not a hailstorm with pieces of ice that are straight-edged instead of ball-shaped. The image is meant to illustrate an innovation in imaging that will allow cameras to capture stunning details of objects up close and far afield at the same time. The metalens is inspired by the compound eyes of a long-extinct invertebrate sea creature that could home in on distant objects and not lose focus on things that were up close. In a single photo, the lens can produce sharp images of objects as close as 3 centimeters and as far away as 1.7 kilometers. Previously, image resolution suffered as depth of field increased, and vice versa. But researchers from several labs in China and at the National Institute of Standards and Technology (NIST) in Gaithersburg, Md., have been experimenting with metasurfaces, which are surfaces covered with forests of microscopic pillars (the array of ice-cube-like shapes in the illustration). Tuning the size and shape of the pillars and arranging them so they are separated by distances shorter than the wavelength of light makes the metasurfaces capable of capturing images with unprecedented depth of field.
Painters specializing in automobile detailing might want to begin seeking out new lines of work. Their art may soon be the exclusive province of a robotic arm that can replicate images drawn on paper and in computer programs with unrivaled precision.
ABB’s PixelPaint computerized arm makes painting go much faster than is possible with a human artisan because its 1,000 paint nozzles deliver paint to a car’s surface much the same way that an inkjet printer deposits pigment on a sheet of paper. Because there’s no overspray, there is no need for the time-consuming masking and tape-removal steps. This level of precision, which puts 100 percent of the paint on the car, also eliminates paint waste, so paint jobs are less expensive. Heretofore, artistic renderings still needed the expert eye and practiced hand of a skilled artist. But PixelPaint has shown itself capable of laying down designs with a level of intricacy human eyes and hands cannot execute.
Match ID: 8 Score: 15.00 source: spectrum.ieee.org age: 2 days
qualifiers: 15.00 climate change
Rep. Cori Bush Boosts Biden’s Efforts to Fight Climate Change With Executive Authority
Wed, 22 Jun 2022 20:39:06 +0000
Bush’s push would give the White House $100 million in clean energy funding to use through the Defense Production Act.
Match ID: 9 Score: 12.86 source: theintercept.com age: 3 days
qualifiers: 12.86 climate change
The Magnet That Made the Modern World
Tue, 21 Jun 2022 22:14:58 +0000
For sheer drama and resonance, few tech breakthroughs can match the invention of the neodymium-iron-boron permanent magnet in the early 1980s. It’s one of the great stories of corporate intrigue: General Motors in the United States and Sumitomo in Japan independently conceived the technology and then worked in secret, racing to commercialize it, each unaware of the other’s efforts. The two project leaders—Masato Sagawa of Sumitomo and John Croat of GM—surprised each other by announcing their results at the same conference in Pittsburgh in 1983.
Up for grabs was a market potentially worth billions of dollars. The best permanent magnets at the time, samarium-cobalt, were strong and reliable but expensive. They were used in electric motors, generators, audio speakers, hard-disk drives, and other high-volume products. Today, some 95 percent of permanent magnets are neodymium-iron-boron. The global market for these magnets is expected to reach US $20 billion a year within a couple of years, as the automobile industry shifts toward electric vehicles and as utilities turn increasingly to wind turbines to meet growing demand.
IEEE recently honored Sagawa and Croat by awarding them its Medal for Environmental and Safety Technologies at the 2022 Vision, Innovation, and Challenges Summit. IEEE Spectrum spoke with the two inventors, including an hourlong interview with both of them (only the second time the two have been interviewed together). They revealed their reasons for zeroing in on the rare-earth element neodymium, the major challenges they faced in making a commercial magnet out of it, the extraordinary intellectual-property deal that allowed both GM and Sumitomo to market their magnets worldwide, and their opinions on whether there will ever be a successful permanent magnet that does not use rare-earth elements.
You were trying to make a cheaper magnet, as I understand it. You weren’t even necessarily trying to make a stronger one, although that turned out to be the case. What made you think you could make a cheaper magnet?
John Croat: Well, the problem with samarium-cobalt…they were an excellent magnet. They had good temperature properties. You’ve probably heard the phrase that rare earths aren’t really that rare, but samarium is one of the more rare ones. It constitutes only about 0.8 percent of the composition of the ores that are typically exploited today for rare earths. So it was a fairly expensive rare earth. And, of course, cobalt was very expensive. During my early years at General Motors Research Labs, there was a war in Zaire [now known as the Democratic Republic of the Congo], the Central African country that is a big cobalt supplier. And the price of cobalt went up to something like $45 a kilogram. Remember, this was in the 1970s, so it basically stopped our research on samarium-cobalt magnets.
Masato, what do you remember? What do you recall of the state of the permanent-magnet market and technology in the 1970s in Japan?
Masato Sagawa: I joined Fujitsu in 1972, so that was the same era as John. The company assigned me to improve the samarium-cobalt magnet, specifically its mechanical strength. But I wondered why there was no iron compound. Iron is much cheaper and far more abundant than cobalt, and iron has a higher magnetic moment than cobalt. So I thought that if I could produce a rare-earth iron magnet, it would have higher magnetic strength and much lower cost. I started to research rare-earth iron compounds, but my official subject at Fujitsu was samarium-cobalt. I worked hard on it and succeeded in developing a samarium-cobalt magnet with high strength. Then I asked the company to let me work on a rare-earth iron permanent magnet, but I was not allowed. Still, I had an idea: rare earth, iron, and a small amount of an additive element like carbon or boron, which are known to have very small atomic diameters. I studied rare-earth iron boron and rare-earth iron carbon. Underground, I did this research for several years, and I reached neodymium-iron-boron several years later, in 1982.
What was it that made you focus on neodymium, iron, and boron? Why those?
Croat: Well, of course, when samarium-cobalt magnets were developed, everyone in this field thought about developing a rare-earth-iron magnet because iron is virtually free compared to cobalt. Now, in terms of the rare earths, as I said, rare earths are not really that rare. The light rare earths, lanthanum, cerium, praseodymium, and neodymium, constitute about 90 percent of the composition of a typical rare-earth deposit…. So we knew at the start that if we wanted to make an economically viable magnet, both Dr. Sagawa and I realized that we had to make the permanent magnet from one of these four rare earths: lanthanum, cerium, neodymium, or praseodymium. The problem with lanthanum and cerium is that, as you know, the lanthanide series is formed by filling the 4F electron shell. However, lanthanum and cerium, the two most abundant rare earths, have no 4F electrons. And we knew by this time, based on the work with samarium-cobalt magnets, that one of the things that you had to have was these 4F electrons to give you the coercivity for the material.
Can you give us a quick definition of coercivity?
Croat: Coercivity is the resistance to demagnetization. In a permanent magnet, as you say, the moments are all aligned parallel. If you put a magnetic field in the reverse direction, the coercivity will resist the magnet flipping into the opposite direction.
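Croat's definition can be captured in a toy model. The sketch below is a deliberate simplification of single-domain behavior (the function, the field sweep, and the 3.5-kilo-oersted threshold echoing Art Clark's figure mentioned later are illustrative assumptions, not anything stated in the interview): the moment keeps its sign until the reverse field exceeds the coercivity, at which point it flips.

```python
def magnetization(h_applied, m_state, h_c):
    """Toy single-domain model: the moment keeps its sign until a
    reverse field stronger than the coercivity h_c flips it."""
    if m_state > 0 and h_applied < -h_c:
        return -abs(m_state)
    if m_state < 0 and h_applied > h_c:
        return abs(m_state)
    return m_state

# Sweep the applied field downward past the coercivity (kilo-oersteds).
m = 1.0
history = []
for h in [5, 2, 0, -2, -3.4, -3.6, -5]:
    m = magnetization(h, m, h_c=3.5)
    history.append((h, m))
# The moment resists reversal at -3.4 kOe and flips only at -3.6 kOe.
```

A real magnet's reversal is gradual and depends on microstructure, which is exactly why the cellular and grain-boundary structures discussed later in the interview matter.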
We knew that we wanted iron instead of cobalt…. And both of us set out with the intention of making a rare-earth iron permanent magnet from neodymium or praseodymium. The problem was that there were no intermetallic compounds available. Unlike the rare-earth cobalt phase diagrams, which contain lots of interesting intermetallic compounds, the rare-earth-iron phase diagrams do not contain suitable, usable intermetallic compounds.
In plain language, what is an intermetallic phase, and why is it important?
Croat: An intermetallic compound or an intermetallic phase is a phase with a fixed ratio of the components. Like, terbium-iron two has one terbium and two irons. And it sits on a crystal lattice in very specific sites on the lattice. You have to have that. That’s one of the quintessential requirements for any rare-earth transition-metal permanent magnet.
It provides the structure and stability you need or the reproducibility?
Croat: All of that. In other words, it’s the thing that holds the magnetic moment in place in the structure. You have to have this crystal structure.
So what was the solution?
Croat: The fact that there was no intermetallic compound was a baffling problem for some time. But then, in 1976, I and a couple of colleagues saw a paper by Art Clark. He was working at the Naval Surface Weapons Laboratory. He had taken a sputtered sample of terbium iron two [TbFe2] and annealed it at increasingly higher temperatures. And at about 350 °C, the coercivity shot up to about 3.5 kilo oersted. And we surmised, and I think correctly at the time, that what had happened was that during the crystallization process, a metastable phase had formed. This was exciting because this is the first time anyone had ever developed a coercivity in a rare-earth iron material. It was also exciting because of the fact that TbFe2 is a cubic material. And a cubic material should not develop coercivity. You have to have a crystal structure with a uniaxial crystal lattice, like hexagonal, rhombohedral, or tetragonal.
And so I started out with that thesis: to create magnetically hard metastable phases that are practical for permanent magnets. And by using rapid solidification, I started making melt-spun materials and crystallizing them. And it worked very well. I developed very high coercivities right away. The problem with these materials was that they were all unstable. When I heated them up to about 450 °C, they would decompose into their equilibrium structure, and the coercivity would go away. So I began to add things to see if I could make them more stable. And one of the things I added was boron. And one day I found that when I heated my sample containing boron, it did not decompose into its equilibrium structure. And so I knew that I had discovered a ternary neodymium-iron-boron intermetallic phase, a very interesting, technically important intermetallic phase. And it turns out that Masato discovered the same one [laughter].
Sagawa-san, you mentioned that you were interested in a sintering process, which was similar to the process that was then being used to manufacture samarium-cobalt magnets.... When you were working on a way to make neodymium-iron-boron magnets using sintering, did you encounter specific challenges that were difficult, that took a lot of effort to solve?
Sagawa: I was not able to give coercivity to the neodymium-iron-boron alloy, and I tried many processes. But sintering is a good approach because, to give coercivity to the alloy, you have to make a cellular structure in it, and sintering is a very good way to produce that structure: first you make a single-crystal powder, you align the powder, and then you sinter it. During sintering, the cellular structure forms automatically.
So I tried to form the cellular structure. I tested many, many kinds of elements, starting from copper, which is used in the case of samarium-cobalt magnets. Starting from copper, I tested many, many additive elements, almost throughout the periodic table. But I was not able to produce coercivity with any of these additional elements. And at last, I found a good additive element. It's not another element: it's neodymium itself. The additional neodymium gives rise to the cellular structure, forming a grain-boundary region around the neodymium-iron-boron grains. So I succeeded in giving coercivity to the neodymium-iron-boron by sintering with a neodymium-rich composition. And I succeeded in developing a neodymium-iron-boron sintered magnet with record-high BH maximum [a measure of the maximum magnetic energy that can be stored in a magnet] in the world. It was in 1982.
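The "BH maximum" Sagawa mentions is quoted either in megagauss-oersteds (MGOe) or in kilojoules per cubic meter, and the conversion between the two follows directly from the unit definitions. The representative energy products below are my own illustrative assumptions, not figures from the interview.

```python
import math

# 1 MGOe = 1e6 G*Oe; with 1 G = 1e-4 T and 1 Oe = 1000/(4*pi) A/m,
# 1 MGOe = 100/(4*pi) kJ/m^3, about 7.958 kJ/m^3.
KJ_PER_M3_PER_MGOE = 100 / (4 * math.pi)

def mgoe_to_kj_per_m3(bh_mgoe):
    """Convert an energy product from MGOe to kJ/m^3."""
    return bh_mgoe * KJ_PER_M3_PER_MGOE

# Representative (assumed) energy products for three magnet families:
ferrite = mgoe_to_kj_per_m3(4)    # hard ferrite
smco = mgoe_to_kj_per_m3(30)      # samarium-cobalt
ndfeb = mgoe_to_kj_per_m3(50)     # sintered Nd-Fe-B
```

The roughly order-of-magnitude gap between ferrite and Nd-Fe-B is what made the miniaturization discussed later in the interview possible.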
This work is mostly happening in the late 1970s, early 1980s. You’re both working on almost the same problem on different sides of the world. Sagawa-san, when did you first find out that General Motors was also working on the same challenge that you were working on?
Masato Sagawa of Sumitomo [left] announced the invention of a revolutionary neodymium-iron-boron permanent magnet at a conference in Pittsburgh, in November 1983. At the same meeting, John Croat of General Motors announced the invention of a magnet using the exact same elements.
Masato Sagawa
Sagawa: It was when I made the first presentation at the MMM Conference, Magnetism and Magnetic Material Conference, held in Pittsburgh in 1983.
Croat: November 1983.
Sagawa: November 1983. At the same conference, John Croat and his group presented a paper on the same neodymium-iron-boron alloy magnets.
So for years, you both had been working on this problem, attacking the same problem. And you both found out about the other effort at the same conference in Pittsburgh in 1983?
That’s astounding. Did you talk to each other at that conference? Did you get together and say anything to each other?
Croat: I think we introduced ourselves to each other, but I don’t remember much more than that.
What do you recall, Sagawa-san? Do you recall any conversation with John at that meeting?
Sagawa: I remember that I saw John, but I don’t remember if we talked together or not.
Croat: I think it would have been logical if we did, but I cannot remember it. We probably considered ourselves competitors [laughter].
You both came up with independent means of manufacturing. General Motors came up with a technique called melt-spinning, and Sumitomo’s was a sintering process. They had different characteristics. The sintered magnets seem to have more structural strength or resilience. The GM magnets can be produced more inexpensively. They both found large market applications, somewhat different but still large uses. John, why don’t you take a crack at explaining what their market niches became and still are to this day?
Croat: Yes. The rapidly solidified materials are isotropic. And during the rapid solidification process, you form a magnetic powder. That powder is blended with an epoxy and made into a magnet. But it turned out that these magnets were ideal for making small ring magnets that go into micromotors like spindle motors for hard-disk drives or CD-ROMs or for stepper motors. So that has—
Croat: For robots and things of that nature, servo motors for robots, but also spindle and stepper motors for various applications. And that has been the primary market for these bonded magnets because making a thin-wall ring magnet by the sintering process is very difficult. They tend to crack and break apart. But in contrast, the sintered-magnet market, which is much bigger actually than the bonded-magnet market, has been used primarily for bigger motors, wind-turbine generators, MRIs. Most of the electric-vehicle motors are sintered magnets. So again, most of the market is motors. But the market is bigger for the sintered-magnet market than it is for the bonded-magnet market. But there are two distinctly different markets in general.
Sagawa: I think one of the most important applications of the neodymium-iron-boron magnet is the hard-disk drive. If neodymium-iron-boron had not been found, it would have been difficult to miniaturize the hard-disk drive. Before the appearance of the neodymium-iron-boron magnet, the hard-disk drive was very big, difficult for one person to lift, 10 or 20 kilograms or so. Now it has become very small. And this is because of the invention of the neodymium-iron-boron sintered magnet, which is used in the actuator motor. And also, the bonded neodymium magnet is used in the spindle motor that rotates the hard disk. This was a very important invention for the start of our IT society.
Hard-disk drives contain several neodymium permanent magnets. There’s one in the spindle motor that rotates the disk, and typically two others in the read-write arm, also known as the actuator arm (the triangular structure in the photo), which detects and writes data on the disk.
Getty Images
You had little or no contact with each other until this meeting in Pittsburgh in 1983, by which time you’d already established all your intellectual property. And yet there was a long-running—well, not that long-running, but a patent case between General Motors and Sumitomo. John, can you start off and tell us a little bit about what happened there?
Croat: Yes. I guess we didn’t mention it, but both Sumitomo and General Motors filed patents shortly after the invention of this material, which turned out to be early 1982, apparently within weeks of each other. But it turns out, because of patent law, the way patent law is written, General Motors ended up with the patents in North America, and Sumitomo ended up with the patents for the composition neodymium-iron-boron in Japan and Europe. General Motors had the neodymium-iron-boron composition in North America. This meant that neither company could market worldwide, and they had to market worldwide to be economically viable. So they actually had a dispute, of course. I don’t know if they actually sued each other. But anyway, they had a negotiation. And I remember being part of these negotiations where we ended up with an agreement where we cross-licensed each other, which allowed both companies to market the material worldwide—manufacture and market the material worldwide.
But you could only manufacture and market your type of material, which, in your case, was this melt-spun, rapid—
Croat: Solidification, melt-spinning.
Solidification. And Sumitomo had the sintering worldwide, North America, Asia, Europe, everywhere.
Croat: It turned out it was based on the particle size of the material. Sumitomo had the rights to manufacture magnets with a particle size greater than one micron, General Motors less than one micron.
Sagawa: Oh, you remember!
Croat: Yes. [Both laugh]
Right now, of course, there’s a lot of controversy over the fact that an enormous amount of the world’s market for rare-earth elements is controlled by China, the mining, the production, and so on. So many countries, particularly in Europe and North America, are looking to broaden their base of suppliers for rare earths. But at the same time, there’s this existing market for these magnets. So is this having an effect of any kind on the future directions of R&D in permanent magnets?
Croat: I am no longer close enough to the R&D to know what’s going on, but I think there has been no change. People are still interested in making permanent magnets primarily containing a rare earth.
I don’t see how they’re ever going to get the rare earth out of a rare-earth transition-metal magnet and make a good high-performance magnet. So the rare-earth supply problem is going to continue and will maybe even grow in the future as the market for these magnets grows. And I think the only way that they can overcome that is that Japan and Korea and Western Europe and North America will have to have some kind of government help to establish a rare-earth market outside of [China]. There are a lot of countries that have rare earths. India, for example, has rare earths. Australia and Canada have rare earths. The United States, of course, has several big deposits. But what happened was, of course, that the Chinese reduced the price back in the 1990s to the point where they drove everybody else out of business. So somehow, some political will has to be put forth to change the dynamics of the rare-earth market today.
Sagawa: I think it’s impossible to produce a high-grade magnet without rare earths. This was concluded recently. There was very active research on an iron-nickel compound that looked promising: it has high saturation magnetization and a very high anisotropy field. But recent research in Japan concluded that it’s impossible to produce a high-performance permanent magnet from this iron-nickel compound. And this was the last research subject on rare-earth-free compounds consisting only of the 3d [orbital] -electron elements, iron, cobalt, and nickel.
Match ID: 10 Score: 10.71 source: spectrum.ieee.org age: 4 days
qualifiers: 10.71 carbon
Satellite Imagery for Everyone
Sat, 19 Feb 2022 16:00:00 +0000
Every day, satellites circling overhead capture trillions of pixels of high-resolution imagery of the surface below. In the past, this kind of information was mostly reserved for specialists in government or the military. But these days, almost anyone can use it.
That’s because the cost of sending payloads, including imaging satellites, into orbit has dropped drastically. High-resolution satellite images, which used to cost tens of thousands of dollars, now can be had for the price of a cup of coffee.
What’s more, with the recent advances in artificial intelligence, companies can more easily extract the information they need from huge digital data sets, including ones composed of satellite images. Using such images to make business decisions on the fly might seem like science fiction, but it is already happening within some industries.
These underwater sand dunes adorn the seafloor between Andros Island and the Exuma islands in the Bahamas. The turquoise to the right reflects a shallow carbonate bank, while the dark blue to the left marks the edge of a local deep called Tongue of the Ocean. This image was captured in April 2020 using the Moderate Resolution Imaging Spectroradiometer on NASA’s Terra satellite.
Joshua Stevens/NASA Earth Observatory
Here’s a brief overview of how you, too, can access this kind of information and use it to your advantage. But before you’ll be able to do that effectively, you need to learn a little about how modern satellite imagery works.
The orbits of Earth-observation satellites generally fall into one of two categories: GEO and LEO. The former is shorthand for geosynchronous equatorial orbit. GEO satellites are positioned roughly 36,000 kilometers above the equator, where they circle in sync with Earth’s rotation. Viewed from the ground, these satellites appear to be stationary, in the sense that their bearing and elevation remain constant. That’s why GEO is said to be a geostationary orbit.
Such orbits are, of course, great for communications relays—it’s what allows people to mount satellite-TV dishes on their houses in a fixed orientation. But GEO satellites are also appropriate when you want to monitor some region of Earth by capturing images over time. Because the satellites are so high up, the resolution of that imagery is quite coarse, however. So these orbits are primarily used for observation satellites designed to track changing weather conditions over broad areas.
Being stationary with respect to Earth means that GEO satellites are always within range of a downlink station, so they can send data back to Earth in minutes. This allows them to alert people to changes in weather patterns almost in real time. Most of this kind of data is made available for free by the U.S. National Oceanographic and Atmospheric Administration.
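That "roughly 36,000 kilometers" figure falls straight out of Kepler's third law: a geosynchronous satellite must complete one orbit per sidereal day. A quick check, using standard constants and assuming a circular orbit:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1      # one rotation of Earth, in seconds
R_EARTH = 6378.137e3        # Earth's equatorial radius, m

# Kepler's third law for a circular orbit: a^3 = mu * T^2 / (4 * pi^2)
a = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (a - R_EARTH) / 1000

print(round(altitude_km))  # about 35,786 km above the equator
```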
In March 2021, the container ship Ever Given ran aground, blocking the Suez Canal for six days. This satellite image of the scene, obtained using synthetic-aperture radar, shows the kind of resolution that is possible with this technology.
The other option is LEO, which stands for low Earth orbit. Satellites placed in LEO are much closer to the ground, which allows them to obtain higher-resolution images. And the lower you can go, the better the resolution you can get. The company Planet, for example, increased the resolution of its recently completed satellite constellation, SkySat, from 72 centimeters per pixel to just 50 cm—an incredible feat—by lowering the orbits its satellites follow from 500 to 450 km and improving the image processing.
The best commercially available spatial resolution for optical imagery is 25 cm, which means that one pixel represents a 25-by-25-cm area on the ground—roughly the size of your laptop. A handful of companies capture data with 25-cm to 1-meter resolution, which is considered high to very high resolution in this industry. Some of these companies also offer data from 1- to 5-meter resolution, considered medium to high resolution. Finally, several government open-data programs offer optical data at 10-, 15-, 30-, and 250-meter resolutions for free. These include NASA/U.S. Geological Survey Landsat, NASA MODIS (Moderate Resolution Imaging Spectroradiometer), and ESA Copernicus. This imagery is considered low resolution.
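Those resolution figures translate directly into data volume. As a rough sense of scale, treating each pixel as a square with the stated ground-sample distance on a side:

```python
def pixels_per_km2(gsd_m):
    """Pixels needed to cover one square kilometer when each pixel
    spans gsd_m meters on a side (the ground-sample distance)."""
    return (1000 / gsd_m) ** 2

print(pixels_per_km2(0.25))  # 25-cm imagery: 16 million pixels per km^2
print(pixels_per_km2(10))    # 10-m imagery: 10,000 pixels per km^2
```

A factor of 40 in resolution is a factor of 1,600 in pixel count, which is why high-resolution providers image narrow strips rather than whole continents.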
Because the satellites that provide the highest-resolution images are in the lowest orbits, they sense less area at once. To cover the entire planet, a satellite can be placed in a polar orbit, which takes it from pole to pole. As it travels, Earth rotates under it, so on its next pass, it will be above a different part of Earth.
Many of these satellites don’t pass directly over the poles, though. Instead, they are placed in a near-polar orbit that has been specially designed to take advantage of a subtle bit of physics. You see, the spinning Earth bulges outward slightly at the equator. That extra mass causes the orbits of satellites that are not in polar orbits to shift or (technically speaking) to precess. Satellite operators often take advantage of this phenomenon to put a satellite in what’s called a sun-synchronous orbit. Such orbits allow the repeated passes of the satellite over a given spot to take place at the same time of day. Not having the pattern of shadows shift between passes helps the people using these images to detect changes.
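The precession described above can be made quantitative. For a circular orbit, the equatorial-bulge (J2) term makes the orbital plane drift at a rate proportional to the cosine of the inclination, and a sun-synchronous design chooses the inclination so that this drift equals exactly 360 degrees per year. A sketch using the standard J2 nodal-precession formula:

```python
import math

MU = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
R_E = 6378.137e3             # equatorial radius, m
J2 = 1.08262668e-3           # Earth's oblateness coefficient
SUN_SYNC_RATE = 2 * math.pi / (365.2422 * 86400)  # 360 deg/year, in rad/s

def sun_sync_inclination_deg(altitude_km):
    """Inclination of a circular sun-synchronous orbit at a given
    altitude, from dOmega/dt = -1.5 * n * J2 * (R_E/a)^2 * cos(i)."""
    a = R_E + altitude_km * 1000
    n = math.sqrt(MU / a**3)  # mean motion, rad/s
    cos_i = -SUN_SYNC_RATE / (1.5 * n * J2 * (R_E / a) ** 2)
    return math.degrees(math.acos(cos_i))

print(round(sun_sync_inclination_deg(500), 1))  # about 97.4 degrees
```

The inclination comes out slightly past 90 degrees, which is why these "near-polar" orbits are retrograde: only a slightly westward-leaning plane precesses eastward at the required rate.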
It usually takes 24 hours for a satellite in polar orbit to survey the entire surface of Earth. To image the whole world more frequently, satellite companies use multiple satellites, all equipped with the same sensor and following different orbits. In this way, these companies can provide more frequently updated images of a given location. For example, Maxar’s Worldview Legion constellation, launching later this year, includes six satellites.
After a satellite captures some number of images, all that data needs to be sent down to Earth and processed. The time required for that varies.
DigitalGlobe (which Maxar acquired in 2017) recently announced that it had managed to send data from a satellite down to a ground station and then store it in the cloud in less than a minute. That was possible because the image sent back was of the parking lot of the ground station, so the satellite didn’t have to travel between the collection point and where it had to be to do the data “dumping,” as this process is called.
In general, Earth-observation satellites in LEO don’t capture imagery all the time—they do that only when they are above an area of special interest. That’s because these satellites are limited in how much data they can send at one time. Typically, they can transmit data for only 10 minutes or so before they get out of range of a ground station. And they cannot record more data than they’ll have time to dump.
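That 10-minute constraint turns into a simple data budget. The downlink rate below is my own assumed figure for illustration; actual rates vary widely by satellite and band.

```python
def dumpable_gb(pass_minutes, downlink_mbps):
    """Gigabytes transferable during one ground-station pass at the
    given downlink rate (decimal units: 1 GB = 8,000 megabits)."""
    return pass_minutes * 60 * downlink_mbps / 8 / 1000

# An assumed 800-megabit-per-second link over a 10-minute pass:
print(dumpable_gb(10, 800))  # 60.0 GB per pass
```

Whatever the onboard sensors capture beyond that budget simply has to wait for the next pass, which is the bottleneck the new ground-station networks aim to relieve.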
Currently, ground stations are located mostly near the poles, the areas that satellites in polar orbits pass over most often. But we can soon expect distances to the nearest ground station to shorten, because both Amazon and Microsoft have announced intentions to build large networks of ground stations located all over the world. As it turns out, hosting the terabytes of satellite data that are collected daily is big business for these companies, which sell their cloud services (Amazon Web Services and Microsoft’s Azure) to satellite operators.
For now, if you are looking for imagery of an area far from a ground station, expect a significant delay—maybe hours—between capture and transmission of the data. The data will then have to be processed, which adds yet more time. The fastest providers currently make their data available within 48 hours of capture, but not all can manage that. While it is possible, under ideal weather conditions, for a commercial entity to request a new capture and get the data it needs delivered the same week, such quick turnaround times are still considered cutting edge.
The best commercially available spatial resolution is 25 centimeters for optical imagery, which means that one pixel represents something roughly the size of your laptop.
I’ve been using the word “imagery,” but it’s important to note that satellites do not capture images the same way ordinary cameras do. The optical sensors in satellites are calibrated to measure reflectance over specific bands of the electromagnetic spectrum. This could mean they record how much red, green, and blue light is reflected from different parts of the ground. The satellite operator will then apply a variety of adjustments to correct colors, combine adjacent images, and account for parallax, forming what’s called a true-color composite image, which looks pretty much like what you would expect to get from a good camera floating high in the sky and pointed directly down.
Imaging satellites can also capture data outside of the visible-light spectrum. The near-infrared band is widely used in agriculture, for example, because these images help farmers gauge the health of their crops. This band can also be used to detect soil moisture and a variety of other ground features that would otherwise be hard to determine.
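In practice, the crop-health use of the near-infrared band usually comes down to an index such as NDVI (normalized difference vegetation index). The article doesn't name the index, so treat this as the standard example rather than its method; the reflectance values below are made up for illustration.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index. Healthy vegetation
    reflects strongly in the near-infrared and absorbs red light,
    so NDVI rises toward 1 over vigorous crops and sits near 0
    over bare soil or stressed plants."""
    return (nir - red) / (nir + red)

print(ndvi(0.50, 0.08))  # dense, healthy canopy: roughly 0.72
print(ndvi(0.22, 0.18))  # sparse or stressed vegetation: 0.1
```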
Longer-wavelength “thermal” IR does a good job of penetrating smoke and picking up heat sources, making it useful for wildfire monitoring. And synthetic-aperture radar satellites, which I discuss in greater detail below, are becoming more common because the images they produce aren’t affected by clouds and don’t require the sun for illumination.
You might wonder whether aerial imagery, say, from a drone, wouldn’t work at least as well as satellite data. Sometimes it can. But for many situations, using satellites is the better strategy. Satellites can capture imagery over areas that would be difficult to access otherwise because of their remoteness, for example. Or there could be other sorts of accessibility issues: The area of interest could be in a conflict zone, on private land, or in another place that planes or drones cannot overfly.
So with satellites, organizations can easily monitor the changes taking place at various far-flung locations. Satellite imagery allows pipeline operators, for instance, to quickly identify incursions into their right-of-way zones. The company can then take steps to prevent a disastrous incident, such as someone puncturing a gas pipeline while construction is taking place nearby.
This SkySat image shows the effect of a devastating landslide that took place on 30 December 2020. Debris from that landslide destroyed buildings and killed 10 people in the Norwegian village of Ask.
The ability to compare archived imagery with recently acquired data has helped a variety of industries. For example, insurance companies sometimes use satellite data to detect fraudulent claims (“Looks like your house had a damaged roof when you bought it…”). And financial-investment firms use satellite imagery to evaluate such things as retailers’ future profits based on parking-lot fullness or to predict crop prices before farmers report their yields for the season.
Satellite imagery provides a particularly useful way to find or monitor the location of undisclosed features or activities.
Sarah Parcak of the University of Alabama, for example, uses satellite imagery to locate archaeological sites of interest. 52Impact, a consulting company in the Netherlands, identified undisclosed waste dump sites by training an algorithm to recognize their telltale spectral signature. Satellite imagery has also helped identify illegal fishing activities, fight human trafficking, monitor oil spills, get accurate reporting on COVID-19 deaths, and even investigate Uyghur internment camps in China—all situations where the primary actors couldn’t be trusted to accurately report what’s going on.
Despite these many successes, investigative reporters and nongovernmental organizations aren’t yet using satellite data regularly, perhaps because even the small cost of the imagery is a deterrent. Thankfully, some kinds of low-resolution satellite data can be had for free.
The first places to look for free satellite imagery are the Copernicus Open Access Hub and EarthExplorer. Both offer free access to a wide range of open data. The imagery is lower resolution than what you can purchase, but if the limited resolution meets your needs, why spend money?
If you require medium- or high-resolution data, you might be able to buy it directly from the relevant satellite operator. This field recently went through a period of mergers and acquisitions, leaving only a handful of providers, the big three in the West being Maxar and Planet in the United States and Airbus in Germany. There are also a few large Asian providers, such as SI Imaging Services in South Korea and Twenty First Century Aerospace Technology in Singapore. Most providers have a commercial branch, but they primarily target government buyers. And they often require large minimum purchases, which is unhelpful to companies looking to monitor hundreds of locations or fewer.
Expect the distance to the nearest ground station to shorten because both Amazon and Microsoft have announced intentions to build large networks of ground stations located all over the world.
Fortunately, approaching a satellite operator isn’t the only option. In the past five years, a cottage industry of consultants and local resellers with exclusive deals to service a certain market has sprung up. Aggregators and resellers spend years negotiating contracts with multiple providers so they can offer customers access to data sets at more attractive prices, sometimes for as little as a few dollars per image. Some companies providing geographic information systems—including Esri, L3Harris, and Safe Software—have also negotiated reselling agreements with satellite-image providers.
Traditional resellers are middlemen who will connect you with a salesperson to discuss your needs, obtain quotes from providers on your behalf, and negotiate pricing and priority schedules for image capture and sometimes also for the processing of the data. This is the case for Apollo Mapping, European Space Imaging, Geocento, LandInfo, Satellite Imaging Corp., and many more. The more innovative resellers will give you access to digital platforms where you can check whether an image you need is available from a certain archive and then order it. Examples include LandViewer from EOS and Image Hunter from Apollo Mapping.
More recently, a new crop of aggregators began offering customers the ability to programmatically access Earth-observation data sets. These companies work best for people looking to integrate such data into their own applications or workflows. These include the company I work for, SkyWatch, which provides such a service, called EarthCache. Other examples are UP42 from Airbus and Sentinel Hub from Sinergise.
While you will still need to talk with a sales rep to activate your account—most often to verify you will use the data in ways that fit the company’s terms of service and licensing agreements—once you’ve been granted access to their applications, you will be able to programmatically order archive data from one or multiple providers. SkyWatch is, however, the only aggregator allowing users to programmatically request future data to be collected (“tasking a satellite”).
While satellite imagery is fantastically abundant and easy to access today, two changes are afoot that will expand further what you can do with satellite data: faster revisits and greater use of synthetic-aperture radar (SAR).
The first of these developments is not surprising. As more Earth-observation satellites are put into orbit, more images will be taken, more often. So how frequently a given area is imaged by a satellite will increase. Right now, that’s typically two or three times a week. Expect the revisit rate soon to become several times a day. This won’t entirely address the challenge of clouds obscuring what you want to view, but it will help.
The second development is more subtle. Data from the two satellites of the European Space Agency’s Sentinel-1 SAR mission, available at no cost, has enabled companies to dabble in SAR over the last few years.
With SAR, the satellite beams radio waves down and measures the return signals bouncing off the surface. It does that continually, and clever processing is used to turn that data into images. The use of radio allows these satellites to see through clouds and to collect measurements day and night. Depending on the radar band that’s employed, SAR imagery can be used to judge material properties, moisture content, precise movements, and elevation.
As more companies get familiar with such data sets, there will no doubt be a growing demand for satellite SAR imagery, which has been widely used by the military since the 1970s. But it’s just now starting to appear in commercial products. You can expect those offerings to grow dramatically, though.
Indeed, a large portion of the money being invested in this industry is currently going to fund large SAR constellations, including those of Capella Space, Iceye, Synspective, XpressSAR, and others. The market is going to get crowded fast, which is great news for customers. It means they will be able to obtain high-resolution SAR images of the place they’re interested in, taken every hour (or less), day or night, cloudy or clear.
People will no doubt figure out wonderful new ways to employ this information, so the more folks who have access to it, the better. This is something my colleagues at SkyWatch and I deeply believe, and it’s why we’ve made it our mission to help democratize access to satellite imagery.
One day in the not-so-distant future, Earth-observation satellite data might become as ubiquitous as GPS, another satellite technology first used only by the military. Imagine, for example, being able to take out your phone and say something like, “Show me this morning’s soil-moisture map for Grover’s Corners High; I want to see whether the baseball fields are still soggy.”
This article appears in the March 2022 print issue as “A Boom with a View.”
Editor's note: The original version of this article incorrectly stated that Maxar's Worldview Legion constellation launched last year.
Match ID: 11 Score: 7.86 source: spectrum.ieee.org age: 126 days
qualifiers: 5.71 air pollution, 2.14 carbon
Government set to miss air pollution goals - report
Fri, 17 Jun 2022 01:14:18 GMT
People cannot easily find out how dirty the air is where they live, the National Audit Office says.
Match ID: 12 Score: 5.71 source: www.bbc.co.uk age: 8 days
qualifiers: 5.71 air pollution
Friendly fungi help forests fight climate change
Sat, 18 Jun 2022 23:47:24 GMT
Young science writer competition winner digs into an underground biological battle against global warming.
Match ID: 13 Score: 4.29 source: www.bbc.co.uk age: 7 days
qualifiers: 4.29 climate change
4,000 Robots Roam the Oceans, Climate in Their Crosshairs
Fri, 17 Jun 2022 13:00:01 +0000
In the puzzle of climate change, Earth’s oceans are an immense and crucial piece. The oceans act as an enormous reservoir of both heat and carbon dioxide, the most abundant of the greenhouse gases emitted by human activity. But
gathering accurate and sufficient data about the oceans to feed climate and weather models has been a huge technical challenge.
Over the years, though, a basic picture of ocean heating patterns has emerged. The sun’s infrared, visible-light, and ultraviolet radiation warms the oceans, with the heat absorbed particularly in Earth’s lower latitudes and in the eastern areas of the vast ocean basins. Thanks to wind-driven currents and large-scale patterns of circulation, the
heat is generally driven westward and toward the poles, being lost as it escapes to the atmosphere and space.
This heat loss comes mainly from a combination of evaporation and reradiation into space. This oceanic heat movement helps make Earth habitable by smoothing out local and seasonal temperature extremes. But the
transport of heat in the oceans and its eventual loss upward are affected by many factors, such as the ability of the currents and wind to mix and churn, driving heat down into the ocean. The upshot is that no model of climate change can be accurate unless it accounts for these complicating processes in a detailed way. And that’s a fiendish challenge, not least because Earth’s five great oceans occupy 140 million square miles, or 71 percent of the planet’s surface.
“We can see the clear impact of the greenhouse-gas effect in the ocean. When we measure from the surface all the way down, and we measure globally, it’s very clear.”
Providing such detail is the purpose of the
Argo program, run by an international consortium involving 30 nations. The group operates a global fleet of some 4,000 undersea robotic craft scattered throughout the world’s oceans. The vessels are called “floats,” though they spend nearly all of their time underwater, diving thousands of meters while making measurements of temperature and salinity. Drifting with ocean currents, the floats surface every 10 days or so to transmit their information to data centers in Brest, France, and Monterey, Calif. The data is then made available to researchers and weather forecasters all over the world.
The Argo system, which produces more than 100,000 salinity and temperature profiles per year, is a huge improvement over traditional methods, which depended on measurements made from ships or with buoys. The remarkable technology of these floats, and the systems created to operate them as a network, were recognized this past May with the
IEEE Corporate Innovation Award, at the 2022 Vision, Innovation, and Challenges Summit. Now, as Argo unveils an ambitious proposal to increase the number of floats to 4,700 and increase their capabilities,
IEEE Spectrum spoke with Susan Wijffels, senior scientist at the Woods Hole Oceanographic Institution on Cape Cod, Mass., and cochair of the Argo steering committee.
Why do we need a vast network like Argo to help us understand how Earth’s climate is changing?
Susan Wijffels: Well, the reason is that the ocean is a key player in Earth’s climate system. So, we know that, for instance, our average climate is really, really dependent on the ocean. But actually, how the climate varies and changes, beyond about a two-to-three-week time scale, is highly controlled by the ocean. And so, in a way, you can think that the future of climate—the future of Earth—is going to be determined partly by what we do, but also by how the ocean responds.
Aren’t satellites already making these kinds of measurements?
Wijffels: The satellite observing system, a wonderful constellation of satellites run by many nations, is very important. But they only measure the very, very top of the ocean. They penetrate a couple of meters at the most. Most are only really seeing what’s happening in the upper few millimeters of the ocean. And yet, the ocean itself is very deep, 5, 6 kilometers deep, around the world. And it’s what’s happening in the deep ocean that is critical, because things are changing in the ocean. It’s getting warmer, but not uniformly warm. There’s a rich structure to that warming, and that all matters for what’s going to happen in the future.
How was this sort of oceanographic data collected historically, before Argo?
Wijffels: Before Argo, the main way we had of getting subsurface information, particularly things like salinity, was to measure it from ships, which you can imagine is quite expensive. These are research vessels that are very expensive to operate, and you need to have teams of scientists aboard. They’re running very sensitive instrumentation. And they would simply prepare a package and lower it down the side into the ocean. And to do a 2,000-meter profile, it would maybe take a couple of hours. To go to the seafloor, it can take 6 hours or so.
The ships really are wonderful. We need them to measure all kinds of things. But to get the global coverage we’re talking about, it’s just prohibitive. In fact, there are not enough research vessels in the world to do this. And so, that’s why we needed to try and exploit robotics to solve this problem.
Pick a typical Argo float and tell us something about it, a day in the life of an Argo float or a week in the life. How deep is this float typically, and how often does it transmit data?
Wijffels: They spend 90 percent of their time at 1,000 meters below the surface of the ocean—an environment where it’s dark and it’s cold. A float will drift there for about nine and a half days. Then it will make itself a little bit smaller in volume, which increases its density relative to the seawater around it. That allows it to then sink down to 2,000 meters. Once there, it will halt its downward trajectory and switch on its sensor package. Once it has collected the intended complement of data, it expands, lowering its density. As the then lighter-than-water automaton floats back up toward the surface, it takes a series of measurements in a single column. And then, once it reaches the sea surface, it transmits that profile back to us via a satellite system. We also get a location for that profile through the global positioning system satellite network. Most Argo floats at sea right now are measuring temperature and salinity at a pretty high accuracy level.
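The buoyancy trick Wijffels describes, shrinking to sink and expanding to rise, comes down to simple density arithmetic. A minimal sketch, with all numbers assumed for illustration rather than taken from real float specifications:

```python
# Illustrative sketch of how an Argo float controls depth by changing
# its volume (and hence its density) relative to seawater.
# All numbers here are rough assumptions, not real float specs.

SEAWATER_DENSITY = 1027.0  # kg/m^3, typical open-ocean value

def float_density(mass_kg, volume_m3):
    """Density of the float: mass is fixed, volume is adjustable."""
    return mass_kg / volume_m3

mass = 25.0                                # kg, assumed float mass
neutral_volume = mass / SEAWATER_DENSITY   # volume at neutral buoyancy

# Shrinking the float raises its density, so it sinks...
sink_volume = neutral_volume * 0.99
# ...and expanding it lowers its density, so it rises.
rise_volume = neutral_volume * 1.01

assert float_density(mass, sink_volume) > SEAWATER_DENSITY  # denser: sinks
assert float_density(mass, rise_volume) < SEAWATER_DENSITY  # lighter: rises
```

A volume change of only about 1 percent is enough to flip the float between sinking and rising, which is why a small buoyancy engine can drive the whole dive cycle.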
How big is a typical data transmission, and where does it go?
Wijffels: The data is not very big at all. It’s highly compressed. It’s only about 20 or 30 kilobytes, and it goes through the Iridium network now for most of the float array. That data then comes ashore from the satellite system to your national data centers. It gets encoded and checked, and then it gets sent out immediately. It gets logged onto the Internet at a global data assembly center, but it also gets sent immediately to all the operational forecasting centers in the world. So the data is shared freely, within 24 hours, with everyone that wants to get hold of it.
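Combined with the roughly 10-day cycle, those small transmissions add up to a surprisingly modest data stream for a global observing system. A back-of-the-envelope estimate using the figures quoted above (these are rough estimates, not official Argo statistics):

```python
# Back-of-the-envelope annual data volume for the Argo array, using
# the interview's figures: ~4,000 floats, one ~25-kilobyte profile
# (midpoint of the 20-30 kB range) every ~10 days.

floats = 4000
cycle_days = 10
kb_per_profile = 25

profiles_per_year = floats * 365 / cycle_days        # ~146,000 profiles
total_mb_per_year = profiles_per_year * kb_per_profile / 1024

print(f"{profiles_per_year:,.0f} profiles/year")
print(f"{total_mb_per_year:,.0f} MB/year")           # only a few gigabytes
```

The estimate of roughly 146,000 profiles a year is consistent with the "more than 100,000 profiles per year" figure above, allowing for floats that fail or miss cycles.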
This visualization shows some 3,800 of Argo’s floats scattered across the globe.Argo Program
You have 4,000 of these floats now spread throughout the world. Is that enough to do what your scientists need to do?
Wijffels: Currently, the 4,000 we have is a legacy of our first design of Argo, which was conceived in 1998. And at that time, our floats couldn’t operate in the sea-ice zones and couldn’t operate very well in enclosed seas. And so, originally, we designed the global array to be 3,000 floats; that was to kind of track what I think of as the slow background changes. These are changes happening across 1,000 kilometers in around three months—sort of the slow manifold of what’s happening to subsurface ocean temperature and salinity.
So, that’s what that design is for. But now, we have successfully piloted floats in the polar oceans and the seasonal sea-ice zones. So we know we can operate them there. And we also know now that there are some special areas like the equatorial oceans where we might need higher densities [of floats]. And so, we have a new design. And for that new design, we need to get about 4,700 operating floats into the water.
But we’re just starting now to really go to governments and ask them to provide the funds to expand the fleet. And part of the new design calls for floats to go deeper. Most of our floats in operation right now go only as deep as about 2,000 meters. But we now can build floats that can withstand the oceans’ rigors down to depths of 6,000 meters. And so, we want to build and sustain an array of about 1,200 deep-profiling floats, with an additional 1,000 of the newly built units capable of tracking the oceans’ geochemistry. But this is new. These are big, new missions for the Argo infrastructure that we’re just starting to try and build up. We’ve done a lot of the piloting work; we’ve done a lot of the preparation. But now, we need to find sustained funding to implement that.
A new generation of deep-diving Argo floats can reach a depth of 6,000 meters. A spherical glass housing protects the electronics inside from the enormous pressure at that depth.MRV Systems/Argo Program
What is the cost of a typical float?
Wijffels: A typical core float, which just measures temperature and salinity and operates to 2,000 meters, costs, depending on the country, between US $20,000 and $30,000. But they each last five to seven years. And so, the cost per profile that we get, which is what really matters for us, is very low—particularly compared with other methods [of acquiring the same data].
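The per-profile economics follow directly from those numbers. A rough illustration, assuming one profile per 10-day cycle and taking midpoints of the quoted ranges:

```python
# Rough cost-per-profile estimate from the figures in the interview:
# $20k-$30k per float, a 5-7 year lifetime, one profile every ~10 days.
# Midpoints of the quoted ranges are assumed.

float_cost_usd = 25_000
lifetime_years = 6
cycle_days = 10

profiles = lifetime_years * 365 / cycle_days   # ~219 profiles per float
cost_per_profile = float_cost_usd / profiles

print(f"~{profiles:.0f} profiles at ~${cost_per_profile:.0f} each")
```

At roughly a hundred dollars and change per temperature-and-salinity profile, the contrast with a research-vessel cast that ties up a ship and crew for hours is stark.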
What kind of insights can we get from tracking heat and salinity and how they’re changing across Earth’s oceans?
Wijffels: There are so many things I could talk about, so many amazing discoveries that have come from the Argo data stream. There’s more than a paper a day that comes out using Argo. And that’s probably a conservative view. But I mean, one of the most important things we need to measure is how the ocean is warming. So, as the Earth system warms, most of that extra heat is actually being trapped in the ocean. Now, it’s a good thing that that heat is taken up and sequestered by the ocean, because it makes the rate of surface temperature change slower. But as it takes up that heat, the ocean expands. So, that’s actually driving sea-level rise. The ocean is pumping heat into the polar regions, which is causing both sea-ice and ice-sheet melt. And we know it’s starting to change regional weather patterns as well. With all that in mind, tracking where that heat is, and how the ocean circulation is moving it around, is really, really important for understanding both what's happening now to our climate system and what's going to happen to it in the future.
What has Argo’s data told us about how ocean temperatures have changed over the past 20 years? Are there certain oceans getting warmer? Are there certain parts of oceans getting warmer and others getting colder?
Wijffels: The signal in the deep ocean is very small. It’s a fraction, a hundredth of a degree, really. But we have very high precision instruments on Argo. The warming signal came out very quickly in the Argo data sets when averaged across the global ocean. If you measure in a specific place, say a time series at a site, there's a lot of noise there because the ocean circulation is turbulent, and it can move heat around from place to place. So, any given year, the ocean can be warm, and then it can be cool…that’s just a kind of a lateral shifting of the signal.
“We have discovered through Argo new current systems that we knew nothing about....There’s just been a revolution in our ability to make discoveries and understand how the ocean works.”
But when you measure globally and monitor the global average over time, the warming signal becomes very, very apparent. And so, as we’ve seen from past data—and Argo reinforces this—the oceans are warming faster at the surface than at their depths. And that’s because the ocean takes a while to draw the heat down. We see the Southern Hemisphere warming faster than the Northern Hemisphere. And there’s a lot of work that’s going on around that. The discrepancy is partly due to things like aerosol pollution in the Northern Hemisphere’s atmosphere, which actually has a cooling effect on our climate.
But some of it has to do with how the winds are changing. Which brings me to another really amazing thing about Argo: We’ve had a lot of discussion in our community about hiatuses or slowdowns of global warming. And that’s because of the surface temperature, which is the metric that a lot of people use. The oceans have a big effect on the global average surface temperature estimates because the oceans comprise the majority of Earth’s surface area. And we see that the surface temperature can peak when there’s a big El Niño–Southern Oscillation event. That’s because, in the Pacific, a whole bunch of heat from the subsurface [about 200 or 300 meters below the surface] suddenly becomes exposed to the surface. [
Editor’s note: The El Niño–Southern Oscillation is a recurring, large-scale variation in sea-surface temperatures and wind patterns over the tropical eastern Pacific Ocean.]
What we see is this kind of chaotic natural phenomena, such as the El Niño–Southern Oscillation. It just transfers heat vertically in the ocean. And if you measure vertically through the El Niño or the tropical Pacific, that all cancels out. And so, the actual change in the amount of heat in the ocean doesn’t see those hiatuses that appear in surface measurements. It’s just a staircase. And we can see the clear impact of the greenhouse-gas effect in the ocean. When we measure from the surface all the way down, and we measure globally, it’s very clear.
Argo was obviously designed and established for research into climate change, but so many large scientific instruments turn out to be useful for scientific questions other than the ones they were designed for. Is that the case with Argo?
Wijffels: Absolutely. Climate change is just one of the questions Argo was designed to address. It’s really being used now to study nearly all aspects of the ocean, from ocean mixing to just mapping out what the deep circulation, the currents in the deep ocean, look like. We now have very detailed maps of the surface of the ocean from the satellites we talked about, but understanding what the currents are in the deep ocean is actually very, very difficult. This is particularly true of the slow currents, not the turbulence, which is everywhere in the ocean like it is in the atmosphere. But now, we can do that using Argo because Argo gives us a map of the sort of pressure field. And from the pressure field, we can infer the currents. We have discovered through Argo new current systems that we knew nothing about. People are using this knowledge to study the ocean eddy field and how it moves heat around the ocean.
People have also made lots of discoveries about salinity; how salinity affects ocean currents and how it is reflecting what’s happening in our atmosphere. There’s just been a revolution in our ability to make discoveries and understand how the ocean works.
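Inferring currents from a mapped pressure field, as Wijffels describes, rests on geostrophic balance: away from the equator, horizontal pressure gradients are balanced by the Coriolis force. A minimal sketch of that relation, with all input numbers assumed for illustration:

```python
# Sketch of the geostrophic relation used to infer slow deep currents
# from the pressure field that Argo maps: a horizontal pressure
# gradient dp/dy is balanced by the Coriolis force, giving
#   u = -(1 / (rho * f)) * dp/dy.
# All numbers are illustrative, not measured values.

import math

RHO = 1025.0           # kg/m^3, approximate seawater density
OMEGA = 7.2921e-5      # Earth's rotation rate, rad/s
lat = 35.0             # degrees north (assumed)

f = 2 * OMEGA * math.sin(math.radians(lat))  # Coriolis parameter, 1/s

dp = 100.0             # Pa, assumed pressure difference
dy = 100_000.0         # m, measured over ~100 km

u = -(1 / (RHO * f)) * (dp / dy)  # zonal geostrophic velocity, m/s
print(f"f = {f:.2e} 1/s, u = {u * 100:.2f} cm/s")
```

A pressure difference of just 100 pascals over 100 kilometers yields a current of about a centimeter per second, which is why mapping these slow deep flows takes the precision and coverage that Argo provides.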
During a typical 10-day cycle, an Argo float spends most of its time at a depth of 2,000 meters, making readings before ascending to the surface and then transmitting its data via a satellite network.Argo Program
As you pointed out earlier, the signal from the deep ocean is very subtle, and it’s a very small signal. So, naturally, that would prompt an engineer to ask, “How accurate are these measurements, and how do you know that they’re that accurate?”
Wijffels: So, at the inception of the program, we put a lot of resources into a really good data-management and quality-assurance system. That’s the Argo Data Management system, which broke new ground for oceanography. And so, part of that innovation is that we have, in every nation that deploys floats, expert teams that look at the data. When the data is about a year old, they look at that data, and they assess it in the context of nearby ship data, which is usually the gold standard in terms of accuracy. And so, when a float is deployed, we know the sensors are routinely calibrated. And so, by comparing a freshly calibrated float’s profile with one from a float that might be six or seven years old, we can check for sensor drift. What’s more, some of the satellites that Argo is designed to work with also give us the ability to check whether the float sensors are working properly.
And through the history of Argo, we have had issues. But we’ve tackled them head on. We have had issues that originated in the factories producing the sensors. Sometimes, we’ve halted deployments for years while we waited for a particular problem to be fixed. Furthermore, we try and be as vigilant as we can and use whatever information we have around every float record to ensure that it makes sense. We want to make sure that there’s not a big bias, and that our measurements are accurate.
You mentioned earlier there’s a new generation of floats capable of diving to an astounding 6,000 meters. I imagine that as new technology becomes available, your scientists and engineers are looking at this and incorporating it. Tell us how advances in technology are improving your program.
Wijffels: [There are] three big, new things that we want to do with Argo and that we’ve proven we can do now through regional pilots. The first one, as you mentioned, is to go deep. And so that meant reengineering the float itself so that it could withstand and operate under really high pressure. And there are two strategies to that. One is to stay with an aluminum hull but make it thicker. Floats with that design can go to about 4,000 meters. The other strategy was to move to a glass housing. So the float goes from a metal cylinder to a glass sphere. And glass spheres have been used in ocean science for a long time because they’re extremely pressure resistant. So, glass floats can go to those really deep depths, right to the seafloor of most of the global ocean.
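The pressures such a housing must survive follow from the hydrostatic equation, P = ρgh. A rough calculation for the three operating depths mentioned, treating seawater density as constant (in reality it increases slightly with depth):

```python
# Hydrostatic pressure at Argo float operating depths: P = rho * g * h.
# Constant seawater density is assumed, a small simplification.

RHO = 1025.0   # kg/m^3, approximate seawater density
G = 9.81       # m/s^2

def pressure_mpa(depth_m):
    """Hydrostatic pressure at the given depth, in megapascals."""
    return RHO * G * depth_m / 1e6

for depth in (2000, 4000, 6000):
    mpa = pressure_mpa(depth)
    print(f"{depth} m: {mpa:.0f} MPa (~{mpa * 9.87:.0f} atm)")
```

At 6,000 meters the housing sees roughly 60 megapascals, about 600 times atmospheric pressure, which is why the design moves from thickened aluminum cylinders to glass spheres.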
The game changer is a set of sensors that are sensitive and accurate enough to measure the tiny climate-change signals that we’re looking for in the deep ocean. And so that requires an extra level of care in building those sensors and a higher level of calibration. And so we’re working with sensor manufacturers to develop and prove calibration methods with tighter tolerances and ways of building these sensors with greater reliability. And as we prove that out, we go to sea on research vessels, we take the same sensors that were in our shipboard systems, and compare them with the ones that we’re deploying on the profiling floats. So, we have to go through a whole development cycle to prove that these work before we certify them for global implementation.
You mentioned batteries. Are batteries what is ultimately the limit on lifetime? I mean, I imagine you can’t recharge a battery that’s 2,000 meters down.
Wijffels: You’re absolutely right. Batteries are one of the key limitations for floats right now as regards their lifetime, and what they’re capable of. If there were a leap in battery technology, we could do a lot more with the floats. We could maybe collect data profiles faster. We could add many more extra sensors.
So, battery power and energy management is a big, important aspect of what we do. And in fact, the way that we task the floats, it’s been a problem particularly with lithium batteries, because the floats spend about 90 percent of their time sitting in the cold and not doing very much. During their drift phase, we sometimes turn them on to take some measurements. But still, they don’t do very much. They don’t use their buoyancy engines. This is the engine that changes the volume of the float.
And what we’ve learned is that these batteries can
passivate. And so, we might think we’ve loaded a certain number of watts onto the float, but we never achieved the rated power level because of this passivation problem. But we’ve found different kinds of batteries that really sidestep that passivation problem. So, yes, batteries have been one thing that we’ve had to figure out so that energy is not a limiting factor in float operation.
Match ID: 14 Score: 4.29 source: spectrum.ieee.org age: 8 days
qualifiers: 2.14 climate change, 2.14 carbon
Congo peat: The 'lungs of humanity' which are under threat
Thu, 16 Jun 2022 23:53:12 GMT
Carbon-rich peatlands are under threat from development, posing a risk for future climate change.
Match ID: 15 Score: 4.29 source: www.bbc.co.uk age: 9 days
qualifiers: 2.14 climate change, 2.14 carbon
Climate change: Bonn talks end in acrimony over compensation
Thu, 16 Jun 2022 17:58:54 GMT
Key talks end with rich and poor countries at loggerheads on the divisive issue of loss and damage.
Match ID: 16 Score: 2.14 source: www.bbc.co.uk age: 9 days
qualifiers: 2.14 climate change
NASA, FEMA Release Comprehensive Climate Action Guide
Wed, 08 Jun 2022 12:37 EDT
NASA and the Federal Emergency Management Agency (FEMA) have released a guide which provides resources for adapting to and mitigating impacts of climate change.
Match ID: 17 Score: 2.14 source: www.nasa.gov age: 17 days
qualifiers: 2.14 climate change
NASA Invites Media to Learn About Mission Studying Thunderstorms
Fri, 03 Jun 2022 11:25 EDT
NASA will host a media teleconference at 10 a.m. CDT Tuesday, June 7, to discuss research about intense summer thunderstorms over the central United States and their effects on Earth’s atmosphere and climate change.
Match ID: 18 Score: 2.14 source: www.nasa.gov age: 22 days
qualifiers: 2.14 climate change
Why is climate 'doomism' going viral – and who's fighting it?
Sun, 22 May 2022 23:16:59 GMT
Climate "doomers" believe it’s far too late to do anything about climate change - but they're wrong.
Match ID: 19 Score: 2.14 source: www.bbc.co.uk age: 34 days
qualifiers: 2.14 climate change
Climate change: Airlines miss all but one target - report
Mon, 09 May 2022 23:39:19 GMT
Campaigners say the aviation industry cannot be relied on to tackle their role in climate change.
Match ID: 20 Score: 2.14 source: www.bbc.co.uk age: 47 days
qualifiers: 2.14 climate change
Climate change: 'Fifty-fifty chance' of breaching 1.5C warming limit
Mon, 09 May 2022 23:38:30 GMT
Scientists say there's now a strong chance that the world will warm by more than 1.5C by 2026.
Match ID: 21 Score: 2.14 source: www.bbc.co.uk age: 47 days
qualifiers: 2.14 climate change
Climate change: Don't let doom win, project tells worriers
Thu, 28 Apr 2022 01:39:31 GMT
The BBC gets an exclusive look at a new project to help students deal with rising climate anxiety.
Match ID: 22 Score: 2.14 source: www.bbc.co.uk age: 58 days
qualifiers: 2.14 climate change
U.N. Kills Any Plans to Use Mercury as a Rocket Propellant
Tue, 19 Apr 2022 18:00:01 +0000
A recent United Nations provision has banned the use of mercury in spacecraft propellant. Although no private company has actually used mercury propellant in a launched spacecraft, the possibility was alarming enough—and the dangers extreme enough—that the ban was enacted just a few years after one U.S.-based startup began toying with the idea. Had the company gone through with its intention to sell mercury propellant thrusters to some of the companies building massive satellite constellations over the coming decade, it would have resulted in Earth’s upper atmosphere being laced with mercury.
Mercury is a neurotoxin. It’s also bio-accumulative, which means it’s absorbed by the body at a faster rate than the body can remove it. The most common way to get mercury poisoning is through eating contaminated seafood. “It’s pretty nasty,” says Michael Bender, the international coordinator of the Zero Mercury Working Group (ZMWG). “Which is why this is one of the very few instances where the governments of the world came together pretty much unanimously and ratified a treaty.”
Bender is referring to the 2013 Minamata Convention on Mercury, a U.N. treaty named for a city in Japan whose residents suffered from mercury poisoning from a nearby chemical factory for decades. Because mercury pollutants easily find their way into the oceans and the atmosphere, it’s virtually impossible for one country to prevent mercury poisoning within its borders. “Mercury—it’s an intercontinental pollutant,” Bender says. “So it required a global treaty.”
Today, the only remaining permitted uses for mercury are in fluorescent lighting and dental amalgams, and even those are being phased out. Mercury is otherwise found as a by-product of other processes, such as the burning of coal. But then a company hit on the idea to use it as a spacecraft propellant.
In 2018, an employee at Apollo Fusion approached the Public Employees for Environmental Responsibility (PEER), a nonprofit that investigates environmental misconduct in the United States. The employee—who has remained anonymous—alleged that the Mountain View, Calif.–based space startup was planning to build and sell thrusters that used mercury propellant to multiple companies building low Earth orbit (LEO) satellite constellations.
Four industry insiders ultimately confirmed that Apollo Fusion was building thrusters that utilized mercury propellant. Apollo Fusion, which was acquired by rocket manufacturing startup Astra in June 2021, insisted that the composition of its propellant mixture should be considered confidential information. The company withdrew its plans for a mercury propellant in April 2021. Astra declined to respond to a request for comment for this story.
Apollo Fusion wasn’t the first to consider using mercury as a propellant. NASA originally tested it in the 1960s and 1970s with two Space Electric Rocket Tests (SERT), one of which was sent into orbit in 1970. Although the tests demonstrated mercury’s effectiveness as a propellant, the same concerns over the element’s toxicity that have seen it banned in many other industries halted its use by the space agency as well.
“I think it just sort of fell off a lot of folks’ radars,” says Kevin Bell, the staff counsel for PEER. “And then somebody just resurrected the research on it and said, ‘Hey, other than the environmental impact, this was a pretty good idea.’ It would give you a competitive advantage in what I imagine is a pretty tight, competitive market.”
That’s presumably why Apollo Fusion was keen on using it in their thrusters. Apollo Fusion as a startup emerged more or less simultaneously with the rise of massive LEO constellations that use hundreds or thousands of satellites in orbits below 2,000 kilometers to provide continual low-latency coverage. Finding a slightly cheaper, more efficient propellant for one large geostationary satellite doesn’t move the needle much. But doing the same for thousands of satellites that need to be replaced every several years? That’s a much more noticeable discount.
Were it not for mercury’s extreme toxicity, it would actually make an extremely attractive propellant. Apollo Fusion wanted to use a type of ion thruster called a Hall-effect thruster. Ion thrusters strip electrons from the atoms that make up a liquid or gaseous propellant, and then an electric field pushes the resultant ions away from the spacecraft, generating a modest thrust in the opposite direction. The physics of rocket engines means that the performance of these engines increases with the mass of the ion that you can accelerate.
Mercury is heavier than either xenon or krypton, the most commonly used propellants, meaning more thrust per expelled ion. It’s also liquid at room temperature, making it efficient to store and use. And it’s cheap—there’s not a lot of competition with anyone looking to buy mercury.
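The advantage of a heavier ion can be made concrete with a simplified single-ion model: an ion of mass m accelerated through a voltage V leaves with momentum p = √(2qVm), so thrust per expelled ion grows with the square root of the ion's mass. A sketch comparing the three propellants (the 300-volt accelerating potential is an assumed, typical order of magnitude, and real thruster physics is considerably messier):

```python
# Simplified single-ion model of an electrostatic thruster: an ion of
# mass m accelerated through voltage V exits with speed
#   v = sqrt(2*q*V / m)  and momentum  p = m*v = sqrt(2*q*V*m),
# so momentum (thrust) per ion scales with sqrt(m).
# V = 300 V is an assumed, typical order of magnitude.

import math

Q = 1.602e-19     # elementary charge, C
AMU = 1.661e-27   # atomic mass unit, kg
V = 300.0         # accelerating voltage, V (assumption)

masses_u = {"krypton": 83.80, "xenon": 131.29, "mercury": 200.59}

for name, m_u in masses_u.items():
    m = m_u * AMU
    p = math.sqrt(2 * Q * V * m)   # momentum per ion, kg*m/s
    v = p / m                      # exhaust velocity, m/s
    print(f"{name:8s}: v = {v / 1000:5.1f} km/s, momentum/ion = {p:.2e}")
```

At a fixed voltage, each mercury ion carries about 24 percent more momentum than a xenon ion and 55 percent more than a krypton ion, which, combined with mercury's low price and room-temperature liquid state, explains its appeal despite the toxicity.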
Bender says that ZMWG, alongside PEER, caught wind of Apollo Fusion marketing its mercury-based thrusters to at least three companies deploying LEO constellations—OneWeb, Planet Labs, and SpaceX. Planet Labs, an Earth-imaging company, has at least 200 CubeSats in low Earth orbit. OneWeb and SpaceX, both wireless-communication providers, have many more. OneWeb plans to have nearly 650 satellites in orbit by the end of 2022. SpaceX already has nearly 1,500 active satellites aloft in its Starlink constellation, with an eye toward deploying as many as 30,000 satellites before its constellation is complete. Other constellations, like Amazon’s Kuiper constellation, are also planning to deploy thousands of satellites.
In 2019, a group of researchers in Italy and the United States estimated how much of the mercury used in spacecraft propellant might find its way back into Earth’s atmosphere. They figured that a hypothetical LEO constellation of 2,000 satellites, each carrying 100 kilograms of propellant, would emit 20 tonnes of mercury every year over the course of a 10-year life span. Three quarters of that mercury, the researchers suggested, would eventually wind up in the oceans.
That amounts to 1 percent of global mercury emissions from a constellation only a fraction of the size of the one planned by SpaceX alone. And if multiple constellations adopted the technology, they would represent a significant percentage of global mercury emissions—especially, the researchers warned, as other uses of mercury are phased out as planned in the years ahead.
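The arithmetic behind the 2019 study's figures is straightforward to reproduce from the numbers quoted above:

```python
# Reproducing the back-of-the-envelope estimate from the 2019 study:
# 2,000 satellites x 100 kg of mercury propellant each, expended over
# a 10-year constellation lifespan, with ~3/4 ending up in the oceans.

satellites = 2000
propellant_kg = 100
lifespan_years = 10
ocean_fraction = 0.75

total_tonnes = satellites * propellant_kg / 1000   # 200 tonnes total
per_year = total_tonnes / lifespan_years           # 20 tonnes/year
to_oceans = per_year * ocean_fraction              # 15 tonnes/year

print(f"{per_year:.0f} t/yr emitted, ~{to_oceans:.0f} t/yr to the oceans")
```

Scaling this linearly, a 30,000-satellite constellation with the same per-satellite load would emit roughly 15 times as much, which is why the researchers flagged the risk even before any such thruster flew.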
Fortunately, it’s unlikely that any mercury propellant thrusters will even get off the ground. Prior to the fourth meeting of the Minamata Convention, Canada, the European Union, and Norway highlighted the dangers of mercury propellant, alongside ZMWG. The provision to ban mercury usage in satellites was passed on 26 March 2022.
The question now is enforcement. “Obviously, there aren’t any U.N. peacekeepers going into space to shoot down” mercury-based satellites, says Bell. But the 137 countries, including the United States, who are party to the convention have pledged to adhere to its provisions—including the propellant ban.
The United States is notable in that list because as Bender explains, it did not ratify the Minamata Convention via the U.S. Senate but instead deposited with the U.N. an instrument of acceptance. In a 7 November 2013 statement (about one month after the original Minamata Convention was adopted), the U.S. State Department said the country would be able to fulfill its obligations “under existing legislative and regulatory authority.”
Bender says the difference is “weedy” but that this appears to mean that the U.S. government has agreed to adhere to the Minamata Convention’s provisions because it already has similar laws on the books. Except there is still no existing U.S. law or regulation banning mercury propellant. For Bender, that creates some uncertainty around compliance when the provision goes into force in 2025.
Still, with a U.S. company being the first startup to toy with mercury propellant, it might be ideal to have a stronger U.S. ratification of the Minamata Convention before another company hits on the same idea. “There will always be market incentives to cut corners and do something more dangerously,” Bell says.
Update 19 April 2022: In an email, a spokesperson for Astra stated that the company's propulsion system, the Astra Spacecraft Engine, does not use mercury. The spokesperson also stated that Astra has no plans to use mercury propellant and that the company does not have anything in orbit that uses mercury.
Updated 20 April 2022 to clarify that Apollo Fusion was building thrusters that used mercury, not that they had actually used them.
Match ID: 23 Score: 2.14 source: spectrum.ieee.org age: 67 days
qualifiers: 2.14 toxic
Ahrefs vs SEMrush: Which SEO Tool Should You Use?
Tue, 01 Mar 2022 12:16:00 +0000
SEMrush and Ahrefs are among the most popular tools in the SEO industry. Both companies have been in business for years and have thousands of customers per month.
If you're a professional SEO or trying to do digital marketing on your own, at some point you'll likely consider using a tool to help with your efforts. Ahrefs and SEMrush are two names that will likely appear on your shortlist.
In this guide, I'm going to help you learn more about these SEO tools and how to choose the one that's best for your purposes.
What is SEMrush?
SEMrush is a popular SEO tool with a wide range of features; it's the leading competitor research service for online marketers. Its Keyword Magic tool offers over 20 billion keywords for Google, constantly updated, in what the company bills as the largest keyword database available.
The program began in 2007 as SeoQuake, a small Firefox extension.
- Most accurate keyword data: Accurate keyword search volume data is crucial for SEO and PPC campaigns by allowing you to identify what keywords are most likely to bring in big sales from ad clicks. SEMrush constantly updates its databases and provides the most accurate data.
- Largest keyword database: SEMrush's Keyword Magic Tool now features 20 billion keywords, giving marketers and SEO professionals the largest database of keywords.
- All SEMrush users receive daily ranking data, mobile volume information, and the option to add keywords by default, with no additional payment or add-ons needed
- Most accurate position tracking tool: This tool provides all subscribers with basic tracking capabilities, making it suitable for SEO professionals. Plus, the Position Tracking tool provides local-level data to everyone who uses the tool.
- SEO Data Management: SEMrush makes managing your online data easy by allowing you to create visually appealing custom PDF reports, including Branded and White Label reports, report scheduling, and integration with GA, GMB, and GSC.
- Toxic link monitoring and penalty recovery: With SEMrush, you can make a detailed analysis of toxic backlinks, toxic scores, toxic markers, and outreach to those sites.
- Content Optimization and Creation Tools: SEMrush offers content optimization and creation tools that let you create SEO-friendly content. Features include the SEO Writing Assistant, On Page SEO Checker, SEO Content Template, Content Audit, Post Tracking, and Brand Monitoring.
What is Ahrefs?
Ahrefs is a leading SEO platform that offers a set of tools to grow your search traffic, research your competitors, and monitor your niche. Founded in 2010, it has become a popular choice among SEO tools. Ahrefs has a keyword index of over 10.3 billion keywords and offers accurate, extensive backlink data updated every 15 to 30 minutes, drawn from what the company calls the world's most extensive backlink index.
- Backlink alerts data and new keywords: Get an alert when your site is linked to or discussed in blogs, forums, comments, or when new keywords are added to a blog posting about you.
- Intuitive interface: The dashboard's clean design helps you see the overall health of your website and its search engine rankings at a glance.
- Site Explorer: The Site Explorer will give you an in-depth look at your site's search traffic.
- Domain Comparison
- Reports with charts and graphs
- A question explorer that provides well-crafted topic suggestions
Direct Comparisons: Ahrefs vs SEMrush
Now that you know a little more about each tool, let's take a look at how they compare. I'll analyze each tool to see how they differ in interfaces, keyword research resources, rank tracking, and competitor analysis.
Ahrefs and SEMrush both offer comprehensive information and quick metrics regarding your website's SEO performance. However, Ahrefs takes a bit more of a hands-on approach to getting your account fully set up, whereas SEMrush's simpler dashboard can give you access to the data you need quickly.
In this section, we provide a brief overview of the elements found on each dashboard and highlight the ease with which you can complete tasks.
The Ahrefs dashboard is less cluttered than that of SEMrush, and its primary menu is at the very top of the page, with a search bar designed only for entering URLs.
Additional features of the Ahrefs platform include:
- You can see analytics from the dashboard, including search engine rankings, domain ratings, referring domains, and backlinks
- Jumping from one tool to another is easy. You can use the Keyword Explorer to find a keyword to target and then directly track your ranking with one click.
- The website offers a tooltip helper tool that allows you to hover your mouse over something that isn't clear and get an in-depth explanation.
When you log into the SEMrush Tool, you will find four main modules. These include information about your domains, organic keyword analysis, ad keyword, and site traffic.
You'll also find some other options, like:
- A search bar allows you to enter a domain, keyword, or anything else you wish to explore.
- A menu on the left side of the page provides quick links to relevant information, including marketing insights, projects, keyword analytics, and more.
- The customer support resources located directly within the dashboard can be used to communicate with the support team or to learn about other resources such as webinars and blogs.
- Detailed descriptions of every resource offered. This detail is beneficial for new marketers who are just starting out.
Both Ahrefs and SEMrush have user-friendly dashboards, but Ahrefs is less cluttered and easier to navigate. On the other hand, SEMrush offers dozens of extra tools, including access to customer support resources.
When deciding on which dashboard to use, consider what you value in the user interface, and test out both.
If you're looking to track your website's search engine ranking, rank tracking features can help. You can also use them to monitor your competitors.
Let's take a look at Ahrefs vs. SEMrush to see which tool does a better job.
The Ahrefs Rank Tracker is simpler to use. Just type in the domain name and keywords you want to analyze, and it spits out a report showing you the search engine results page (SERP) ranking for each keyword you enter.
Rank Tracker looks at the ranking performance of keywords and compares them with the top rankings for those keywords. Ahrefs also offers:
You'll see metrics that help you understand your visibility, traffic, average position, and keyword difficulty.
It gives you an idea of whether a keyword would be profitable to target or not.
SEMRush offers a tool called Position Tracking. This tool is a project tool—you must set it up as a new project. Below are a few of the most popular features of the SEMrush Position Tracking tool:
- All subscribers receive regular data updates and mobile search rankings upon subscribing
- The platform provides opportunities to track several SERP features, including local tracking
- Intuitive reports let you track statistics for the pages on your website, as well as the keywords used on those pages
- The Cannibalization report identifies pages on your site that may be competing with each other
Ahrefs is a more user-friendly option. It takes seconds to enter a domain name and keywords. From there, you can quickly decide whether to proceed with that keyword or figure out how to rank better for other keywords.
SEMrush allows you to check your mobile rankings and ranking updates daily, which Ahrefs does not offer. SEMrush also offers social media rankings, a tool you won't find within the Ahrefs platform. Both are good; which one do you like? Let me know in the comments.
Keyword research is closely related to rank tracking, but it's used for deciding which keywords you plan on using for future content rather than those you use now.
When it comes to SEO, keyword research is the most important thing to consider when comparing the two platforms.
The Ahrefs Keyword Explorer provides you with thousands of keyword ideas and filters search results based on the chosen search engine.
Ahrefs supports several features, including:
- It can search multiple keywords in a single query and analyze them together. (SEMrush offers this too, in its Keyword Overview.)
- Ahrefs provides keyword data for a variety of search engines, including Google, YouTube, Amazon, Bing, Yahoo, and Yandex.
- When you click on a keyword, you can see its search volume and keyword difficulty, as well as related keywords you haven't targeted yet.
SEMrush's Keyword Magic Tool has over 20 billion keywords for Google. You can type in any keyword you want, and a list of suggested keywords will appear.
The Keyword Magic Tool also lets you:
- Show performance metrics by keyword
- Filter search results by broad or exact keyword match
- Show data like search volume, trends, keyword difficulty, and CPC.
- Show the first 100 Google search results for any keyword.
- Identify SERP Features and Questions related to each keyword
- SEMrush has released a new Keyword Gap Tool that uncovers potentially useful keyword opportunities for you, including both paid and organic keywords.
Both of these tools offer keyword research features and allow users to break down complicated tasks into something that can be understood by beginners and advanced users alike.
If you're interested in keyword suggestions, SEMrush appears to have more keyword suggestions than Ahrefs does. It also continues to add new features, like the Keyword Gap tool and SERP Questions recommendations.
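At its core, the keyword-gap idea described above is a set difference: keywords a competitor ranks for that your site does not. The sketch below illustrates only the concept; the keyword lists are invented, and no actual SEMrush or Ahrefs API is shown.

```python
# Minimal sketch of a keyword-gap analysis using Python sets.
# In a real tool, these sets would come from a crawled ranking database.
your_keywords = {"seo tools", "rank tracker", "backlink checker"}
competitor_keywords = {"seo tools", "keyword research", "site audit", "rank tracker"}

# Keywords the competitor ranks for but you don't: the "gap"
gap = competitor_keywords - your_keywords

# Keywords you both rank for: shared battleground terms
shared = competitor_keywords & your_keywords

print(sorted(gap))     # ['keyword research', 'site audit']
print(sorted(shared))  # ['rank tracker', 'seo tools']
```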
Both platforms offer competitor analysis tools, eliminating the need to come up with keywords off the top of your head. Each tool is useful for finding keywords that will be useful for your competition so you know they will be valuable to you.
Ahrefs' domain comparison tool lets you compare up to five websites (your website and four competitors) side by side. It also shows you how your site ranks against the others, with metrics such as backlinks, domain ratings, and more.
Use the Competing Domains section to see a list of your most direct competitors, and explore how many keyword matches your competitors have.
To find more information about your competitor, you can look at the Site Explorer and Content Explorer tools and type in their URL instead of yours.
SEMrush provides a variety of insights into your competitors' marketing tactics. The platform enables you to research your competitors effectively. It also offers several resources for competitor analysis including:
Traffic Analytics helps you identify where your audience comes from, how they engage with your site, what devices visitors use to view your site, and how your audiences overlap with other websites.
SEMrush's Organic Research examines your website's major competitors and shows their organic search rankings, keywords they are ranking for, and even if they are ranking for any (SERP) features and more.
The Market Explorer search field allows you to type in a domain and lists websites or articles similar to what you entered. Market Explorer also allows users to perform in-depth data analytics on these companies and markets.
SEMrush wins here because it has more tools dedicated to competitor analysis than Ahrefs. However, Ahrefs offers a lot of functionality in this area, too. It takes a combination of both tools to gain an advantage over your competition.
Ahrefs pricing:
- Lite Monthly: $99/month
- Standard Monthly: $179/month
- Annual Lite: $990/year
- Annual Standard: $1,790/year
SEMrush pricing:
- Pro Plan: $119.95/month
- Guru Plan: $229.95/month
- Business Plan: $449.95/month
Which SEO tool should you choose for digital marketing?
When it comes to keyword research, it can be hard to decide which one to choose.
Consider Ahrefs if you:
- Like a friendly, clean interface
- Want simple keyword suggestions
- Want keyword data for more search engines, such as Amazon, Bing, Yahoo, Yandex, Baidu, and more
Consider SEMrush if you:
- Want more marketing and SEO features
- Need competitor analysis tools
- Need to keep your backlink profile clean
- Want more keyword suggestions for Google
Both tools are great. Choose the one that meets your requirements, and if you have experience using either Ahrefs or SEMrush, let me know in the comment section which works well for you.
Match ID: 24 Score: 2.14 source: www.crunchhype.com age: 116 days
qualifiers: 2.14 toxic
Eviation’s Maiden Flight Could Usher in Electric Aviation Era
Mon, 07 Feb 2022 19:01:19 +0000
The first commercial all-electric passenger plane is just weeks away from its maiden flight, according to its maker Israeli startup Eviation. If successful, the nine-seater Alice aircraft would be the most compelling demonstration yet of the potential for battery-powered flight. But experts say there’s still a long way to go before electric aircraft makes a significant dent in the aviation industry.
The Alice is currently undergoing high-speed taxi tests at Arlington Municipal Airport close to Seattle, says Eviation CEO Omer Bar-Yohay. This involves subjecting all of the plane’s key systems and fail-safe mechanisms to a variety of different scenarios to ensure they are operating as expected before its first flight. The company is five or six good weather days away from completing those tests, says Bar-Yohay, after which the plane should be cleared for takeoff. Initial flights won’t push the aircraft to its limits, but the Alice should ultimately be capable of cruising speeds of 250 knots (463 kilometers per hour) and a maximum range of 440 nautical miles (815 kilometers).
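For readers working in metric units, the article's performance figures convert directly (1 knot = 1.852 km/h and 1 nautical mile = 1.852 km, by definition):

```python
# Convert the Alice's quoted cruise speed and range to metric units.
cruise_knots = 250
range_nmi = 440

cruise_kmh = cruise_knots * 1.852   # 463 km/h, matching the figure above
range_km = range_nmi * 1.852        # ~815 km, matching the figure above

print(f"{cruise_kmh:.0f} km/h, {range_km:.0f} km")
```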
Electric aviation has received considerable attention in recent years as the industry looks to reduce its carbon emissions. And while the Alice won’t be the first all-electric aircraft to take to the skies, Bar-Yohay says it will be the first designed with practical commercial applications in mind. Eviation plans to offer three configurations—a nine-seater commuter model, a six-seater executive model for private jet customers, and a cargo version with a capacity of 12.74 cubic meters. The company has already received advance orders from logistics giant DHL and Massachusetts-based regional airline Cape Air.
“It’s not some sort of proof-of-concept or demonstrator,” says Bar-Yohay. “It’s the first all-electric with a real-life mission, and I think that’s the big differentiator.”
Getting there has required a major engineering effort, says Bar-Yohay, because the requirements for an all-electric plane are very different from those of conventional aircraft. The biggest challenge is weight, thanks to the fact that batteries provide considerably less mileage to the pound compared to energy-dense jet fuels.
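A back-of-the-envelope sketch shows the scale of that weight problem. The specific-energy figures below are typical published values for jet fuel and pack-level lithium-ion batteries, not Eviation's numbers:

```python
# Compare the mass needed to carry 1 MWh of onboard energy with each carrier.
jet_fuel_wh_per_kg = 12_000   # assumed: chemical energy of Jet A, ~12 kWh/kg
battery_wh_per_kg = 250       # assumed: pack-level Li-ion figure

energy_wh = 1_000_000  # 1 MWh of stored energy

fuel_mass = energy_wh / jet_fuel_wh_per_kg     # ~83 kg of fuel
battery_mass = energy_wh / battery_wh_per_kg   # 4,000 kg of batteries

print(f"fuel: {fuel_mass:.0f} kg, batteries: {battery_mass:.0f} kg")
```

Electric powertrains convert stored energy to thrust far more efficiently than turbines, which claws back part of this roughly 50:1 gap, but the mass penalty remains large.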
That makes slashing the weight of other components a priority, and the plane features lightweight composite materials “where no composite has gone before,” says Bar-Yohay. The company has also done away with the bulky mechanical systems used to adjust control surfaces on the wings, replacing them with a much lighter fly-by-wire system that uses electronic actuators controlled via electrical wires.
The company’s engineers have had to deal with a host of other complications too, from having to optimize the aerodynamics to the unique volume and weight requirements dictated by the batteries to integrating brakes designed for much heavier planes. “There is just so much optimization, so many specific things that had to be solved,” says Bar-Yohay. “In some cases, there are just no components out there that do what you need done, which weren’t built for a train, or something like that.”
Despite the huge amount of work that’s gone into it, Bar-Yohay says the Alice will be comparable in price to similar sized turboprop aircraft like the Beechcraft King Air and cheaper than small business jets like the Embraer Phenom 300. And crucially, he adds, the relative simplicity of electrical motors and actuators compared with mechanical control systems and turboprops or jets means maintenance costs will be markedly lower.
This is a conceptual rendering of Eviation's Alice, the first commercial all-electric passenger plane, in flight.Eviation
Combined with the lower cost of electricity compared to jet fuel, and even accounting for the need to replace batteries every 3,000 flight hours, Eviation expects Alice’s operating costs to be about half those of similar sized aircraft.
But there are question marks over whether the plane has an obvious market, says aviation analyst Richard Aboulafia, managing director at AeroDynamic Advisory. It’s been decades since anyone has built a regional commuter with fewer than 70 seats, he says, and most business jets typically require more than the 440 nautical mile range the Alice offers. Scaling up to bigger aircraft or larger ranges is also largely out of the company’s hands as it will require substantial breakthroughs in battery technology. “You need to move on to a different battery chemistry,” he says. “There isn’t even a 10-year road map to get there.”
An aircraft like the Alice isn’t meant to be a straight swap for today’s short-haul aircraft though, says Lynette Dray, a research fellow at University College London who studies the decarbonization of aviation. More likely it would be used for short intercity hops or for creating entirely new route networks better suited to its capabilities.
This is exactly what Bar-Yohay envisages, with the Alice’s reduced operating costs opening up new short-haul routes that were previously impractical or uneconomical. It could even make it feasible to replace larger jets with several smaller ones, he says, allowing you to provide more granular regional travel by making use of the thousands of runways around the country currently used only for recreational aviation.
The economics are far from certain though, says Dray, and if the ultimate goal is to decarbonize the aviation sector, it’s important to remember that aircraft are long-lived assets. In that respect, sustainable aviation fuels that can be used by existing aircraft are probably a more promising avenue.
Even if the Alice’s maiden flight goes well, it still faces a long path to commercialization, says Kiruba Haran, a professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign. Aviation’s stringent safety requirements mean the company must show it can fly the aircraft for a long period, over and over again without incident, which has yet to be done with an all-electric plane at this scale.
Nonetheless, if the maiden flight goes according to plan it will be a major milestone for electric aviation, says Haran. “It’s exciting, right?” he says. “Anytime we do something more than, or further than, or better than, that’s always good for the industry.”
And while battery-powered electric aircraft may have little chance of disrupting the bulk of commercial aviation in the near-term, Haran says hybrid schemes that use a combination of batteries and conventional fuels (or even hydrogen) to power electric engines could have more immediate impact. The successful deployment of the Alice could go a long way to proving the capabilities of electric propulsion and building momentum behind the technology, says Haran.
“There are still a lot of skeptics out there,” he says. “This kind of flight demo will hopefully help bring those people along.”
Match ID: 25 Score: 2.14 source: spectrum.ieee.org age: 138 days
qualifiers: 2.14 carbon
Spin Me Up, Scotty—Up Into Orbit
Fri, 21 Jan 2022 16:34:49 +0000
At first, the dream of riding a rocket into space was
laughed off the stage by critics who said you’d have to carry along fuel that weighed more than the rocket itself. But the advent of booster rockets and better fuels let the dreamers have the last laugh.
Hah, the critics said: To put a kilogram of payload into orbit we just need 98 kilograms of rocket plus rocket fuel.
What a ratio, what a cost. To transport a kilogram of cargo, commercial air freight services typically charge about US $10; spaceflight costs reach $10,000. Sure, you can save money by reusing the booster, as
Elon Musk and Jeff Bezos are trying to do, but it would be so much better if you could dispense with the booster and shoot the payload straight into space.
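The critics' ratio falls out of the Tsiolkovsky rocket equation, m0/mf = exp(Δv/ve). The figures below are round, assumed values for a kerosene/LOX launcher, not numbers from the article:

```python
import math

# Tsiolkovsky rocket equation: ratio of initial mass to final (burnout) mass.
delta_v = 9_400    # m/s, assumed total delta-v to low Earth orbit incl. losses
v_exhaust = 3_000  # m/s, assumed effective exhaust velocity (Isp ~ 306 s)

mass_ratio = math.exp(delta_v / v_exhaust)  # ~23x, for propellant alone
print(f"initial mass ~ {mass_ratio:.0f}x the mass reaching orbit")
```

That ~23:1 is propellant alone; tanks, engines, and the mass of discarded stages push the real rocket-to-payload ratio toward the figure the critics quoted above.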
The first people to think along these lines used cannon launchers, such as those in
Project HARP (High Altitude Research Project), in the 1960s. Research support dried up after booster rockets showed their mettle. Another idea was to shoot payloads into orbit along a gigantic electrified ramp, called a railgun, but that technology still faces hurdles of a basic scientific nature, not least the need for massive banks of capacitors to provide the jolt of energy.
SpinLaunch, a company founded in 2015 in Long Beach, Calif., proposes a gentler way to heave satellites into orbit. Rather than shoot the satellite in a gun, SpinLaunch would sling it from the end of a carbon-fiber tether that spins around in a vacuum chamber for as long as an hour before reaching terminal speed. The tether lets go milliseconds before gates in the chamber open up to allow the satellite out.
“Because we’re slowly accelerating the system, we can keep the power demands relatively low,”
David Wrenn, vice president for technology, tells IEEE Spectrum. “And as there’s a certain amount of energy stored in the tether itself, you can recapture that through regenerative braking.”
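A rough sketch, with assumed numbers, of why a slow spin-up keeps power demands modest: the kinetic energy for a 200 kg payload at a hypersonic tip speed, spread over an assumed 30-minute acceleration, works out to only a few hundred kilowatts.

```python
# Energy and average power for a slow spin-up (assumed round numbers;
# ignores the tether, counterweight, and aerodynamic/bearing losses).
payload_kg = 200          # matches the planned 100 m system's payload
tip_speed = 2_000         # m/s, assumed hypersonic release speed
spinup_seconds = 30 * 60  # assumed 30-minute spin-up

kinetic_energy = 0.5 * payload_kg * tip_speed**2   # 400 MJ
avg_power_kw = kinetic_energy / spinup_seconds / 1_000

print(f"{kinetic_energy/1e6:.0f} MJ, average {avg_power_kw:.0f} kW")
```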
The company reports it has raised about $100 million. Among the backers are the investment arms of
Airbus and Google and the Defense Innovation Unit, part of the U.S. Department of Defense.
SpinLaunch began with a lab centrifuge that measures about 12 meters in diameter. In November, a 33-meter version at
Space Port America test-launched a payload thousands of meters up. Such a system could loft a small rocket, which would finish the job of reaching orbit. A 100-meter version, now in the planning stage, should be able to handle a 200-kg payload.
Wrenn answers all the obvious questions. How can the tether withstand the
g-force when spinning at hypersonic speed? “A carbon-fiber cable with a cross-sectional area of one square inch (6.5 square centimeters) can suspend a mass of 300,000 pounds (136,000 kg),” he says.
How much preparation do you need between shots? Not much, because the chamber doesn’t have to be superclean. If the customer wants to loft a lot of satellites, a likely desideratum given the trend toward massive constellations of small satellites, the setup could include motors powerful enough to spin up in 30 minutes. “Upwards of 10 launches per day are possible,” Wrenn says.
How tight must the vacuum be? A “rough” vacuum suffices, he says. SpinLaunch maintains the vacuum with a system of airlocks operated by those millisecond-fast gates.
Most parts, including the steel for the vacuum chamber and carbon fiber, are off-the-shelf, but those gates are proprietary. All Wrenn will say is that they’re not made of steel.
So imagine a highly intricate communications satellite, housed in some structure, spinning at many times the speed of sound. The gates open up, the satellite shoots out far faster than the air outside can rush back in. Then the satellite hits the wall of air, creating a sonic boom.
No problem, says Wrenn. Electronic systems have been hurtling from vacuums into air ever since the cannon-launching days of HARP, some 60 years ago. SpinLaunch has done work already on engineering certain satellite components to withstand the ordeal—“deployable solar panels, for example,” he says.
After the online version of this article appeared, several readers objected to the SpinLaunch system, above all to the stress it would put on the liquid-fueled rocket at the end of that carbon-fiber tether.
“The system has to support up to 8,000 gs; most payloads at launch are rated at 6 or 10 gs,” said John Bucknell, a rocket scientist who heads the startup Virtus Solis Technologies, which aims to collect solar energy in space and beam it to earth.
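The 8,000 g figure is straightforward centripetal kinematics, a = v²/r. The radius and tip speed below are assumed from the planned 100-meter-diameter accelerator:

```python
# Centripetal acceleration at the tether tip: a = v^2 / r.
radius = 50        # m, half the planned 100 m diameter
tip_speed = 2_000  # m/s, assumed release speed (roughly Mach 6)

accel = tip_speed**2 / radius  # 80,000 m/s^2
g_load = accel / 9.81          # ~8,150 g, consistent with the figure quoted above

print(f"{g_load:.0f} g")
```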
Keith Lostrom, a chip engineer, went even further. “Drop a brick onto an egg—that is a tiny fraction of the damage that SpinLaunch’s centripetal acceleration would do to a liquid-fuel orbital launch rocket,” he wrote in an emailed message.
Wrenn denies that the g-force is a dealbreaker. For one thing, he argues, the turbopumps in liquid-fuel rockets spin at over 30,000 rotations per minute, subjecting the liquid oxygen and fuel to “much more aggressive conditions than the uniform g-force that SpinLaunch has.”
Besides, he says, finite element analysis and high-g testing in the company’s 12-meter accelerator “has led to confidence it’s not a fundamental issue for us. We’ve already hot-fired our SpinLaunch-compatible upper-stage engine on the test stand.”
SpinLaunch says it will announce the site for its full-scale orbital launcher within the next five months. It will likely be built on a coastline, far from populated areas and regular airplane service. Construction costs would be held down if the machine can be built up the side of a hill. If all goes well, expect to see the first satellite slung into orbit sometime around 2025.
This article was updated on 24 Feb. 2022 to include additional perspectives on the technology.
Match ID: 26 Score: 2.14 source: spectrum.ieee.org age: 155 days
qualifiers: 2.14 carbon
One Year into the Biden Administration, NASA Looks to Future
Wed, 19 Jan 2022 11:33 EST
Over the past year, NASA has made valuable contributions to the Biden-Harris Administration’s goals: leading on the global stage, addressing the urgent issue of climate change, creating high-paying jobs, and inspiring future generations.
Match ID: 27 Score: 2.14 source: www.nasa.gov age: 157 days
qualifiers: 2.14 climate change
12 Exciting Engineering Milestones to Look for in 2022
Thu, 30 Dec 2021 16:00:00 +0000
Psyche’s Deep-Space Lasers
In August, NASA will launch
the Psyche mission, sending a deep-space orbiter to a weird metal asteroid orbiting between Mars and Jupiter. While the probe’s main purpose is to study Psyche’s origins, it will also carry an experiment that could inform the future of deep-space communications. The Deep Space Optical Communications (DSOC) experiment will test whether lasers can transmit signals beyond lunar orbit. Optical signals, such as those used in undersea fiber-optic cables, can carry more data than radio signals can, but their use in space has been hampered by difficulties in aiming the beams accurately over long distances. DSOC will use a 4-watt infrared laser with a wavelength of 1,550 nanometers (the same used in many optical fibers) to send optical signals at multiple distances during Psyche’s outward journey to the asteroid.
The Great Electric Plane Race
For the first time in almost a century, the U.S.-based National Aeronautic Association (NAA)
will host a cross-country aircraft race. Unlike the national air races of the 1920s, however, the Pulitzer Electric Aircraft Race, scheduled for 19 May, will include only electric-propulsion aircraft. Both fixed-wing craft and helicopters are eligible. The competition will be limited to 25 contestants, and each aircraft must have an onboard pilot. The course will start in Omaha and end four days later in Manteo, N.C., near the site of the Wright brothers’ first flight. The NAA has stated that the goal of the cross-country, multiday race is to force competitors to confront logistical problems that still plague electric aircraft, like range, battery charging, reliability, and speed.
6-Gigahertz Wi-Fi Goes Mainstream
Wi-Fi is getting a boost with
1,200 megahertz of new spectrum in the 6-gigahertz band, adding a third spectrum band to the more familiar 2.4 GHz and 5 GHz. The new band is called Wi-Fi 6E because it extends Wi-Fi’s capabilities into the 6-GHz band. As a rule, higher radio frequencies have higher data capacity, but a shorter range. With its higher frequencies, 6-GHz Wi-Fi is expected to find use in heavy traffic environments like offices and public hotspots. The Wi-Fi Alliance introduced a Wi-Fi 6E certification program in January 2021, and the first trickle of 6E routers appeared by the end of the year. In 2022, expect to see a bonanza of Wi-Fi 6E–enabled smartphones.
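The frequency/range trade-off mentioned above can be put in one number: free-space path loss grows as 20·log10(f), so moving from 2.4 GHz to 6 GHz costs the same extra attenuation at any given distance.

```python
import math

def extra_loss_db(f_new_ghz: float, f_old_ghz: float) -> float:
    """Additional free-space path loss from raising the carrier frequency."""
    return 20 * math.log10(f_new_ghz / f_old_ghz)

# Going from 2.4 GHz to 6 GHz adds ~8 dB of path loss at any distance,
# which is why 6 GHz Wi-Fi favors dense, shorter-range deployments.
print(f"{extra_loss_db(6.0, 2.4):.1f} dB")
```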
3-Nanometer Chips Arrive
Taiwan Semiconductor Manufacturing Co. (TSMC) plans to begin producing 3-nanometer semiconductor chips
in the second half of 2022. Right now, 5-nm chips are the standard. TSMC will make its 3-nm chips using a tried-and-true semiconductor structure called the FinFET (short for “fin field-effect transistor”). Meanwhile, Samsung and Intel are moving to a different technique for 3 nm called nanosheet. (TSMC is eventually planning to abandon FinFETs.) At one point, TSMC’s sole 3-nm chip customer for 2022 was Apple, for the latter’s iPhone 14, but supply-chain issues have made it less certain that TSMC will be able to produce enough chips—which promise more design flexibility—to fulfill even that order.
Seoul Joins the Metaverse
After Facebook (now Meta) announced it was hell-bent on making the metaverse real, a host of other tech companies followed suit. Definitions differ, but the basic idea of the metaverse involves merging virtual reality and augmented reality with actual reality. Also jumping on the metaverse bandwagon is the government of the South Korean capital, Seoul, which plans to develop a “metaverse platform” by the end of 2022. To build this first public metaverse, Seoul will invest 3.9 billion won (US $3.3 million). The platform will offer
public services and cultural events, beginning with the Metaverse 120 Center, a virtual-reality portal for citizens to address concerns that previously required a trip to city hall. Other planned projects include virtual exhibition halls for school courses and a digital representation of Deoksu Palace. The city expects the project to be complete by 2026.
IBM’s Condors Take Flight
In 2022, IBM will debut a new quantum processor—its biggest yet—as a stepping-stone to a
1,000-qubit processor by the end of 2023. This year’s iteration will contain 433 qubits, more than three times as many as the company’s 127-qubit Eagle processor, which was launched last year. Following the bird theme, the 433- and 1,000-qubit processors will be named Condor. There have been quantum computers with many more qubits; D-Wave Systems, for example, announced a 5,000-qubit computer in 2020. However, D-Wave’s computers are specialized machines for optimization problems. IBM’s Condors aim to be the largest general-purpose quantum processors.
New Dark-Matter Detector
The Forward Search Experiment (FASER) at CERN is slated to switch on in July 2022. The exact date depends on when the Large Hadron Collider is set to renew proton-proton collisions after three years of upgrades and maintenance. FASER will
begin a hunt for dark matter and other particles that interact extremely weakly with “normal” matter. CERN, the fundamental physics research center near Geneva, has four main detectors attached to its Large Hadron Collider, but they aren’t well-suited to detecting dark matter. FASER won’t attempt to detect the particles directly; instead, it will search for the more strongly interacting Standard Model particles created when dark matter interacts with something else. The new detector was constructed while the collider was shut down from 2018 to 2021. Located 480 meters “downstream” of the ATLAS detector, FASER will also hunt for neutrinos produced in huge quantities by particle collisions in the LHC loop. The other CERN detectors have so far failed to detect such neutrinos.
Pong Turns 50
Atari changed the course of video games when it released its first game, Pong, in 1972. While not the first video game—or even the first to be presented in an upright, arcade-style cabinet—Pong was the first to be commercially successful. The game was developed by engineer Allan Alcorn and originally assigned to him as a test after he was hired, before he began working on actual projects. However, executives at Atari saw potential in Pong’s simple game play and decided to develop it into a real product. Unlike the countless video games that came after it, the original Pong did not use any code or microprocessors. Instead, it was built from a television and transistor-transistor logic.
The Green Hydrogen Boom
Utility company Energias de Portugal (EDP), based in Lisbon, is on track to begin operating a 3-megawatt green hydrogen plant in Brazil by the end of the year. Green hydrogen is hydrogen produced in sustainable ways, using solar or wind-powered electrolyzers to split water molecules into hydrogen and oxygen. According to the International Energy Agency, only
0.1 percent of hydrogen is produced this way. The plant will replace an existing coal-fired plant and generate hydrogen, which can be used in fuel cells, using solar photovoltaics. EDP’s roughly US $7.9 million pilot program is just the tip of the green hydrogen iceberg. Enegix Energy has announced plans for a $5.4 billion green hydrogen plant in the same Brazilian state, Ceará, where the EDP plant is being built. The green hydrogen market is predicted to generate nearly $10 billion in revenue by 2028, according to a November 2021 report by Research Dive.
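For a rough sense of scale, a 3-megawatt electrolyzer running flat out produces on the order of 1.4 tonnes of hydrogen per day. The figure of about 50 kilowatt-hours per kilogram of hydrogen below is an assumed value typical of commercial electrolyzers, not a number from the article, and a solar-fed plant will not run at full power around the clock:

```python
power_mw = 3.0          # EDP's pilot plant rating
hours_per_day = 24      # assumes continuous full-power operation
kwh_per_kg_h2 = 50.0    # assumed electrolyzer energy cost per kg of H2

daily_energy_kwh = power_mw * 1000 * hours_per_day
daily_h2_kg = daily_energy_kwh / kwh_per_kg_h2
print(round(daily_h2_kg))  # ~1440 kg of hydrogen per day at full utilization
```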
A Permanent Space Station for China
China is scheduled
to complete its Tiangong (“Heavenly Palace”) space station in 2022. The station, China’s first long-term space habitat, was preceded by the Tiangong-1 and Tiangong-2 stations, which orbited from 2011 to 2018 and 2016 to 2019, respectively. The new station’s core module, the Tianhe, was launched in April 2021. A further 10 missions by the end of 2022 will deliver other components and modules, with construction to be completed in orbit. The final station will have two laboratory modules in addition to the core module. Tiangong will orbit at roughly the same altitude as the International Space Station but will be only about one-fifth the mass of the ISS.
A Cool Form of Energy Storage
Cryogenic energy-storage company Highview Power
will begin operations at its Carrington plant near Manchester, England, this year. Cryogenic energy storage is a long-term method of storing electricity by cooling air until it liquefies (about –196 °C). Crucially, the air is cooled when electricity is cheaper—at night, for example—and then stored until electricity demand peaks. The liquid air is then allowed to boil back into a gas, which drives a turbine to generate electricity. The 50-megawatt/250-megawatt-hour Carrington plant will be Highview Power’s first commercial plant using its cryogenic storage technology, dubbed CRYOBattery. Highview Power has said it plans to build a similar plant in Vermont, although it has not specified a timeline yet.
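The plant's two ratings pin down its discharge window: a 250-megawatt-hour store drained at the full 50-megawatt rate lasts five hours. A quick back-of-the-envelope check (ignoring round-trip losses, which are significant for liquid-air storage):

```python
capacity_mwh = 250.0  # total energy the Carrington plant can store
power_mw = 50.0       # maximum rate at which it can deliver that energy

hours_at_full_power = capacity_mwh / power_mw
print(hours_at_full_power)  # 5.0 hours of discharge at rated output
```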
A Cryptocurrency for Carbon Removal
Seattle-based startup Nori is set to offer
a cryptocurrency for carbon removal. Nori will mint 500 million tokens of its Ethereum-based currency (called NORI). Individuals and companies can purchase and trade NORI, and eventually exchange any NORI they own for an equal number of carbon credits. Each carbon credit represents a tonne of carbon dioxide that has already been removed from the atmosphere and stored in the ground. When exchanged in this way, a NORI is retired, making it impossible for owners to “double count” carbon credits and claim more offsets than they have actually paid for. The startup has acknowledged that Ethereum and other blockchain-based technologies consume an enormous amount of energy, so emissions from running the blockchain could eat into the carbon the credits remove. However, Ethereum is scheduled to switch in 2022 to a much more energy-efficient method of verifying its blockchain, called proof of stake, which Nori will take advantage of when it launches.
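The retire-on-redeem rule Nori describes can be sketched as a toy ledger (a hypothetical illustration of the accounting mechanism, not Nori's actual smart-contract code): once a token is exchanged for a carbon credit, it is permanently removed from circulation, so the same credit can never back two claims.

```python
class CreditLedger:
    """Toy model of a retire-on-redeem carbon-credit token."""

    def __init__(self, minted):
        self.circulating = minted  # tokens that can still be traded
        self.retired = 0           # tokens burned in exchange for credits

    def redeem(self, tokens):
        """Exchange tokens for carbon credits; the tokens are retired."""
        if tokens > self.circulating:
            raise ValueError("cannot redeem more tokens than are circulating")
        self.circulating -= tokens
        self.retired += tokens
        return tokens  # one credit (1 tonne of CO2 removed) per token

ledger = CreditLedger(minted=500_000_000)
credits = ledger.redeem(1_000)
# The 1,000 redeemed tokens are gone for good: the credits behind them
# cannot be claimed a second time.
```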
Match ID: 28 Score: 2.14 source: spectrum.ieee.org age: 177 days
qualifiers: 2.14 carbon
Filter efficiency 96.128 (29 matches/749 results)