Campaigners say revocation of post-Brexit protections amounts to legislative vandalism
Hundreds of Britain’s environmental laws covering water quality, sewage pollution, clean air, habitat protections and the use of pesticides are lined up for removal from UK law under a government bill.
Environmentalists accused Liz Truss’s government of reneging on a commitment, made after Brexit, to halt the decline of nature by 2030. They say that revoking 570 environmental laws rolled over from EU law after Brexit amounts to a deregulatory free-for-all that leaves the environment unprotected.
Source: www.theguardian.com
Since its launch in April 2021, the Engineering Research Visioning Alliance has convened a diverse set of experts to explore three areas in which fundamental research could have the most impact: climate change; the nexus of biology and engineering; and securing critical infrastructure against hackers.
To identify priorities for each theme, ERVA—an initiative funded by the U.S. National Science Foundation—holds what are termed visioning events, wherein IEEE members and hundreds of other experts from academia, industry, and nonprofits can conceptualize bold ideas. The results are distilled into reports that identify actionable priorities for engineering research pursuit. Reports from recent visioning events are slated to be released to the public in the next few months.
IEEE is one of more than 20 professional engineering societies that have joined ERVA as affiliate partners.
Research energy storage and greenhouse gas capture solutions
Identifying technologies to address the climate crisis was ERVA’s first theme. It was chosen based on a survey of the engineering community that ERVA conducted last year, asking what the research priorities should be.
During the virtual visioning event in December, experts explored solar and renewable energy, carbon sequestration, water management, and geoengineering. The climate change task force released its report last month.
These are some of the research areas ERVA said should be pursued:
Energy storage, transmission, and critical materials. The materials include those that are nanoengineered, ones that could be used for nontraditional energy storage, and those that can extract additional energy from heat cycles.
Greenhouse gas capture and elimination. Research priorities included capturing and eliminating methane and nitrous oxide released in agriculture operations.
Resilient, energy-efficient, and healthful infrastructure. One identified priority was research to develop low-cost coatings for buildings and roads to reduce heat effects and increase self-cooling.
Water, ecosystem, and geoengineering assessments. The report identifies research into sensing, measurement, and AI models that analyze the flow of water, to help ensure its availability during droughts and other disruptive events caused or worsened by climate change.
“The groundwork ERVA has laid out in this report creates a blueprint for funders to invest in,” Grejner-Brzezinska says, “and catalyzes engineering research for a more secure and sustainable world. As agencies and research organizations enact legislation to reduce carbon emissions and bolster clean-energy technologies, engineering is poised to lead with research and development.”
Use biology and engineering to interrupt the transfer of viruses
A virtual visioning event on Leveraging Biology to Power Engineering Impact was held in March. The hope, as explained on the event’s website, is to transform research where biology and engineering intersect: health care and medicine, agriculture, and high tech.
The experts considered research directions in three areas: using biology to inspire engineers to develop new components; adapting and adopting biological constructs beyond their original function; and creating engineering systems and components that improve on biology. An example would be to interrupt the transfer of viruses from one species to another so as to reduce the spread of diseases.
The task force’s report on which research areas to pursue is scheduled to be released next month, according to Grejner-Brzezinska.
Protect infrastructure from hackers
One of today’s main engineering challenges, according to ERVA, is the protection of infrastructure against hackers and other threats. At the in-person visioning event held last month at MIT on the Engineering R&D Solutions for Unhackable Infrastructure theme, researchers discussed gaps in security technologies and looked at how to design trustworthy systems and how to build resilience into interdependent infrastructures.
ERVA describes “unhackable” as the ability to ensure safety, security, and trust in the essential systems and services that society relies on.
The task force examined research themes related to physical infrastructure such as assets and hardware; software and algorithms; and data and communication networks. It also considered new security methods for users, operators, and security administrators to thwart cyberattacks.
Planning has begun for the next visioning event, Sustainable Transportation Networks, to be held virtually on 2 and 3 November. The session is to explore innovative and sustainable transportation modes and the infrastructure networks needed to support them. Some of the areas to be discussed are green construction; longitudinal impact studies; interconnected transportation modes such as rail, marine, and air transport; and transportation equity.
Become an ERVA supporter
ERVA will convene four visioning events each year on broad engineering research themes that have the potential to solve societal challenges, Grejner-Brzezinska says. IEEE members who are experts in the fields can get involved by joining the ERVA Champions, now more than 900 strong. They are among the first to learn about upcoming visioning sessions and about openings to serve on volunteer groups such as thematic task forces, advisory boards, and standing councils. Members can sign up on the ERVA website.
“Becoming a champion is an opportunity to break out of your silos of disciplines and really come together with others in the engineering research community,” Grejner-Brzezinska says. “You can do what engineers do best: solve problems.”
Source: spectrum.ieee.org
Interview: New UN climate chief takes the fight personally (Sat, 24 Sep 2022, www.washingtonpost.com). The United Nations official now in charge of the fight to curb climate change has a personal stake in the battle to reduce emissions.
Boris Johnson was far more eco-conscious than recent Conservative predecessors. But this mini-budget is a reversion to type
The dash for growth by Kwasi Kwarteng means unshackling City bankers and property developers from the taxes and regulations that prevent them from paving over what’s left of Britain’s green and pleasant land.
The humble concrete mixer will be elevated to exalted status. There will be more executive homes built on greenfield sites. More distribution sheds dotted along busy A-roads. And more urban renewal of the kind that involves tearing down buildings in a plume of dust and carbon emissions to replace them with something not much better, at least not in environmental terms.
Source: www.theguardian.com
Philadelphia’s Diatom Archive Is a Way, Way, Wayback Machine (Sat, 24 Sep 2022, www.wired.com). A cache of phytoplankton at the Academy of Natural Sciences of Drexel University is helping researchers reconstruct historical coastlines.
Yeti Holdings Inc. said Friday that Chief Financial Officer Paul Carbone is resigning effective Oct. 28, to pursue a business opportunity that will allow him to be closer to family in Boston. The provider of outdoor products such as coolers and drinkware and backpacks has commenced a search for a replacement. Shares were down 3.5% premarket and have fallen 65% in the year to date, while the S&P 500 has fallen 21%.
Source: www.marketwatch.com
The chancellor, Kwasi Kwarteng, has announced that the effective ban on onshore wind farms is to be lifted, and the poorest households will regain access to insulation and energy efficiency measures.
Polls show that onshore wind is popular, with more than 70% of people supporting it. Jess Ralston, a senior analyst at the Energy and Climate Intelligence Unit, said: “The ban on onshore wind has been a major anomaly in British energy policy given it’s both cheap and popular with the public. So a decision to lift the ban suggests [Kwarteng] has listened to the experts and understands building more British renewables reduces our reliance on costly gas and so brings down bills.”
Source: www.theguardian.com
Climate change risk to coastal castles - English Heritage (Fri, 23 Sep 2022, www.bbc.co.uk). Rising sea levels are threatening ancient castles and forts at an accelerating rate, says English Heritage.
Lawns Are Dumb. But Ripping Them Out May Come With a Catch (Thu, 22 Sep 2022, www.wired.com). Meticulous turf is environmentally terrible. Yet grass does have one charm: It “sweats,” helping cool the local area.
Europe’s Heat Waves Offer a Grim Vision of the Future (Thu, 22 Sep 2022, www.wired.com). Extreme temperatures are the direct result of climate change, which means more intense heat events, wildfires, and droughts to come.
UN chief: 'Tax fossil fuel profits for climate damage' (Tue, 20 Sep 2022, www.bbc.co.uk). Tax fossil fuel companies' profits to pay for the damage done by climate change, says the UN secretary general.
The gadgets around us are constantly learning about our lives. Smartwatches pick up on our vital signs to track our health. Home speakers listen to our conversations to recognize our voices. Smartphones play grammarian, watching what we write in order to fix our idiosyncratic typos. We appreciate these conveniences, but the information we share with our gadgets isn’t always kept between us and our electronic minders. Machine learning can require heavy hardware, so “edge” devices like phones often send raw data to central servers, which then return trained algorithms. Some people would like that training to happen locally. A new AI training method expands the training capabilities of smaller devices, potentially helping to preserve privacy.
The most powerful machine-learning systems use neural networks, complex functions filled with tunable parameters. During training, a network receives an input (such as a set of pixels), generates an output (such as the label “cat”), compares its output with the correct answer, and adjusts its parameters to do better next time. To know how to tune each of those internal knobs, the network needs to remember the effect of each one, but they regularly number in the millions or even billions. That requires a lot of memory. Training a neural network can require hundreds of times the memory called upon when merely using one (also called “inference”). In the latter case, the memory is allowed to forget what each layer of the network did as soon as it passes information to the next layer.
To reduce the memory demanded during the training phase, researchers have employed a few tricks. In one, called paging or offloading, the machine moves activations (the intermediate outputs that each layer produces during the forward pass) from short-term memory to a slower but more abundant type of memory, such as flash or an SD card, then brings them back when needed. In another, called rematerialization, the machine deletes the activations and computes them again later. Previously, memory-reduction systems used one of those two tricks or combined them using “heuristics” that are “suboptimal” and often require a lot of energy, says Shishir Patil, a computer scientist at the University of California, Berkeley, and lead author of the paper describing the innovation. The innovation reported by Patil and his collaborators formalizes the combination of paging and rematerialization.
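The trade-off behind rematerialization can be sketched in a few lines. The toy chain of tanh layers below is purely illustrative (it is not POET’s code or the paper’s setup): the checkpointed version keeps only every fourth activation and recomputes the rest during the backward pass, producing identical gradients while holding fewer values in memory at once.

```python
import math

def grad_store_all(ws, x):
    """Standard backprop: keep every activation of y_i = tanh(w_i * y_{i-1})."""
    acts = [x]
    for w in ws:                          # forward pass, storing everything
        acts.append(math.tanh(w * acts[-1]))
    g = 1.0                               # dL/dy_N, taking L = y_N
    grads = [0.0] * len(ws)
    for i in reversed(range(len(ws))):    # backward pass
        y_out, y_in = acts[i + 1], acts[i]
        local = 1.0 - y_out * y_out       # derivative of tanh
        grads[i] = g * local * y_in       # dL/dw_i
        g = g * local * ws[i]             # dL/dy_{i-1}
    return grads, len(acts)               # gradients + activations stored

def grad_checkpointed(ws, x, seg):
    """Rematerialization: store one checkpoint per `seg` layers and
    recompute the activations inside each segment during backward."""
    ckpts = {0: x}
    y = x
    for i, w in enumerate(ws, 1):         # forward: sparse checkpoints only
        y = math.tanh(w * y)
        if i % seg == 0:
            ckpts[i] = y
    g = 1.0
    grads = [0.0] * len(ws)
    for start in reversed(range(0, len(ws), seg)):
        seg_ws = ws[start:start + seg]
        seg_acts = [ckpts[start]]         # recompute this segment's activations
        for w in seg_ws:
            seg_acts.append(math.tanh(w * seg_acts[-1]))
        for j in reversed(range(len(seg_ws))):
            y_out, y_in = seg_acts[j + 1], seg_acts[j]
            local = 1.0 - y_out * y_out
            grads[start + j] = g * local * y_in
            g = g * local * seg_ws[j]
    peak = len(ckpts) + seg + 1           # rough peak: checkpoints + one segment
    return grads, peak

ws = [0.9, -1.1, 0.7, 1.3, -0.5, 0.8, 1.0, -0.6]
g_full, mem_full = grad_store_all(ws, 0.5)
g_ckpt, mem_ckpt = grad_checkpointed(ws, 0.5, seg=4)
assert all(abs(a - b) < 1e-12 for a, b in zip(g_full, g_ckpt))
assert mem_ckpt < mem_full                # same gradients, less stored at once
```

On long chains the savings grow: storing only every k-th activation cuts peak activation memory roughly by a factor of k, at the cost of one extra forward pass per segment.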
“Taking these two techniques, combining them well into this optimization problem, and then solving it—that’s really nice,” says Jiasi Chen, a computer scientist at the University of California, Riverside, who works on edge computing but was not involved in the work.
In July, Patil presented his system, called POET (private optimal energy training), at the International Conference on Machine Learning, in Baltimore. He first gives POET a device’s technical details and information about the architecture of a neural network he wants it to train. He specifies a memory budget and a time budget. He then asks it to create a training process that minimizes energy usage. The process might decide to page certain activations that would be inefficient to recompute but rematerialize others that are simple to redo but require a lot of memory to store.
One of the keys to the breakthrough was to define the problem as a mixed integer linear programming (MILP) puzzle, a set of constraints and relationships between variables. For each device and network architecture, POET plugs its variables into Patil’s hand-crafted MILP program, then finds the optimal solution. “A main challenge is actually formulating that problem in a nice way so that you can input it into a solver,” Chen says. “So, you capture all of the realistic system dynamics, like energy, latency, and memory.”
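To give a flavor of what “formulating the problem for a solver” means, here is a deliberately tiny stand-in, not POET’s actual formulation: each layer’s activation gets one of three treatments, each with made-up memory and energy costs, and we minimize total energy under a memory budget. POET encodes decisions like these as an MILP and hands it to a solver; for a toy instance, exhaustive search over the same decision variables suffices.

```python
from itertools import product

# Per-activation options with invented costs: (memory in KB, energy in mJ).
COSTS = {"KEEP":  (8.0, 0.0),   # stays in RAM, no extra work
         "PAGE":  (0.5, 2.0),   # tiny RAM footprint, pays I/O energy
         "REMAT": (0.0, 3.5)}   # no RAM footprint, pays recompute energy

def plan(n_layers, mem_budget_kb):
    """Brute-force the per-layer choices; return (energy, assignment)."""
    best = None
    for choice in product(COSTS, repeat=n_layers):
        mem = sum(COSTS[c][0] for c in choice)
        energy = sum(COSTS[c][1] for c in choice)
        if mem <= mem_budget_kb and (best is None or energy < best[0]):
            best = (energy, choice)
    return best

energy, choice = plan(n_layers=6, mem_budget_kb=20.0)
# Under a 20 KB budget, only two activations fit in RAM; the cheapest plan
# keeps those and pages the rest rather than recomputing them.
```

A real MILP solver handles millions of such binary variables plus timing and dependency constraints; the brute-force loop here only illustrates the shape of the objective and the budget constraint.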
The team tested POET on four different processors, whose RAM ranged from 32 KB to 8 GB. On each, the researchers trained three different neural network architectures: two types popular in image recognition (VGG16 and ResNet-18), plus a popular language-processing network (BERT). In many of the tests, the system could reduce memory usage by about 80 percent, without a big bump in energy use. Comparable methods couldn’t do both at the same time. According to Patil, the study showed that BERT can now be trained on the smallest devices, which was previously impossible.
“When we started off, POET was mostly a cute idea,” Patil says. Now, several companies have reached out about using it, and at least one large company has tried it in its smart speaker. One thing they like, Patil says, is that POET doesn’t reduce network precision by “quantizing,” or abbreviating, activations to save memory. So the teams that design networks don’t have to coordinate with teams that implement them in order to negotiate trade-offs between precision and memory.
Patil notes other reasons to use POET besides privacy concerns. Some devices need to train networks locally because they have low or no Internet connection. These include devices used on farms, in submarines, or in space. Other setups can benefit from the innovation because data transmission requires too much energy. POET could also make large devices—Internet servers—more memory efficient and energy efficient. But as for keeping data private, Patil says, “I guess this is very timely, right?”
Source: spectrum.ieee.org
Every day, satellites circling overhead capture trillions of pixels of high-resolution imagery of the surface below. In the past, this kind of information was mostly reserved for specialists in government or the military. But these days, almost anyone can use it.
That’s because the cost of sending payloads, including imaging satellites, into orbit has dropped drastically. High-resolution satellite images, which used to cost tens of thousands of dollars, now can be had for the price of a cup of coffee.
What’s more, with the recent advances in artificial intelligence, companies can more easily extract the information they need from huge digital data sets, including ones composed of satellite images. Using such images to make business decisions on the fly might seem like science fiction, but it is already happening within some industries.
These underwater sand dunes adorn the seafloor between Andros Island and the Exuma islands in the Bahamas. The turquoise to the right reflects a shallow carbonate bank, while the dark blue to the left marks the edge of a local deep called Tongue of the Ocean. This image was captured in April 2020 using the Moderate Resolution Imaging Spectroradiometer on NASA’s Terra satellite.
Joshua Stevens/NASA Earth Observatory
Here’s a brief overview of how you, too, can access this kind of information and use it to your advantage. But before you’ll be able to do that effectively, you need to learn a little about how modern satellite imagery works.
The orbits of Earth-observation satellites generally fall into one of two categories: GEO and LEO. The former is shorthand for geosynchronous equatorial orbit. GEO satellites are positioned roughly 36,000 kilometers above the equator, where they circle in sync with Earth’s rotation. Viewed from the ground, these satellites appear to be stationary, in the sense that their bearing and elevation remain constant. That’s why GEO is said to be a geostationary orbit.
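The “roughly 36,000 kilometers” figure falls straight out of Kepler’s third law: a geostationary satellite’s orbital period must equal one sidereal day. A quick back-of-the-envelope check:

```python
import math

MU = 398600.4418          # km^3/s^2, Earth's gravitational parameter
T_SIDEREAL = 86164.0905   # s, one sidereal day (Earth's rotation period)
R_EARTH = 6378.137        # km, Earth's equatorial radius

# Kepler's third law: a = (mu * T^2 / (4 * pi^2))^(1/3)
a = (MU * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude = a - R_EARTH    # height above the equator

# a comes out near 42,164 km, so the altitude is about 35,786 km,
# matching the "roughly 36,000 km" quoted above.
```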
Such orbits are, of course, great for communications relays—it’s what allows people to mount satellite-TV dishes on their houses in a fixed orientation. But GEO satellites are also appropriate when you want to monitor some region of Earth by capturing images over time. Because the satellites are so high up, the resolution of that imagery is quite coarse, however. So these orbits are primarily used for observation satellites designed to track changing weather conditions over broad areas.
Being stationary with respect to Earth means that GEO satellites are always within range of a downlink station, so they can send data back to Earth in minutes. This allows them to alert people to changes in weather patterns almost in real time. Most of this kind of data is made available for free by the U.S. National Oceanographic and Atmospheric Administration.
In March 2021, the container ship Ever Given ran aground, blocking the Suez Canal for six days. This satellite image of the scene, obtained using synthetic-aperture radar, shows the kind of resolution that is possible with this technology.
The other option is LEO, which stands for low Earth orbit. Satellites placed in LEO are much closer to the ground, which allows them to obtain higher-resolution images. And the lower you can go, the better the resolution you can get. The company Planet, for example, increased the resolution of its recently completed satellite constellation, SkySat, from 72 centimeters per pixel to just 50 cm—an incredible feat—by lowering the orbits its satellites follow from 500 to 450 km and improving the image processing.
The best commercially available spatial resolution for optical imagery is 25 cm, which means that one pixel represents a 25-by-25-cm area on the ground—roughly the size of your laptop. A handful of companies capture data with 25-cm to 1-meter resolution, which is considered high to very high resolution in this industry. Some of these companies also offer data from 1- to 5-meter resolution, considered medium to high resolution. Finally, several government programs have made optical data available at 10-, 15-, 30-, and 250-meter resolutions for free with open data programs. These include NASA/U.S. Geological Survey Landsat, NASA MODIS (Moderate Resolution Imaging Spectroradiometer), and ESA Copernicus. This imagery is considered low resolution.
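To translate these resolution figures into data volumes, here is a rough sizing sketch. The 5-kilometer scene, three spectral bands, and one byte per band are illustrative assumptions, not any provider’s product specification.

```python
def scene_pixels(scene_km, resolution_m):
    """Pixels in a square scene of the given side length and resolution."""
    side = int(scene_km * 1000 / resolution_m)   # pixels per side
    return side * side

# Compare very high (25 cm), high (1 m), and open-data (10 m) resolutions
# over the same assumed 5 km x 5 km scene, 3 bands at 1 byte each.
for res_m in (0.25, 1.0, 10.0):
    px = scene_pixels(scene_km=5.0, resolution_m=res_m)
    raw_mb = px * 3 * 1 / 1e6
    print(f"{res_m:>5} m/pixel -> {px:>11,} pixels, ~{raw_mb:,.0f} MB raw")
```

The jump is quadratic: going from 10-meter to 25-centimeter resolution multiplies the pixel count, and hence the raw data volume, by 1,600.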
Because the satellites that provide the highest-resolution images are in the lowest orbits, they sense less area at once. To cover the entire planet, a satellite can be placed in a polar orbit, which takes it from pole to pole. As it travels, Earth rotates under it, so on its next pass, it will be above a different part of Earth.
Many of these satellites don’t pass directly over the poles, though. Instead, they are placed in a near-polar orbit that has been specially designed to take advantage of a subtle bit of physics. You see, the spinning Earth bulges outward slightly at the equator. That extra mass causes the orbits of satellites that are not in polar orbits to shift or (technically speaking) to precess. Satellite operators often take advantage of this phenomenon to put a satellite in what’s called a sun-synchronous orbit. Such orbits allow the repeated passes of the satellite over a given spot to take place at the same time of day. Not having the pattern of shadows shift between passes helps the people using these images to detect changes.
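The required tilt can be computed from the standard first-order (J2) nodal-precession formula for a circular orbit. The constants below are published values for Earth; the result, roughly 98 degrees at typical imaging altitudes, is why these orbits are “near-polar” rather than exactly polar.

```python
import math

MU = 398600.4418   # km^3/s^2, Earth's gravitational parameter
RE = 6378.137      # km, Earth's equatorial radius
J2 = 1.08263e-3    # Earth's oblateness coefficient (the equatorial bulge)

def sun_sync_inclination(altitude_km):
    """Inclination whose J2 precession tracks the sun: one turn per year.

    Circular orbit: dOmega/dt = -(3/2) * J2 * sqrt(mu) * Re^2 * cos(i) / a^3.5
    """
    a = RE + altitude_km
    omega_req = 2 * math.pi / (365.2422 * 86400)   # rad/s, 360 deg per year
    cos_i = -omega_req * a**3.5 / (1.5 * J2 * math.sqrt(MU) * RE**2)
    return math.degrees(math.acos(cos_i))

# At a typical 700 km imaging altitude this comes out near 98 degrees:
# slightly retrograde, and close enough to polar to cover the whole globe.
```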
It usually takes 24 hours for a satellite in polar orbit to survey the entire surface of Earth. To image the whole world more frequently, satellite companies use multiple satellites, all equipped with the same sensor and following different orbits. In this way, these companies can provide more frequently updated images of a given location. For example, Maxar’s Worldview Legion constellation, launching later this year, includes six satellites.
After a satellite captures some number of images, all that data needs to be sent down to Earth and processed. The time required for that varies.
DigitalGlobe (which Maxar acquired in 2017) recently announced that it had managed to send data from a satellite down to a ground station and then store it in the cloud in less than a minute. That was possible because the image sent back was of the parking lot of the ground station, so the satellite didn’t have to travel between the collection point and where it had to be to do the data “dumping,” as this process is called.
In general, Earth-observation satellites in LEO don’t capture imagery all the time—they do that only when they are above an area of special interest. That’s because these satellites are limited to how much data they can send at one time. Typically, they can transmit data for only 10 minutes or so before they get out of range of a ground station. And they cannot record more data than they’ll have time to dump.
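A back-of-the-envelope link budget shows why the recorder fills up. The 300 Mbit/s rate below is an illustrative assumption, not a specific satellite’s downlink specification.

```python
def pass_capacity_gb(link_mbps, pass_minutes):
    """Data moved in one ground-station pass: Mbit/s * seconds -> gigabytes."""
    return link_mbps * pass_minutes * 60 / 8 / 1000

# An assumed ~10-minute pass at an assumed 300 Mbit/s moves about 22.5 GB,
# so the onboard recorder must not fill faster than passes can drain it.
cap = pass_capacity_gb(link_mbps=300, pass_minutes=10)
```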
Currently, ground stations are located mostly near the poles, the regions that satellites in polar orbits pass over most often. But we can soon expect distances to the nearest ground station to shorten, because both Amazon and Microsoft have announced intentions to build large networks of ground stations located all over the world. As it turns out, hosting the terabytes of satellite data collected daily is big business for these companies, which sell their cloud services (Amazon Web Services and Microsoft’s Azure) to satellite operators.
For now, if you are looking for imagery of an area far from a ground station, expect a significant delay—maybe hours—between capture and transmission of the data. The data will then have to be processed, which adds yet more time. The fastest providers currently make their data available within 48 hours of capture, but not all can manage that. While it is possible, under ideal weather conditions, for a commercial entity to request a new capture and get the data it needs delivered the same week, such quick turnaround times are still considered cutting edge.
I’ve been using the word “imagery,” but it’s important to note that satellites do not capture images the same way ordinary cameras do. The optical sensors in satellites are calibrated to measure reflectance over specific bands of the electromagnetic spectrum. This could mean they record how much red, green, and blue light is reflected from different parts of the ground. The satellite operator will then apply a variety of adjustments to correct colors, combine adjacent images, and account for parallax, forming what’s called a true-color composite image, which looks pretty much like what you would expect to get from a good camera floating high in the sky and pointed directly down.
Imaging satellites can also capture data outside of the visible-light spectrum. The near-infrared band is widely used in agriculture, for example, because these images help farmers gauge the health of their crops. This band can also be used to detect soil moisture and a variety of other ground features that would otherwise be hard to determine.
Longer-wavelength “thermal” IR does a good job of penetrating smoke and picking up heat sources, making it useful for wildfire monitoring. And synthetic-aperture radar satellites, which I discuss in greater detail below, are becoming more common because the images they produce aren’t affected by clouds and don’t require the sun for illumination.
You might wonder whether aerial imagery, say, from a drone, wouldn’t work at least as well as satellite data. Sometimes it can. But for many situations, using satellites is the better strategy. Satellites can capture imagery over areas that would be difficult to access otherwise because of their remoteness, for example. Or there could be other sorts of accessibility issues: The area of interest could be in a conflict zone, on private land, or in another place that planes or drones cannot overfly.
So with satellites, organizations can easily monitor the changes taking place at various far-flung locations. Satellite imagery allows pipeline operators, for instance, to quickly identify incursions into their right-of-way zones. The company can then take steps to prevent a disastrous incident, such as someone puncturing a gas pipeline while construction is taking place nearby.
This SkySat image shows the effect of a devastating landslide that took place on 30 December 2020. Debris from that landslide destroyed buildings and killed 10 people in the Norwegian village of Ask.
The ability to compare archived imagery with recently acquired data has helped a variety of industries. For example, insurance companies sometimes use satellite data to detect fraudulent claims (“Looks like your house had a damaged roof when you bought it…”). And financial-investment firms use satellite imagery to evaluate such things as retailers’ future profits based on parking-lot fullness or to predict crop prices before farmers report their yields for the season.
Despite these many successes, investigative reporters and nongovernmental organizations aren’t yet using satellite data regularly, perhaps because even the small cost of the imagery is a deterrent. Thankfully, some kinds of low-resolution satellite data can be had for free.
The first places to look for free satellite imagery are the Copernicus Open Access Hub and EarthExplorer. Both offer free access to a wide range of open data. The imagery is lower resolution than what you can purchase, but if the limited resolution meets your needs, why spend money?
If you require medium- or high-resolution data, you might be able to buy it directly from the relevant satellite operator. This field recently went through a period of mergers and acquisitions, leaving only a handful of providers, the big three in the West being
Maxar and Planet in the United States and Airbus in Germany. There are also a few large Asian providers, such as SI Imaging Services in South Korea and Twenty First Century Aerospace Technology in Singapore. Most providers have a commercial branch, but they primarily target government buyers. And they often require large minimum purchases, which is unhelpful to companies looking to monitor hundreds of locations or fewer.
Fortunately, approaching a satellite operator isn’t the only option. In the past five years, a cottage industry of consultants and local resellers with exclusive deals to service a certain market has sprung up. Aggregators and resellers spend years negotiating contracts with multiple providers so they can offer customers access to data sets at more attractive prices, sometimes for as little as a few dollars per image. Some companies providing geographic information systems—including
Esri, L3Harris, and Safe Software—have also negotiated reselling agreements with satellite-image providers.
Traditional resellers are middlemen who will connect you with a salesperson to discuss your needs, obtain quotes from providers on your behalf, and negotiate pricing and priority schedules for image capture and sometimes also for the processing of the data. This is the case for
Apollo Mapping, European Space Imaging, Geocento, LandInfo, Satellite Imaging Corp., and many more. The more innovative resellers will give you access to digital platforms where you can check whether an image you need is available from a certain archive and then order it. Examples include LandViewer from EOS and Image Hunter from Apollo Mapping.
More recently, a new crop of aggregators began offering customers the ability to programmatically access Earth-observation data sets. These companies work best for people looking to integrate such data into their own applications or workflows. These include the company I work for,
SkyWatch, which provides such a service, called EarthCache. Other examples are UP42 from Airbus and Sentinel Hub from Sinergise.
While you will still need to talk with a sales rep to activate your account (most often to verify that you will use the data in ways that fit the company’s terms of service and licensing agreements), once you’ve been granted access to their applications, you will be able to programmatically order archive data from one or multiple providers. SkyWatch is, however, the only aggregator allowing users to programmatically request future data to be collected (“tasking a satellite”).
While satellite imagery is fantastically abundant and easy to access today, two changes are afoot that will expand further what you can do with satellite data: faster revisits and greater use of synthetic-aperture radar (SAR).
Satellite images have helped to reveal China’s treatment of its Muslim Uyghur minority. About a million Uyghurs (and other ethnic minorities) have been interned in prisons or camps like the one shown here [top], which lies to the east of the city of Ürümqi, the capital of China’s Xinjiang Uyghur Autonomous Region. Another satellite image [bottom] shows the characteristic oval shape of a fixed-chimney Bull’s trench kiln, a type widely used for manufacturing bricks in southern Asia. This one is located in Pakistan’s Punjab province. This design poses environmental concerns because of the sooty air pollution it generates, and such kilns have also been associated with human-rights abuses.Top: CNES/Airbus/Google Earth; Bottom: Maxar Technologies/Google Earth
The first of these developments is not surprising. As more Earth-observation satellites are put into orbit, more images will be taken, more often. So how frequently a given area is imaged by a satellite will increase. Right now, that’s typically two or three times a week. Expect the revisit rate soon to become several times a day. This won’t entirely address the challenge of clouds obscuring what you want to view, but it will help.
The second development is more subtle. Data from the two satellites of the European Space Agency’s
Sentinel-1 SAR mission, available at no cost, has enabled companies to dabble in SAR over the last few years.
With SAR, the satellite beams radio waves down and measures the return signals bouncing off the surface. It does that continually, and clever processing is used to turn that data into images. The use of radio allows these satellites to see through clouds and to collect measurements day and night. Depending on the radar band that’s employed, SAR imagery can be used to judge material properties, moisture content, precise movements, and elevation.
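A first step in most SAR workflows is converting the linear backscatter coefficient (sigma-nought) into decibels, which compresses its huge dynamic range into something viewable. A minimal sketch—the example values are common rules of thumb, not figures from this article:

```python
import math

def sigma0_to_db(sigma0_linear):
    # Convert linear radar backscatter to decibels.
    # Smooth water scatters little energy back toward the satellite
    # (around -20 dB or lower), while buildings and other
    # corner-reflector-like targets sit near 0 dB.
    return 10.0 * math.log10(sigma0_linear)

print(sigma0_to_db(0.01))  # -20.0 dB, typical of calm water
print(sigma0_to_db(1.0))   # 0.0 dB, a very strong reflector
```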
As more companies get familiar with such data sets, there will no doubt be a growing demand for satellite SAR imagery, which has been widely used by the military since the 1970s but is only now starting to appear in commercial products. Expect those offerings to grow dramatically.
Indeed, a large portion of the money being invested in this industry is currently going to fund large SAR constellations, including those of
Capella Space, Iceye, Synspective, XpressSAR, and others. The market is going to get crowded fast, which is great news for customers. It means they will be able to obtain high-resolution SAR images of the place they’re interested in, taken every hour (or less), day or night, cloudy or clear.
People will no doubt figure out wonderful new ways to employ this information, so the more folks who have access to it, the better. This is something my colleagues at SkyWatch and I deeply believe, and it’s why we’ve made it our mission to help democratize access to satellite imagery.
One day in the not-so-distant future, Earth-observation satellite data might become as ubiquitous as GPS, another satellite technology first used only by the military. Imagine, for example, being able to take out your phone and say something like, “Show me this morning’s soil-moisture map for Grover’s Corners High; I want to see whether the baseball fields are still soggy.”
This article appears in the March 2022 print issue as “A Boom with a View.”
Editor's note: The original version of this article incorrectly stated that Maxar's Worldview Legion constellation launched last year.
Match ID: 15 Score: 7.86 source: spectrum.ieee.org age: 217 days qualifiers: 5.71 air pollution, 2.14 carbon
As climate change edges from crisis to emergency, the aviation sector looks set to miss its 2050 goal of net-zero emissions. In the five years preceding the pandemic, the top four U.S. airlines—American, Delta, Southwest, and United—saw a 15 percent increase in the use of jet fuel. Despite continual improvements in engine efficiencies, that number is projected to keep rising.
A glimmer of hope, however, comes from solar fuels. For the first time, scientists and engineers at the Swiss Federal Institute of Technology (ETH) in Zurich have reported a successful demonstration of an integrated fuel-production plant for solar kerosene. Using concentrated solar energy, they were able to produce kerosene from water vapor and carbon dioxide directly from air. Fuel thus produced is a drop-in alternative to fossil-derived fuels and can be used with existing storage and distribution infrastructures, and engines.
Fuels derived from synthesis gas (or syngas)—an intermediate product that is a specific mixture of carbon monoxide and hydrogen—are a known alternative to conventional, fossil-derived fuels. Such fuels are produced via Fischer-Tropsch (FT) synthesis, in which catalytic reactions convert the carbon monoxide and hydrogen into liquid hydrocarbons. The team of researchers at ETH found that a solar-driven thermochemical method to split water and carbon dioxide using a metal oxide redox cycle can produce renewable syngas. They demonstrated the process in a rooftop solar refinery at the ETH Machine Laboratory in 2019.
Reticulated porous structure made of ceria used in the solar reactor to thermochemically split CO2 and H2O and produce syngas, a specific mixture of H2 and CO. ETH Zurich
The current pilot-scale solar tower plant was set up at the IMDEA Energy Institute in Spain. It scales up the solar reactor of the 2019 experiment by a factor of 10, says Aldo Steinfeld, an engineering professor at ETH who led the study. The fuel plant brings together three subsystems—the solar tower concentrating facility, solar reactor, and gas-to-liquid unit.
First, a heliostat field made of mirrors that rotate to follow the sun concentrates solar irradiation onto a reactor mounted on top of the tower. The reactor is a cavity receiver lined with reticulated porous ceramic structures made of ceria (or cerium(IV) oxide). Within the reactor, the concentrated sunlight creates a high-temperature environment of about 1,500 °C, which is hot enough to split captured carbon dioxide and water from the atmosphere to produce syngas. Finally, the syngas is processed to kerosene in the gas-to-liquid unit. A centralized control room operates the whole system.
Fuel produced using this method closes the fuel carbon cycle as it only produces as much carbon dioxide as has gone into its manufacture. “The present pilot fuel plant is still a demonstration facility for research purposes,” says Steinfeld, “but it is a fully integrated plant and uses a solar-tower configuration at a scale that is relevant for industrial implementation.”
“The solar reactor produced syngas with selectivity, purity, and quality suitable for FT synthesis,” the authors noted in their paper. They also reported good material stability for multiple consecutive cycles. They observed a value of 4.1 percent solar-to-syngas energy efficiency, which Steinfeld says is a record value for thermochemical fuel production, even though better efficiencies are required to make the technology economically competitive.
A heliostat field concentrates solar radiation onto a solar reactor mounted on top of the solar tower. The solar reactor cosplits water and carbon dioxide and produces a mixture of molecular hydrogen and carbon monoxide, which in turn is processed to drop-in fuels such as kerosene. ETH Zurich
“The measured value of energy conversion efficiency was obtained without any implementation of heat recovery,” he says. The heat rejected during the redox cycle of the reactor accounted for more than 50 percent of the solar-energy input. “This fraction can be partially recovered via thermocline heat storage. Thermodynamic analyses indicate that sensible heat recovery could potentially boost the energy efficiency to values exceeding 20 percent.”
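The efficiency figure is simply the chemical energy stored in the syngas divided by the solar energy delivered to the reactor. The toy model below is a simplification of the paper’s thermodynamic analysis; it only illustrates why recovering rejected heat raises the effective efficiency.

```python
def solar_to_syngas_efficiency(syngas_energy_mj, solar_input_mj):
    # Heating value of the syngas produced divided by solar energy input.
    return syngas_energy_mj / solar_input_mj

def efficiency_with_heat_recovery(base_eff, rejected_frac, recovered_frac):
    # Crude model: recovered reject heat offsets part of the solar input.
    # The >20 percent figure cited in the article comes from a fuller
    # thermodynamic analysis, not from this arithmetic.
    return base_eff / (1.0 - rejected_frac * recovered_frac)

base = 0.041  # reported solar-to-syngas efficiency
# Half the solar input is rejected as heat; suppose half of that is recovered:
print(efficiency_with_heat_recovery(base, 0.5, 0.5))  # ~0.055
```

Even this crude accounting shows how sensible heat recovery pushes the effective efficiency well above the raw 4.1 percent.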
To do so, more work is needed to optimize the ceramic structures lining the reactor, something the ETH team is actively working on, by looking at 3D-printed structures for improved volumetric radiative absorption. “In addition, alternative material compositions, that is, perovskites or aluminates, may yield improved redox capacity, and consequently higher specific fuel output per mass of redox material,” Steinfeld adds.
The next challenge for the researchers, he says, is the scale-up of their technology for higher solar-radiative power inputs, possibly using an array of solar cavity-receiver modules on top of the solar tower.
To bring solar kerosene into the market, Steinfeld envisages a quota-based system. “Airlines and airports would be required to have a minimum share of sustainable aviation fuels in the total volume of jet fuel that they put in their aircraft,” he says. This is possible as solar kerosene can be mixed with fossil-based kerosene. This would start out small, as little as 1 or 2 percent, which would raise the total fuel costs at first, though minimally—adding “only a few euros to the cost of a typical flight,” as Steinfeld puts it.
Meanwhile, rising quotas would lead to investment, and to falling costs, eventually replacing fossil-derived kerosene with solar kerosene. “By the time solar jet fuel reaches 10 to 15 percent of the total jet-fuel volume, we ought to see the costs for solar kerosene nearing those of fossil-derived kerosene,” he adds.
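The per-ticket arithmetic behind a small quota is straightforward. The fuel burn, passenger count, and prices below are assumptions for illustration, not figures from the study:

```python
def per_ticket_premium(fuel_burn_l, blend_frac, solar_price_eur_l,
                       fossil_price_eur_l, passengers):
    # Extra cost per passenger from blending a fraction of pricier
    # solar kerosene into the total fuel load.
    extra = fuel_burn_l * blend_frac * (solar_price_eur_l - fossil_price_eur_l)
    return extra / passengers

# Short-haul flight: assume ~5,000 L burned, 150 passengers, a 1 percent
# blend, and solar kerosene at 3x an assumed fossil price of EUR 0.80/L.
print(round(per_ticket_premium(5000, 0.01, 2.40, 0.80, 150), 2))  # 0.53
```

Under these assumed numbers a 1 percent blend adds well under a euro per ticket, consistent with the “few euros” framing for early, low quotas.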
However, we may not have to wait too long for flights to operate solely on solar fuel. A commercial spin-off of Steinfeld’s laboratory, Synhelion, is working on commissioning the first industrial-scale solar fuel plant in 2023. The company has also collaborated with the airline SWISS to conduct a flight solely using its solar kerosene.
Match ID: 16 Score: 4.29 source: spectrum.ieee.org age: 52 days qualifiers: 2.14 climate change, 2.14 carbon
The hot, humid environment of tropical marine areas such as Australia’s Great Barrier Reef can wreak havoc on marine autonomous systems (MAS). Underwater and surface MAS are used for marine monitoring, locating objects such as mines on the seafloor, and rescuing swimmers.
“Tropical conditions can cause systems to overheat or prevent high-density lithium batteries from recharging,” says Melanie Olsen, project director of ReefWorks, the Australian Institute of Marine Science’s (AIMS) technology testing and evaluation facility in northern Australia. “And the microbial and small creatures that thrive in these tropical environments grow rapidly on underwater surfaces and degrade the sensor performance and the hydrodynamics of the robotics and autonomous systems.”
Developing technology that can stand up to these conditions is part of Olsen’s job, as is supporting ReefWorks’ broader mission of helping others move their autonomous systems out of the lab. It’s essential to test these systems and collect compliance evidence to demonstrate they meet regulatory requirements and can be certified for operations, says Olsen, an IEEE senior member. But there are very few places to test marine robotics, autonomous systems, and artificial-intelligence (RAS-AI) technologies, which hampers the growth of the industry, Olsen says. “It’s difficult for RAS-AI vendors to progress from a prototype to a commercial product because the pathway to a certified system is complex.”
That’s why AIMS established ReefWorks. The facility is used to test crewed and uncrewed tropical and marine vessels as well as robots, sensors, and other innovations. “We are Australia’s—and possibly the world’s—first such testing facility in the tropics,” Olsen says. Examples of underwater and surface MAS include the ReefScan CoralAUV, which is used for marine monitoring, and the Wave Adaptive Modular Vessel, a surface vessel used for marine monitoring, locating mines and other objects on the seafloor, and rescuing swimmers.
AIMS has been testing equipment for over a decade, but this part of AIMS’s facilities opened to the public in December 2021. ReefWorks supports the entire development cycle, from digital-model validation and developmental testing to product and operational-level testing, Olsen says. Physical tests can be done at AIMS’s three marine field ranges, which offer different testing conditions. ReefWorks also has land-based facilities, plus the National Sea Simulator sensor test tank, and drone corridors between the at-sea ranges for verifying the performance of long-range marine autonomous systems.
“Our overall objective is to establish a sustainable marine autonomous systems [MAS] sector in Australia,” she says.
One of the ways ReefWorks helps its users make the most of their time on test ranges is to offer “digital twins” and virtual worlds. A digital twin is a virtual model of a real-world object, machine, or system that can be used to assess how the real-world counterpart is performing.
“Each of our test ranges is developing a digital twin,” Olsen says. “Developers will be able to conduct a test mission on the virtual range so when they get here, they can replay missions with real-time collected data, and validate their MAS digital-model performance.”
Olsen leads a team of five people and is currently recruiting another five. She expects the staff to triple in size in a few years as ReefWorks becomes more established in the region.
Before joining AIMS, Olsen spent a decade in Australia’s Department of Defence (DOD) as a lead engineer working on future technologies and maritime electronic-warfare systems.
Olsen grew up in a farming family and wasn’t really exposed to computers or engineers until an EE lecturer from James Cook University, in Australia, came to her rural high school to give a presentation. He brought along a remote-controlled quadrotor helicopter—a decade before quadcopters were commonplace.
The lecture led Olsen to pursue a bachelor’s degree in electrical, electronics, and computer systems, also from James Cook University, in Townsville. She went on to earn a master’s degree in systems engineering from Australia’s University of New South Wales, in Canberra. In 2016, Olsen took a job at AIMS as an engineering team leader in technology development.
“I’m very passionate about new technologies and seeing them integrated in the field,” she says. “During my decade at the [Australian] DOD, I grew my skills in systems engineering to solve more complex technology-integration challenges. AIMS offered me an opportunity to apply these skills to the challenges facing the tropical marine environment.”
“We are Australia’s—and possibly the world’s—first such testing facility in the tropics.”
There are many similarities between what Olsen had been doing at DOD and her role at ReefWorks. “My work at both DOD and AIMS requires an understanding of how electronic subsystems work, determining what’s viable for the use case, understanding the importance of modeling and simulation, and being able to communicate engineering terminology to an interdisciplinary team,” she says. “Both roles are all about engineering problem-solving.”
Olsen is currently working on integrating embedded AI and Internet of Things edge computing into AIMS infrastructure. “Artificial intelligence is used to increase a marine autonomous system’s capabilities,” she says. “For example, AI is used to train a MAS to navigate and avoid colliding with coral reefs, other vessels, or other objects or to allow the MAS to identify specific marine species, reef areas suitable for reseeding, and marine mines.”
IoT edge computing is used to process data closer to its point of origin. “This has the potential to speed up the decision process for vessels and operators while minimizing the communications and data bandwidth needed, which are key limitations when operating in marine northern Australia,” Olsen says.
Since GPS doesn’t work underwater, another of her team’s projects is looking for additional ways to conduct accurate geospatial positioning and control for missions that don’t require marine autonomous systems to come to the surface.
“We’re only just starting to get a feel for what marine autonomous systems can do—not just for our tropical marine waters but in general,” she says. “There are grand challenges no one can solve right now, like dealing with ocean pollution and the impacts of climate change.”
Robotics engineers needed
There are nowhere near enough robotics engineers in the world, Olsen says. She recommends that engineering students take courses that include group projects.
“Group projects help you grow your ability to solve problems outside your knowledge or expertise,” she says. “They teach you how to work as an interdisciplinary team, who to ask for help, and where to find it.”
This article appears in the October 2022 print issue as “Melanie Olsen.”
Match ID: 18 Score: 2.14 source: spectrum.ieee.org age: 9 days qualifiers: 2.14 climate change
Planning for the return journey is an integral part of the preparations for a crewed Mars mission. Astronauts will require a total mass of about 50 tonnes of rocket propellant for the ascent vehicle that will lift them off the planet’s surface, including approximately 31 tonnes of oxygen. The less appealing option is for crewed missions to carry the required oxygen with them. But scientists are optimistic that it could instead be produced from the carbon dioxide–rich Martian atmosphere itself, using a system called MOXIE.
Between February 2021, when it arrived on Mars aboard the Perseverance rover, and the end of the year, MOXIE had several successful test runs. According to a review of the system by Hoffman and colleagues, published in Science Advances, it has demonstrated its ability to produce oxygen during both night and day, when temperatures can vary by over 100 °C. The generation and purity rates of oxygen also meet requirements to produce rocket propellant and for breathing. The authors assert that a scaled-up version of MOXIE could produce the required oxygen for lift-off as well as for the astronauts to breathe.
MOXIE is a first step toward a much larger and more complex system to support the human exploration of Mars. The researchers estimate a required generation rate of 2 to 3 kilograms per hour, compared with the current MOXIE rate of 6 to 8 grams per hour, to produce enough oxygen for lift-off for a crew arriving 26 months later. “So we’re talking about a system that’s a couple of hundred times bigger than MOXIE,” Hoffman says.
They calculate this rate accounting for eight months to get to Mars, followed by some time to set up the system. “We figure you’d probably have maybe 14 months to make all the oxygen.” Further, he says, the produced oxygen would have to be liquefied to be used as a rocket propellant, something the current version of MOXIE doesn’t do.
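The article’s numbers are easy to sanity-check. At the target rate of 2 to 3 kilograms per hour running continuously over roughly 14 months, a scaled-up plant brackets the ~31 tonnes of oxygen required (assumed values below: 30-day months and the 24.6-hour Martian sol):

```python
def oxygen_produced_tonnes(rate_kg_per_h, months,
                           hours_per_day=24.6, days_per_month=30):
    # Total oxygen from continuous production over the available window.
    # A Martian sol lasts about 24.6 hours.
    hours = months * days_per_month * hours_per_day
    return rate_kg_per_h * hours / 1000.0

for rate in (2, 3):
    print(rate, "kg/h ->", round(oxygen_produced_tonnes(rate, 14), 1), "t")
# 2 kg/h -> 20.7 t; 3 kg/h -> 31.0 t, matching the ~31 t requirement
```

The same figures also back the “couple of hundred times bigger” claim: 2,000–3,000 grams per hour against MOXIE’s 6 to 8 grams per hour is a scale-up factor of roughly 300 to 400.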
MOXIE also currently faces several design constraints because, says Hoffman, a former astronaut, “our only ride to Mars was inside the Perseverance rover.” This limited the amount of power available to operate the unit, the amount of heat it could produce, and its volume and mass.
“MOXIE does not work nearly as efficiently as a stand-alone system that was specifically designed would,” says Hoffman. Most of the time, it’s turned off. “Every time we want to make oxygen, we have to heat it up to 800 °C, so most of the energy goes into heating it up and running the compressor, whereas in a well-designed stand-alone system, most of the energy will go into the actual electrolysis, into actually producing the oxygen.”
However, there are still many kinks to iron out for the scaling-up process. To begin with, any oxygen-producing system will need lots of power. Hoffman thinks nuclear power is the most likely option, maybe NASA’s Kilopower fission reactors. The setup and the cabling would certainly be challenging, he says. “You’re going to have to launch all of these nuclear reactors, and of course, they’re not going to be in exactly the same place as the [other] units,” he says. “So, robotically, you’re going to have to connect the electrical cables to bring power to the oxygen-producing unit.”
Then there are the solid oxide electrolysis units, which Hoffman points out are carefully machined systems. Fortunately, the company that makes them, OxEon, has already designed, built, and tested a full-scale unit, a hundred times bigger than the one on MOXIE. “Several of those units would be required to produce oxygen at the quantities that we need,” Hoffman says.
He also adds that at present, there is no redundancy built into MOXIE. If any part fails, the whole system dies. “If you’re counting on a system to produce oxygen for rocket propellant and for breathing, you need very high reliability, which means you’re going to need quite a few redundant units.”
Moreover, the system has to be pretty much autonomous, Hoffman says. “It has to be able to monitor itself, run itself.” For testing purposes, every time MOXIE is powered up, there is plenty of time to plan. A full-scale MOXIE system, though, would have to run continuously, and for that it has to be able to adjust automatically to changes in the Mars atmosphere, which can vary by a factor of two over the year and between nighttime and daytime temperatures.
Match ID: 19 Score: 2.14 source: spectrum.ieee.org age: 16 days qualifiers: 2.14 carbon
How Pakistan floods are linked to climate change Fri, 02 Sep 2022 13:42:00 GMT Pakistan's geography - and its immense glaciers - make it vulnerable to climate change. Match ID: 20 Score: 2.14 source: www.bbc.co.uk age: 22 days qualifiers: 2.14 climate change
“Build something that will absolutely, positively work.” This was the mandate from NASA for designing and building the James Webb Space Telescope—at 6.5 meters wide the largest space telescope in history. Last December, JWST launched famously and successfully to its observing station out beyond the moon. And now according to NASA, as soon as next week, the JWST will at long last begin releasing scientific images and data.
Mark Kahan, on JWST’s product integrity team, recalls NASA’s engineering challenge as a call to arms for a worldwide team of thousands that set out to create one of the most ambitious scientific instruments in human history. Kahan—chief electro-optical systems engineer at Mountain View, Calif.–based Synopsys—and many others in JWST’s “pit crew” (as he calls the team) drew hard lessons from three decades ago, having helped repair another world-class space telescope with a debilitating case of flawed optics. Of course, the Hubble Space Telescope is in low Earth orbit, so a special space-shuttle mission to install corrective optics (as happened in 1993) was entirely possible.
Not so with the JWST.
The meticulous care NASA demanded of JWST’s designers is all the more a necessity because Webb is well out of reach of repair crews. Its mission is to study the infrared universe, and that requires shielding the telescope and its sensors from both the heat of sunlight and the infrared glow of Earth. A good place to do that without getting too far from Earth is an empty patch of interplanetary space 1.5 million kilometers away (well beyond the moon’s orbit) near a spot physicists call the
second Lagrange point, or L2.
The pit crew’s job was “down at the detail level, error checking every critical aspect of the optical design,” says Kahan. Having learned the hard way from Hubble, the crew insisted that every measurement on Webb’s optics be made in at least two different ways that could be checked and cross-checked. Diagnostics were built into the process, Kahan says, so that “you could look at them to see what to kick” to resolve any discrepancies. Their work had to be done on the ground, but their tests had to assess how the telescope would work in deep space at cryogenic temperatures.
Three New Technologies for the Main Mirror
Superficially, Webb follows the design of all large reflecting telescopes. A big mirror collects light from stars, galaxies, nebulae, planets, comets, and other astronomical objects—and then focuses those photons onto a smaller secondary mirror that sends it to a third mirror that then ultimately directs the light to instruments that record images and spectra.
Webb’s 6.5-meter primary mirror is the first segmented mirror to be launched into space. All the optics had to be made on the ground at room temperature but were deployed in space and operated at 30 to 55 degrees above absolute zero. “We had to develop three new technologies” to make it work, says Lee D. Feinberg of the NASA Goddard Space Flight Center, the optical telescope element manager for Webb for the past 20 years.
The longest wavelength that Hubble has to contend with is 2.5 micrometers, whereas Webb is built to observe infrared light that stretches to 28 μm in wavelength. Compared with Hubble, whose primary mirror is a circle with an area of 4.5 square meters, “[Webb’s primary mirror] had to be 25 square meters,” says Feinberg. Webb also “needed segmented mirrors that were lightweight, and its mass was a huge consideration,” he adds. No single-component mirror that could provide the required resolution would have fit on the Ariane 5 rocket that launched JWST. That meant the mirror would have to be made in pieces, assembled, folded, secured to withstand the stress of launch, then unfolded and deployed in space to create a surface that was within tens of nanometers of the shape specified by the designers.
The James Webb Space Telescope [left] and the Hubble Space Telescope side by side—with Hubble’s 2.4-meter-diameter mirror versus Webb’s array of hexagonal mirrors making a 6.5-meter-diameter light-collecting area. NASA Goddard Space Flight Center
NASA and the
U.S. Air Force, which has its own interests in large lightweight space mirrors for surveillance and focusing laser energy, teamed up to develop the technology. The two agencies narrowed eight submitted proposals down to two approaches for building JWST’s mirrors: one based on low-expansion glass made of a mixture of silicon and titanium dioxides similar to that used in Hubble and the other the light but highly toxic metal beryllium. The most crucial issue came down to how well the materials could withstand temperature changes from room temperature on the ground to around 50 K in space. Beryllium won because it could fully release stress after cooling without changing its shape, and it’s not vulnerable to the cracking that can occur in glass. The final beryllium mirror was a 6.5-meter array of 18 hexagonal beryllium mirrors, each weighing about 20 kilograms. The weight per unit area of JWST’s mirror was only 10 percent of that in Hubble. A 100-nanometer layer of pure gold makes the surface reflect 98 percent of incident light from JWST’s main observing band of 0.6 to 28.5 μm. “Pure silver has slightly higher reflectivity than pure gold, but gold is more robust,” says Feinberg. A thin layer of amorphous silica protects the metal film from surface damage.
In addition, a wavefront-sensing control system keeps mirror segment surfaces
aligned to within tens of nanometers. Built on the ground, the system is expected to keep mirror alignment stabilized throughout the telescope’s operational life. A backplane kept at a temperature of 35 K holds all 2.4 tonnes of the telescope and instruments rock-steady to within 32 nm while maintaining them at cryogenic temperatures during observations.
The JWST backplane, the “spine” that supports the entire hexagonal mirror structure and carries more than 2,400 kg of hardware, is readied for assembly to the rest of the telescope. NASA/Chris Gunn
Hubble’s amazing, long-exposure images of distant galaxies are possible through the use of gyroscopes and reaction wheels. The gyroscopes are used to sense unwanted rotations, and reaction wheels are used to counteract them.
But the gyroscopes used on Hubble have had a bad track record and have had to be replaced repeatedly. Only three of Hubble’s six gyros remain operational today, and NASA has devised plans for operating with one or two gyros at reduced capability.
Hubble also includes reaction wheels and magnetic torquers, used to maintain its orientation when needed or to point at different parts of the sky.
Webb uses reaction wheels similarly to turn across the sky, but instead of using mechanical gyros to sense direction,
it uses hemispherical resonator gyroscopes, which have no moving parts. Webb also has a small fine-steering mirror in the optical path, which can tilt over an angle of just 5 arc seconds. Those very fine adjustments of the light path into the instruments keep the telescope on target. “It’s a really wonderful way to go,” says Feinberg, adding that it compensates for small amounts of jitter without having to move the whole 6-tonne observatory.
Other optics distribute light from the fine-steering mirror among four instruments, two of which can observe simultaneously. Three instruments have sensors that observe wavelengths of 0.6 to 5 μm, which astronomers call the near-infrared. The fourth, called the Mid-InfraRed Instrument (MIRI), observes what astronomers call the mid-infrared spectrum, from 5 to 28.5 μm. Different instruments are needed because sensors and optics have limited wavelength ranges. (Optical engineers may blanch slightly at astronomers’ definitions of what constitutes the near- and mid-infrared wavelength ranges. These two groups simply have differing conventions for labeling the various regimes of the infrared spectrum.)
Mid-infrared wavelengths are crucial for observing
young stars and planetary systems and the earliest galaxies, but they also pose some of the biggest engineering challenges. Namely, everything on Earth, and the planets out to Jupiter, glows in the mid-infrared. So for JWST to observe distant astronomical objects, it must avoid recording extraneous mid-infrared noise from all the various sources inside the solar system. “I have spent my whole career building instruments for wavelengths of 5 μm and longer,” says MIRI instrument scientist Alistair Glasse of the Royal Observatory, in Edinburgh. “We’re always struggling against thermal background.”
Mountaintop telescopes can see the near-infrared, but observing the mid-infrared sky requires telescopes in space. However, the thermal radiation from Earth and its atmosphere can cloud their view, and so can the telescopes themselves unless they are cooled far below room temperature. An ample supply of liquid helium and an orbit far from Earth allowed the
Spitzer Space Telescope’s primary observing mission to last for five years, but once the last of the cryogenic fluid evaporated in 2009, its observations were limited to wavelengths shorter than 5 μm.
Another challenge is the limited transparency of optical materials in the mid-infrared. “We use reflective optics wherever possible,” says Glasse, but they also pose problems, he adds. “Thermal contraction is a big deal,” he says, because the instrument was made at room temperature but is used at 7 K. To keep thermal changes uniform throughout MIRI, they made the whole structure of gold-coated aluminum lest other metals cause warping.
Detectors are another problem. Webb’s near-infrared sensors use mercury cadmium telluride photodetectors with a resolution of 2,048 x 2,048 pixels. This resolution is widely used at wavelengths below 5 μm, but sensing at
MIRI’s longer wavelengths required exotic detectors that are limited to offering only 1,024 x 1,024 pixels.
Glasse says commissioning “has gone incredibly well.” Although some stray light has been detected, he says, “we are fully expecting to meet all our science goals.”
NIRCam Aligns the Whole Telescope
The near-infrared detectors and optical materials used for observing at wavelengths shorter than 5 μm are much more mature than those for the mid-infrared, so the Near-Infrared Camera (NIRCam) does double duty by both recording images and aligning all the optics in the whole telescope. That alignment was the trickiest part of building the instrument, says NIRCam principal investigator
Marcia Rieke of the University of Arizona.
Alignment means getting all the light collected by the primary mirror to get to the right place in the final image. That’s crucial for Webb, because it has 18 separate segments that have to overlay their images perfectly in the final image, and because all those segments were built on the ground at room temperature but operate at cryogenic temperatures in space at zero gravity. When NASA recorded a test image of a single star after Webb first opened its primary mirror, it showed 18 separate bright spots, one from each segment.
When alignment was completed on 11 March, the image from NIRCam showed a single star with six spikes caused by diffraction.
Even when performing instrumental calibration tasks, JWST couldn’t help but showcase its stunning sensitivity to the infrared sky. The central star is what telescope technicians used to align JWST’s mirrors. But notice the distant galaxies and stars that photobombed the image too! NASA/STScI
Building a separate alignment system would have added to both the weight and cost of Webb, Rieke realized, so in the original 1995 plan for the telescope she proposed designing NIRCam to align the telescope optics once it was up in space as well as record images. “The only real compromise was that it required NIRCam to have exquisite image quality,” says Rieke, wryly. From a scientific standpoint, she adds, using the instrument to align the telescope optics “is great because you know you’re going to have good image quality and it’s going to be aligned with you.” Alignment might be just a tiny bit off for other instruments. In the end, it took a team at Lockheed Martin to develop the computational tools to account for all the elements of thermal expansion.
Escalating costs and delays had troubled Webb for years. But for Feinberg, “commissioning has been a magical five months.” It began with the sight of sunlight hitting the mirrors. The segmented mirror deployed smoothly, and after the near-infrared cameras cooled, the mirrors focused one star into 18 spots, then aligned them to put the spots on top of each other. “Everything had to work to get it to [focus] that well,” he says. It’s been an intense time, but for Feinberg, a veteran of the Hubble repair mission, commissioning Webb was “a piece of cake.”
Corrections 26-28 July 2022: The story was updated a) to reflect the fact that the Lagrange point L2 where Webb now orbits is not that of the "Earth-moon system" (as the story had originally reported) but rather the Earth-sun system
and b) to correct misstatements in the original posting about Webb's hardware for controlling its orientation.
Corrections 12 Aug. 2022: Alistair Glasse's name was incorrectly spelled in a previous version of this story, as was NIRCam (which we'd spelled as NIRcam); Webb's tertiary mirror (we'd originally reported only its primary and secondary mirrors) was also called out in this version.
This article appears in the September 2022 print issue as “Inside the Universe Machine.”
Match ID: 22 Score: 2.14 source: spectrum.ieee.org age: 80 days qualifiers: 2.14 toxic
NASA to Industry: Let’s Develop Flight Tech to Reduce Carbon Emissions Wed, 29 Jun 2022 14:25 EDT NASA announced Wednesday the agency is seeking partners to develop technologies needed to shape a new generation of lower-emission, single-aisle airliners that passengers could see in airports in the 2030s. Match ID: 23 Score: 2.14 source: www.nasa.gov age: 86 days qualifiers: 2.14 carbon
A recent United Nations provision has banned the use of mercury in spacecraft propellant. Although no private company has actually used mercury propellant in a launched spacecraft, the possibility was alarming enough—and the dangers extreme enough—that the ban was enacted just a few years after one U.S.-based startup began toying with the idea. Had the company gone through with its intention to sell mercury propellant thrusters to some of the companies building massive satellite constellations over the coming decade, it would have resulted in Earth’s upper atmosphere being laced with mercury.
Mercury is a neurotoxin. It’s also bio-accumulative, which means it’s absorbed by the body at a faster rate than the body can remove it. The most common way to get mercury poisoning is through eating contaminated seafood. “It’s pretty nasty,” says Michael Bender, the international coordinator of the Zero Mercury Working Group (ZMWG). “Which is why this is one of the very few instances where the governments of the world came together pretty much unanimously and ratified a treaty.”
Bender is referring to the 2013 Minamata Convention on Mercury, a U.N. treaty named for a city in Japan whose residents suffered from mercury poisoning from a nearby chemical factory for decades. Because mercury pollutants easily find their way into the oceans and the atmosphere, it’s virtually impossible for one country to prevent mercury poisoning within its borders. “Mercury—it’s an intercontinental pollutant,” Bender says. “So it required a global treaty.”
Today, the only remaining permitted uses for mercury are in fluorescent lighting and dental amalgams, and even those are being phased out. Mercury is otherwise found as a by-product of other processes, such as the burning of coal. But then a company hit on the idea to use it as a spacecraft propellant.
In 2018, an employee at Apollo Fusion approached the Public Employees for Environmental Responsibility (PEER), a nonprofit that investigates environmental misconduct in the United States. The employee—who has remained anonymous—alleged that the Mountain View, Calif.–based space startup was planning to build and sell thrusters that used mercury propellant to multiple companies building low Earth orbit (LEO) satellite constellations.
Apollo Fusion wasn’t the first to consider using mercury as a propellant. NASA originally tested it in the 1960s and 1970s with two Space Electric Propulsion Tests (SERT), one of which was sent into orbit in 1970. Although the tests demonstrated mercury’s effectiveness as a propellant, the same concerns over the element’s toxicity that have seen it banned in many other industries halted its use by the space agency as well.
“I think it just sort of fell off a lot of folks’ radars,” says Kevin Bell, the staff counsel for PEER. “And then somebody just resurrected the research on it and said, ‘Hey, other than the environmental impact, this was a pretty good idea.’ It would give you a competitive advantage in what I imagine is a pretty tight, competitive market.”
That’s presumably why Apollo Fusion was keen on using it in their thrusters. Apollo Fusion as a startup emerged more or less simultaneously with the rise of massive LEO constellations that use hundreds or thousands of satellites in orbits below 2,000 kilometers to provide continual low-latency coverage. Finding a slightly cheaper, more efficient propellant for one large geostationary satellite doesn’t move the needle much. But doing the same for thousands of satellites that need to be replaced every several years? That’s a much more noticeable discount.
Were it not for mercury’s extreme toxicity, it would actually make an extremely attractive propellant. Apollo Fusion wanted to use a type of ion thruster called a Hall-effect thruster. Ion thrusters strip electrons from the atoms that make up a liquid or gaseous propellant, and then an electric field pushes the resultant ions away from the spacecraft, generating a modest thrust in the opposite direction. The physics of rocket engines means that the performance of these engines increases with the mass of the ion that you can accelerate.
Mercury is heavier than either xenon or krypton, the most commonly used propellants, meaning more thrust per expelled ion. It’s also liquid at room temperature, making it efficient to store and use. And it’s cheap—there’s not a lot of competition with anyone looking to buy mercury.
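The advantage of a heavier propellant can be sketched with a quick back-of-the-envelope calculation. Assuming singly charged ions accelerated through the same voltage (an idealization, not Apollo Fusion's actual design parameters), the momentum each expelled ion carries away scales with the square root of its mass:

```python
import math

# Standard atomic masses in atomic mass units (amu)
MASSES = {"mercury": 200.59, "xenon": 131.29, "krypton": 83.80}

def momentum_ratio(prop_a, prop_b):
    """Relative momentum (thrust) per ion for singly charged ions
    accelerated through the same voltage: p = sqrt(2*q*V*m),
    so p_a / p_b = sqrt(m_a / m_b)."""
    return math.sqrt(MASSES[prop_a] / MASSES[prop_b])

print(f"Hg vs Xe: {momentum_ratio('mercury', 'xenon'):.2f}x thrust per ion")
print(f"Hg vs Kr: {momentum_ratio('mercury', 'krypton'):.2f}x thrust per ion")
```

Under these assumptions, each mercury ion delivers roughly 24 percent more momentum than a xenon ion and about 55 percent more than a krypton ion.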
Bender says that ZMWG, alongside PEER, caught wind of Apollo Fusion marketing its mercury-based thrusters to at least three companies deploying LEO constellations—OneWeb, Planet Labs, and SpaceX. Planet Labs, an Earth-imaging company, has at least 200 CubeSats in low Earth orbit. OneWeb and SpaceX, both wireless-communication providers, have many more. OneWeb plans to have nearly 650 satellites in orbit by the end of 2022. SpaceX already has nearly 1,500 active satellites aloft in its Starlink constellation, with an eye toward deploying as many as 30,000 satellites before its constellation is complete. Other constellations, like Amazon’s Kuiper constellation, are also planning to deploy thousands of satellites.
In 2019, a group of researchers in Italy and the United States estimated how much of the mercury used in spacecraft propellant might find its way back into Earth’s atmosphere. They figured that a hypothetical LEO constellation of 2,000 satellites, each carrying 100 kilograms of propellant, would emit 20 tonnes of mercury every year over the course of a 10-year life span. Three quarters of that mercury, the researchers suggested, would eventually wind up in the oceans.
That amounts to 1 percent of global mercury emissions from a constellation only a fraction of the size of the one planned by SpaceX alone. And if multiple constellations adopted the technology, they would represent a significant percentage of global mercury emissions—especially, the researchers warned, as other uses of mercury are phased out as planned in the years ahead.
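The researchers' estimate is easy to reproduce from the numbers given above (a hypothetical constellation of 2,000 satellites, 100 kilograms of propellant each, over a 10-year lifespan):

```python
satellites = 2_000
propellant_kg = 100      # propellant carried per satellite
lifespan_years = 10

# Total propellant expelled per year, converted to tonnes
annual_emissions_t = satellites * propellant_kg / lifespan_years / 1_000
ocean_share_t = annual_emissions_t * 0.75  # three quarters reach the oceans

print(annual_emissions_t)  # 20.0 tonnes per year
print(ocean_share_t)       # 15.0 tonnes per year into the oceans
```

If 20 tonnes is 1 percent of the global total, the article's figures imply worldwide mercury emissions on the order of 2,000 tonnes per year.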
Fortunately, it’s unlikely that any mercury propellant thrusters will even get off the ground. Prior to the fourth meeting of the Minamata Convention, Canada, the European Union, and Norway highlighted the dangers of mercury propellant, alongside ZMWG. The provision to ban mercury usage in satellites was passed on 26 March 2022.
The question now is enforcement. “Obviously, there aren’t any U.N. peacekeepers going into space to shoot down” mercury-based satellites, says Bell. But the 137 countries, including the United States, who are party to the convention have pledged to adhere to its provisions—including the propellant ban.
The United States is notable in that list because as Bender explains, it did not ratify the Minamata Convention via the U.S. Senate but instead deposited with the U.N. an instrument of acceptance. In a 7 November 2013 statement (about one month after the original Minamata Convention was adopted), the U.S. State Department said the country would be able to fulfill its obligations “under existing legislative and regulatory authority.”
Bender says the difference is “weedy” but that this appears to mean that the U.S. government has agreed to adhere to the Minamata Convention’s provisions because it already has similar laws on the books. Except there is still no existing U.S. law or regulation banning mercury propellant. For Bender, that creates some uncertainty around compliance when the provision goes into force in 2025.
Still, with a U.S. company being the first startup to toy with mercury propellant, it might be ideal to have a stronger U.S. ratification of the Minamata Convention before another company hits on the same idea. “There will always be market incentives to cut corners and do something more dangerously,” Bell says.
Update 19 April 2022: In an email, a spokesperson for Astra stated that the company's propulsion system, the Astra Spacecraft Engine, does not use mercury. The spokesperson also stated that Astra has no plans to use mercury propellant and that the company does not have anything in orbit that uses mercury.
Updated 20 April 2022 to clarify that Apollo Fusion was building thrusters that used mercury, not that they had actually used them.
Match ID: 24 Score: 2.14 source: spectrum.ieee.org age: 158 days qualifiers: 2.14 toxic
SEMrush and Ahrefs are among the most popular tools in the SEO industry. Both companies have been in business for years and serve thousands of customers every month.
If you're a professional SEO or trying to do digital marketing on your own, at some point you'll likely consider using a tool to help with your efforts. Ahrefs and SEMrush are two names that will likely appear on your shortlist.
In this guide, I'm going to help you learn more about these SEO tools and how to choose the one that's best for your purposes.
What is SEMrush?
SEMrush is a popular SEO tool with a wide range of features—it's the leading competitor research service for online marketers. SEMrush's Keyword Magic tool offers over 20 billion keywords for Google, a database that is constantly updated and currently the largest available.
The program traces its roots to 2007, when it began as SeoQuake, a small Firefox extension.
Most accurate keyword data: Accurate keyword search volume data is crucial for SEO and PPC campaigns by allowing you to identify what keywords are most likely to bring in big sales from ad clicks. SEMrush constantly updates its databases and provides the most accurate data.
Largest Keyword database: SEMrush's Keyword Magic Tool now features 20-billion keywords, providing marketers and SEO professionals the largest database of keywords.
All SEMrush users receive daily ranking data, mobile volume information, and the option to buy additional keywords by default, with no extra payment or add-ons needed.
Most accurate position tracking tool: This tool provides all subscribers with basic tracking capabilities, making it suitable for SEO professionals. Plus, the Position Tracking tool provides local-level data to everyone who uses the tool.
SEO Data Management: SEMrush makes managing your online data easy by allowing you to create visually appealing custom PDF reports, including Branded and White Label reports, report scheduling, and integration with GA, GMB, and GSC.
Toxic link monitoring and penalty recovery: With SEMrush, you can make a detailed analysis of toxic backlinks, toxic scores, toxic markers, and outreach to those sites.
Content Optimization and Creation Tools: SEMrush offers content optimization and creation tools that let you create SEO-friendly content. Features include the SEO Writing Assistant, On-Page SEO Checker, SEO Content Template, Content Audit, Post Tracking, and Brand Monitoring.
What is Ahrefs?
Ahrefs is a leading SEO platform that offers a set of tools to grow your search traffic, research your competitors, and monitor your niche. The company was founded in 2010 and has become a popular choice among SEO tools. Ahrefs has a keyword index of over 10.3 billion keywords and offers accurate, extensive backlink data, updated every 15 to 30 minutes, from what the company describes as the world's most extensive backlink index.
Backlink alerts data and new keywords: Get an alert when your site is linked to or discussed in blogs, forums, comments, or when new keywords are added to a blog posting about you.
Intuitive interface: The intuitive design of the widget helps you see the overall health of your website and search engine ranking at a glance.
Site Explorer: The Site Explorer will give you an in-depth look at your site's search traffic.
Reports with charts and graphs
A question explorer that provides well-crafted topic suggestions
Direct Comparisons: Ahrefs vs SEMrush
Now that you know a little more about each tool, let's take a look at how they compare. I'll analyze each tool to see how they differ in interfaces, keyword research resources, rank tracking, and competitor analysis.
Ahrefs and SEMrush both offer comprehensive information and quick metrics regarding your website's SEO performance. However, Ahrefs takes a bit more of a hands-on approach to getting your account fully set up, whereas SEMrush's simpler dashboard can give you access to the data you need quickly.
In this section, we provide a brief overview of the elements found on each dashboard and highlight the ease with which you can complete tasks.
The Ahrefs dashboard is less cluttered than that of SEMrush, and its primary menu is at the very top of the page, with a search bar designed only for entering URLs.
Additional features of the Ahrefs platform include:
You can see analytics from the dashboard, including search engine rankings to domain ratings, referring domains, and backlink
Jumping from one tool to another is easy. You can use the Keyword Explorer to find a keyword to target and then directly track your ranking with one click.
The website offers a tooltip helper tool that allows you to hover your mouse over something that isn't clear and get an in-depth explanation.
When you log into the SEMrush Tool, you will find four main modules. These include information about your domains, organic keyword analysis, ad keyword, and site traffic.
You'll also find some other options like
A search bar allows you to enter a domain, keyword, or anything else you wish to explore.
A menu on the left side of the page provides quick links to relevant information, including marketing insights, projects, keyword analytics, and more.
The customer support resources located directly within the dashboard can be used to communicate with the support team or to learn about other resources such as webinars and blogs.
Detailed descriptions of every resource offered. This detail is beneficial for new marketers, who are just starting.
Both Ahrefs and SEMrush have user-friendly dashboards, but Ahrefs is less cluttered and easier to navigate. On the other hand, SEMrush offers dozens of extra tools, including access to customer support resources.
When deciding on which dashboard to use, consider what you value in the user interface, and test out both.
If you're looking to track your website's search engine ranking, rank tracking features can help. You can also use them to monitor your competitors.
Let's take a look at Ahrefs vs. SEMrush to see which tool does a better job.
The Ahrefs Rank Tracker is simpler to use. Just type in the domain name and keywords you want to analyze, and it spits out a report showing you the search engine results page (SERP) ranking for each keyword you enter.
Rank Tracker looks at the ranking performance of keywords and compares them with the top rankings for those keywords. Ahrefs also offers:
You'll see metrics that help you understand your visibility, traffic, average position, and keyword difficulty.
It gives you an idea of whether a keyword would be profitable to target or not.
SEMRush offers a tool called Position Tracking. This tool is a project tool—you must set it up as a new project. Below are a few of the most popular features of the SEMrush Position Tracking tool:
All subscribers receive regular data updates and mobile search rankings.
The platform provides opportunities to track several SERP features, including Local tracking.
Intuitive reports allow you to track statistics for the pages on your website, as well as the keywords used in those pages.
Identify pages that may be competing with each other using the Cannibalization report.
Ahrefs is a more user-friendly option. It takes seconds to enter a domain name and keywords. From there, you can quickly decide whether to proceed with that keyword or figure out how to rank better for other keywords.
SEMrush allows you to check your mobile rankings and receive daily ranking updates, which Ahrefs does not offer. SEMrush also tracks social media rankings, a tool you won't find within the Ahrefs platform. Both are good; let me know in the comments which one you prefer.
Keyword research is closely related to rank tracking, but it's used for deciding which keywords you plan on using for future content rather than those you use now.
When it comes to SEO, keyword research is the most important thing to consider when comparing the two platforms.
The Ahrefs Keyword Explorer provides you with thousands of keyword ideas and filters search results based on the chosen search engine.
Ahrefs supports several features, including:
It can search multiple keywords in a single query and analyze them together. SEMrush offers a similar capability in its Keyword Overview tool.
Ahrefs has a variety of keywords for different search engines, including Google, YouTube, Amazon, Bing, Yahoo, Yandex, and other search engines.
When you click on a keyword, you can see its search volume and keyword difficulty, but also other keywords related to it, which you didn't use.
SEMrush's Keyword Magic Tool has over 20 billion keywords for Google. You can type in any keyword you want, and a list of suggested keywords will appear.
The Keyword Magic Tool also lets you:
Show performance metrics by keyword
Search results are based on both broad and exact keyword matches.
Show data like search volume, trends, keyword difficulty, and CPC.
Show the first 100 Google search results for any keyword.
Identify SERP Features and Questions related to each keyword
SEMrush has released a new Keyword Gap Tool that uncovers potentially useful keyword opportunities for you, including both paid and organic keywords.
Both of these tools offer keyword research features and allow users to break down complicated tasks into something that can be understood by beginners and advanced users alike.
If you're interested in keyword suggestions, SEMrush appears to have more keyword suggestions than Ahrefs does. It also continues to add new features, like the Keyword Gap tool and SERP Questions recommendations.
Both platforms offer competitor analysis tools, eliminating the need to come up with keywords off the top of your head. Each tool is useful for finding keywords that will be useful for your competition so you know they will be valuable to you.
Ahrefs' domain comparison tool lets you compare up to five websites (your website and four competitors) side by side. It also shows you how your site ranks against others on metrics such as backlinks, domain ratings, and more.
Use the Competing Domains section to see a list of your most direct competitors and explore how many keyword matches you share with them.
To find more information about your competitor, you can look at the Site Explorer and Content Explorer tools and type in their URL instead of yours.
SEMrush provides a variety of insights into your competitors' marketing tactics. The platform enables you to research your competitors effectively. It also offers several resources for competitor analysis including:
Traffic Analytics helps you identify where your audience comes from, how they engage with your site, what devices visitors use to view your site, and how your audiences overlap with other websites.
SEMrush's Organic Research examines your website's major competitors and shows their organic search rankings, keywords they are ranking for, and even if they are ranking for any (SERP) features and more.
The Market Explorer search field allows you to type in a domain and lists websites or articles similar to what you entered. Market Explorer also allows users to perform in-depth data analytics on these companies and markets.
SEMrush wins here because it has more tools dedicated to competitor analysis than Ahrefs. However, Ahrefs offers a lot of functionality in this area, too. It takes a combination of both tools to gain an advantage over your competition.
Ahrefs pricing:
Lite: $99/month
Standard: $179/month
Lite (annual): $990/year
Standard (annual): $1,790/year
SEMrush pricing:
Pro Plan: $119.95/month
Business Plan: $449.95/month
Which SEO tool should you choose for digital marketing?
When it comes to keyword research, it can be hard to decide between the two.
Consider choosing Ahrefs if you:
Like a friendly, clean interface
Are searching for simple keyword suggestions
Want keywords for more search engines, such as Amazon, Bing, Yahoo, Yandex, and Baidu
Consider SEMrush if you:
Want more marketing and SEO features
Need a competitor analysis tool
Need to keep your backlinks profile clean
Are looking for more keyword suggestions for Google
Both tools are great. Choose the one which meets your requirements and if you have any experience using either Ahrefs or SEMrush let me know in the comment section which works well for you.
Match ID: 25 Score: 2.14 source: www.crunchhype.com age: 207 days qualifiers: 2.14 toxic
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND
Enjoy today's videos!
Another robotic pet on Kickstarter, another bunting of red flags.
Let's see, we've got: "she's so playful and affectionate you'll forget she's a robot." "Everything you can dream of in a best friend and more." "Get ready to fall in love!" And that's literally like the first couple of tiles on the Kickstarter post. Look, the hardware seems fine, and there is a lot of expressiveness going on, I just wish they didn't set you up for an inevitable disappointment when after a couple of weeks it becomes apparent that yes, this is just a robotic toy, and will never be your best friend (or more).
Loona is currently on Kickstarter for about USD $300.
Inspired by the flexibility and resilience of dragonfly wings, we propose a novel design for a biomimetic drone propeller called Tombo propeller. Here, we report on the design and fabrication process of this biomimetic propeller that can accommodate collisions and recover quickly, while maintaining sufficient thrust force to hover and fly.
Meet Tom, a software engineer at Boston Dynamics, as he shares insights on programming and testing the practical—and impractical—applications of robotics. Whether Spot is conducting inspections or playing an instrument, learn how we go from code on a computer to actions in the real world.
This Ameca demo couples automated speech recognition with GPT 3 —a large language model that generates meaningful answers—the output is fed to an online TTS service which generates the voice and visemes for lip sync timing. The team at Engineered Arts Ltd. pose the questions.
The ANT project develops a navigation and motion control system for future walking systems for planetary exploration. After successful testing on ramps and rubble fields, the challenge of climbing rough inclines such as craters is being tackled.
With multitrade layout, all trades’ layouts are set down with a single pass over the floor by Dusty's FieldPrinter system. Trades experience unparalleled clarity and communication with each other, because they can see each others’ installation plans and immediately identify and resolve conflicts. Instead of fighting over the floor and pointing fingers, they start to solve problems together.
We present QUaRTM—a novel quadcopter design capable of tilting the propellers into the forward flight direction, which reduces the drag area and therefore allows for faster, more agile, and more efficient flight.
Recent advances allow for the automation of food preparation in high-throughput environments, yet the successful deployment of these robots requires the planning and execution of quick, robust, and ultimately collision-free behaviors. In this work, we showcase a novel framework for modifying previously generated trajectories of robotic manipulators in highly detailed and dynamic collision environments.
The LCT Hospital in South Korea uses “Dr. LCT” for robotic-based orthopedic knee procedures. The system is based on the KUKA LBR Med robotic platform, which is ideally suited for orthopedic surgery with its seven axes, software developed specifically for medical technology, and appropriate safety measures.
A year in review. Compilation of 2022 video highlights of the Game Changing Development (GCD) Program. The Game Changing Development Program is a part of NASA’s Space Technology Mission Directorate. The program advances space technologies that may lead to entirely new approaches for the agency’s future space missions and provide solutions to significant national needs.
Roundtable discussion on how teaching automation in schools, colleges, and universities can help shape the workers of tomorrow. ABB Robotics has put together a panel of experts in this field to discuss the challenges and opportunities.
On 8 September 2022, Mario Santillo of Ford talked to robotics students as the first speaker in the Undergraduate Robotics Pathways & Careers Speaker Series, which aims to answer the question “What can I do with a robotics degree?”
Shares of Coinbase Global Inc. slumped 4.2% toward a two-month low in afternoon trading Friday, after both an equity analyst and credit analyst trimmed their outlooks on the cryptocurrency wallet provider, with both analysts citing downbeat outlooks for the cryptocurrency market. J.P. Morgan analyst Kenneth Worthington reiterated his neutral rating on the stock but cut his price target by 23% to $60, which is less than 1% below current levels. "Cryptocurrency trading activity remains under pressure in 3Q22, deteriorating in September," Worthington wrote in a note to clients. He also said Coinbase has lost market share, like it has during previous downcycles. Meanwhile, credit rating agency Moody's confirmed Coinbase's Ba3 trading, which is three notches deep into speculative grade, or "junk," territory, but changed the outlook on the rating to negative from under review. "The outlook is negative because of the challenging crypto asset operating environment which continues to be a drag on Coinbase's free cash flow generation capacity," said Moody's senior analyst Fadi Abdel Massih. Coinbase's stock has tumbled 76.1% year to date, while bitcoin has dropped 59.7% and the S&P 500 has declined 23.3%.
Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.
Match ID: 0 Score: 30.00 source: www.marketwatch.com age: 0 days qualifiers: 20.00 cryptocurrenc(y|ies), 10.00 bitcoin(|s)
Twitch's Crypto Casino Ban Ignores the Bigger Play Fri, 23 Sep 2022 15:25:18 +0000 The company is going after cryptocurrency gambling—but will that be enough to satisfy concerned streamers? Match ID: 4 Score: 20.00 source: www.wired.com age: 1 day qualifiers: 20.00 cryptocurrenc(y|ies)
If you want to pay online, you need to register an account and provide credit card information. If you don't have a credit card, you can pay by bank transfer. With the rise of cryptocurrencies, these methods may soon look outdated.
Imagine a world in which you can do transactions and many other things without having to give your personal information. A world in which you don’t need to rely on banks or governments anymore. Sounds amazing, right? That’s exactly what blockchain technology allows us to do.
Think of it like your computer's hard drive: blockchain is a technology that lets you store data in digital blocks, which are connected together like links in a chain.
Blockchain technology was first described in 1991 by two researchers, Stuart Haber and W. Scott Stornetta, who proposed a system of cryptographically chained timestamps that could not be tampered with.
A few years later, in 1998, computer scientist Nick Szabo proposed using similar technology to secure a digital payments system he called “Bit Gold.” The idea was not widely adopted, however, until Satoshi Nakamoto used it as the foundation of Bitcoin and its blockchain in 2008.
So, What is Blockchain?
A blockchain is a distributed database shared among the nodes of a computer network. It stores information in digital format. Many people first heard of blockchain technology when they started looking up information about bitcoin.
Blockchain is used in cryptocurrency systems to ensure secure, decentralized records of transactions.
Blockchain allows people to guarantee the fidelity and security of a record of data without needing a third party to ensure accuracy.
To understand how a blockchain works, Consider these basic steps:
Blockchain collects information in “blocks”.
A block has a fixed storage capacity, and once it's used up, the block is closed and linked to the previously filled block.
Blocks form chains, which are called “Blockchains.”
New information is added to the most recent block until its capacity is reached, and then the process repeats.
Each block in the chain has an exact timestamp and can't be changed.
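The steps above can be sketched in a few lines of code. This is a toy illustration, not any production blockchain: each block stores some data, a timestamp, and the hash of the block before it, so altering any block would break the links that follow it.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose hash depends on its data, its timestamp,
    and the hash of the previous block."""
    block = {"data": data, "timestamp": time.time(), "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Build a tiny chain: each new block links to its predecessor's hash
chain = [make_block("genesis", prev_hash="0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))

# Every block records the hash of the one before it
assert chain[2]["prev_hash"] == chain[1]["hash"]
```

Because each block's hash covers the previous block's hash, tampering with an early block changes its hash and invalidates every block after it.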
Let’s get to know more about the blockchain.
How does blockchain work?
Blockchain records digital information and distributes it across the network without changing it. The information is distributed among many users and stored in an immutable, permanent ledger that can't be changed or destroyed. That's why blockchain is also called "Distributed Ledger Technology" or DLT.
Here’s how it works:
Someone requests a transaction.
The transaction is transmitted throughout the network.
A network of computers can confirm the transaction.
Once confirmed, the transaction is added to a block.
The blocks are linked together to create a history.
And that’s the beauty of it! The process may seem complicated, but it’s done in minutes with modern technology. And because technology is advancing rapidly, I expect things to move even more quickly than ever.
A new transaction is added to the system and relayed to a network of computers located around the world. The computers then solve cryptographic puzzles to confirm the validity of the transaction.
Once a transaction is confirmed, it is placed in a block after the confirmation. All of the blocks are chained together to create a permanent history of every transaction.
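The "puzzles" the computers solve are, in Bitcoin's case, a proof-of-work search: find a nonce that makes the block's hash start with a required number of zeros. A minimal sketch of the idea (real networks use a far higher difficulty than this example):

```python
import hashlib

def proof_of_work(block_data, difficulty=4):
    """Search for a nonce that gives the block a hash with
    `difficulty` leading zeros. Low difficulty keeps this fast."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("Alice pays Bob 5")
print(nonce, digest[:12])  # the winning nonce and the start of its hash
```

Finding the nonce takes many hash attempts, but anyone can verify it with a single hash. That asymmetry is what makes rewriting the chain's history computationally expensive.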
How are Blockchains used?
Many people confuse blockchain with cryptocurrencies like Bitcoin and Ethereum, but although blockchain is integral to cryptocurrency, it has other applications. For example, blockchain can be used to store reliable records of transactions.
Blockchain is already being adopted by some big-name companies, such as Walmart, AIG, Siemens, Pfizer, and Unilever. For example, IBM's Food Trust uses blockchain to track food's journey before it reaches its final destination.
Although some of you may consider this practice excessive, food suppliers and manufacturers trace their products because bacteria such as E. coli and Salmonella have been found in packaged foods. In addition, there have been isolated cases where dangerous allergens such as peanuts were accidentally introduced into certain products.
Tracing and identifying the source of an outbreak is a challenging task that can take months or years. Thanks to blockchain, however, companies know exactly where their food has been, so they can trace its location and help prevent future outbreaks.
Blockchain technology allows systems to react much faster in the event of a hazard. It also has many other uses in the modern world.
What is Blockchain Decentralization?
Blockchain technology is safe even though it's public: anyone with an internet connection can access it.
Have you ever had all your data stored in one place, only for that one "secure" place to be compromised? Wouldn't it be great if your data stayed safe even when one of your storage systems is breached?
Blockchain avoids this situation by using multiple computers in different locations to store the transaction record. If one computer holds a faulty copy, the other nodes are unaffected; instead, they cross-reference the faulty node against their correct copies. This is called "decentralization": the same information is stored in many places at once.
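That cross-referencing can be sketched with a naive majority vote. Real networks rely on consensus protocols such as proof of work or proof of stake, and the ledger format below is invented purely for illustration:

```python
from collections import Counter

def reconcile(ledger_copies):
    """Return the ledger version held by the majority of nodes.
    A deliberately simplified stand-in for real consensus protocols."""
    counts = Counter(tuple(copy) for copy in ledger_copies)
    majority, _ = counts.most_common(1)[0]
    return list(majority)

# Three nodes hold copies of the ledger; one copy has been corrupted.
nodes = [
    ["alice->bob:5", "bob->carol:2"],
    ["alice->bob:5", "bob->carol:2"],
    ["alice->bob:99", "bob->carol:2"],  # tampered node
]
correct = reconcile(nodes)
# The corrupted node is outvoted by the honest majority.
```

Because the tampered copy is in the minority, cross-referencing restores the correct record.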
Blockchain guarantees your data's authenticity, not just its accuracy but also its irreversibility. It can also be used to store records that are otherwise difficult to register, such as legal contracts, state identifications, or a company's product inventory.
Pros and Cons of Blockchain
Blockchain has both advantages and disadvantages.
Pros:
Accuracy is increased because no human is involved in the verification process.
Decentralization makes the information harder to tamper with.
Transactions are safe, private, and efficient.
It provides a banking alternative and secure storage of personal information.
Cons:
Data storage is limited.
Regulations are still changing and differ from place to place.
It carries a risk of being used for illicit activities.
Frequently Asked Questions About Blockchain
I’ll answer the most frequently asked questions about blockchain in this section.
Is Blockchain a cryptocurrency?
Blockchain is not a cryptocurrency but the technology that makes cryptocurrencies possible: a digital ledger that records every transaction securely.
Is it possible for Blockchain to be hacked?
In theory, yes, a blockchain can be hacked, but doing so is extremely difficult. A network of users constantly reviews it, which makes tampering with the blockchain hard in practice.
What is the most prominent blockchain company?
Coinbase Global is currently the biggest blockchain company in the world. The company provides infrastructure, services, and technology for the digital-currency economy.
Who owns Blockchain?
Blockchain is a decentralized technology: a chain of distributed ledgers connected by nodes, where each node can be any electronic device. Thus, no one owns the blockchain.
What is the difference between Bitcoin and Blockchain technology?
Bitcoin is a cryptocurrency powered by blockchain technology, while blockchain is the distributed ledger on which cryptocurrencies such as Bitcoin record their transactions.
What is the difference between Blockchain and a Database?
Generally, a database is a collection of data that is stored and organized using a database management system. People with access to the database can view or edit the information, and databases are typically implemented on a client-server architecture. A blockchain, by contrast, is a growing list of records, called blocks, stored in a distributed system. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data. By design, data already on the blockchain cannot be modified, which enables decentralized control and eliminates the risk of the data being altered by other parties.
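That hash-of-the-previous-block design is what makes modification detectable. Here is a minimal sketch, with an invented block layout, showing that editing one historical record invalidates the whole chain:

```python
import hashlib

def block_hash(index, data, previous_hash):
    """Hash a block's index, payload, and link to its predecessor."""
    return hashlib.sha256(f"{index}|{data}|{previous_hash}".encode()).hexdigest()

def build_chain(records):
    """Turn a list of records into hash-linked blocks."""
    chain, prev = [], "0" * 64
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "previous_hash": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    """A chain is valid only if every stored hash still matches its contents
    and every block points at its predecessor's hash."""
    prev = "0" * 64
    for block in chain:
        if block["previous_hash"] != prev:
            return False
        if block["hash"] != block_hash(block["index"], block["data"], prev):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["deposit 10", "transfer 3", "withdraw 1"])
assert is_valid(chain)
chain[1]["data"] = "transfer 300"  # edit a historical record
assert not is_valid(chain)         # the tampering is immediately detectable
```

Unlike a conventional database row, a record here cannot be quietly rewritten: changing it breaks the hash stored in its own block and every block after it.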
Blockchain has a wide spectrum of applications, and over the next 5-10 years we will likely see it integrated into all sorts of industries. From finance to healthcare, blockchain could revolutionize the way we store and share data. Although there is some hesitation to adopt blockchain systems right now, that reluctance is likely to fade. Once people become more comfortable with the technology and understand how it can work for them, owners, CEOs, and entrepreneurs alike will be quick to leverage blockchain technology for their own gain. I hope you liked this article; if you have any questions, let me know in the comments section.
Non-fungible tokens (NFTs) are the most popular digital assets today, capturing the attention of cryptocurrency investors, whales, and people around the world. Many find it amazing that some users spend thousands or even millions of dollars on a single NFT-based image of a monkey or another token when anyone can simply take a screenshot for free. So here we answer some frequently asked questions about NFTs.
1) What is an NFT?
NFT stands for non-fungible token: a cryptographic token on a blockchain with unique identification codes that distinguish it from other tokens. NFTs are unique and not interchangeable, which means no two NFTs are the same. An NFT can be a unique artwork, GIF, image, video, audio album, in-game item, collectible, and so on.
2) What is Blockchain?
A blockchain is a distributed digital ledger that allows for the secure storage of data. By recording any kind of information—such as bank account transactions, the ownership of Non-Fungible Tokens (NFTs), or Decentralized Finance (DeFi) smart contracts—in one place, and distributing it to many different computers, blockchains ensure that data can’t be manipulated without everyone in the system being aware.
3) What makes an NFT valuable?
The value of an NFT comes from its ability to be traded freely and securely on the blockchain, which is not possible with other current digital ownership solutions. The NFT points to its location on the blockchain, but doesn't necessarily contain the digital property itself. "Fungible" means interchangeable: if you replace one bitcoin with another, you still have the same thing. A non-fungible item, such as a movie ticket, cannot be replaced with just any other ticket, because each ticket is unique to a specific time and place.
4) How do NFTs work?
One of the defining characteristics of non-fungible tokens (NFTs) is that each one is a digital certificate of ownership that can be bought, sold, and traded on the blockchain.
As with cryptocurrency, records of who owns what are stored on a ledger maintained by thousands of computers around the world. These records can't be forged because the whole system operates on an open, distributed network.
NFTs also contain smart contracts—small computer programs that run on the blockchain—that give the artist, for example, a cut of any future sale of the token.
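The royalty idea can be illustrated with a small sketch. This is hypothetical off-chain Python, not an actual smart contract; the function name, the accounting in cents, and the 10% rate are assumptions for illustration:

```python
def settle_sale(price_cents, royalty_percent, artist, seller, balances):
    """Split an NFT resale: the artist automatically receives a royalty cut,
    and the seller gets the remainder. On a real blockchain this logic would
    run inside an on-chain smart contract, not in off-chain Python."""
    royalty = price_cents * royalty_percent // 100
    balances[artist] = balances.get(artist, 0) + royalty
    balances[seller] = balances.get(seller, 0) + (price_cents - royalty)
    return balances

# A collector resells an NFT for $100.00 with a 10% artist royalty.
balances = settle_sale(10000, 10, artist="artist", seller="collector", balances={})
# artist receives $10.00; collector receives $90.00
```

Because the rule lives in the token's contract, the artist is paid on every future resale without any manual intervention.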
5) What’s the connection between NFTs and cryptocurrency?
Non-fungible tokens (NFTs) aren't cryptocurrencies, but they do use blockchain technology. Many NFTs are based on Ethereum, where the blockchain serves as a ledger for all the transactions related to a given NFT and the properties it represents.
6) How do you make an NFT?
Anyone can create an NFT. All you need is a digital wallet, some Ethereum, and a connection to an NFT marketplace where you'll be able to upload and sell your creations.
7) How do you validate the authenticity of an NFT?
When you purchase an NFT, that purchase is recorded on the blockchain's ledger of transactions, and that entry acts as your proof of ownership.
8) How is an NFT valued? What are the most expensive NFTs?
The value of an NFT varies a lot based on the digital asset up for grabs. People use NFTs to trade and sell digital art, so when creating an NFT, you should consider the popularity of your digital artwork along with historical statistics.
In the year 2021, a digital artist called Pak created an artwork called The Merge. It was sold on the Nifty Gateway NFT market for $91.8 million.
9) Can NFTs be used as an investment?
Yes: one can purchase an NFT and resell it at a profit. Certain NFT marketplaces also let creators keep a percentage of the profits from future sales of the assets they create.
10) Will NFTs be the future of art and collectibles?
Many people want to buy NFTs because it lets them support the arts and own something cool from their favorite musicians, brands, and celebrities. NFTs also give artists an opportunity to program in continual royalties if someone buys their work. Galleries see this as a way to reach new buyers interested in art.
11) How do you buy an NFT?
There are many places to buy digital assets, like OpenSea, and their policies vary. On NBA Top Shot, for instance, you sign up for a waitlist that can be thousands of people long, and when a digital asset goes on sale you are occasionally chosen to purchase it.
12) Can I mint an NFT for free?
To mint an NFT, you must pay a gas fee to process the transaction on the Ethereum blockchain, but you can mint your NFT on a different blockchain called Polygon to avoid paying gas fees. This option is available on OpenSea; it simply means that your NFT will only be tradable on Polygon's blockchain and not on Ethereum's. Mintable also allows you to mint NFTs for free without paying any gas fees.
13) Do I own an NFT if I screenshot it?
No. Non-fungible tokens are minted on blockchains using cryptocurrencies such as Ethereum, Solana, and Polygon. Once an NFT is minted, the transaction is recorded on the blockchain, and the contract or license is awarded to whoever holds that NFT in their wallet.
14) Why are people investing so much in NFTs?
Non-fungible tokens have won the hearts of people around the world and given digital creators the recognition they deserve. One remarkable thing about them is that you can take a screenshot of one yet still not own it: when an NFT is created, the transaction is stored on the blockchain, and the license or contract to hold the token is awarded to the person who holds it in their digital wallet.
You can sell your work and creations by attaching a license to them on the blockchain, where ownership can be transferred. This lets you gain exposure without losing full ownership of your work. Some of the most successful projects include CryptoPunks, Bored Ape Yacht Club, The Sandbox, and World of Women. These NFT projects have gained global popularity and are owned by celebrities and successful entrepreneurs, and owning one can open doors to exclusive business meetings and valuable connections.
That's a wrap. I hope you found this article enlightening; I've answered these questions to the best of my limited knowledge about NFTs. If you have any questions or suggestions, feel free to drop them in the comments section below. I also have a question for you: is Bitcoin an NFT? Let me know in the comments below.