********** SPACE **********
A telescope happened to be pointing at the brightest supernova yet observed
Fri, 09 Jun 2023 00:04:35 +0000
The BOAT—brightest of all time—reveals the mechanics of gamma-ray bursts.
Match ID: 0 Score: 65.00 source: arstechnica.com age: 1 day
qualifiers: 65.00 nasa
ISS Daily Summary Report – 6/08/2023
Thu, 08 Jun 2023 16:00:04 +0000
Payloads: Plasma Kristall-4 (PK-4): The crew configured the video monitor system and gas system valves in order to prepare the PK-4 chamber for storage. PK-4 is a scientific collaboration between ESA and the Russian Federal Space Agency (Roscosmos), performing research in the field of Complex Plasmas: low temperature gaseous mixtures composed of ionized gas, neutral …
Match ID: 1 Score: 65.00 source: blogs.nasa.gov age: 1 day
qualifiers: 65.00 nasa
NASA Awards $5 Million to Women’s Colleges Tackling STEM Gender Gap
Wed, 07 Jun 2023 13:56 EDT
As part of a Biden/Harris initiative, NASA will award more than $5 million in funding to seven Women’s Colleges and Universities (WCUs) to research and develop strategies that increase retention of women in STEM degree programs and careers.
Match ID: 2 Score: 65.00 source: www.nasa.gov age: 2 days
qualifiers: 65.00 nasa
NASA Invites Media to View Asteroid Sample Recovery Rehearsal
Wed, 07 Jun 2023 12:32 EDT
NASA invites media to attend the first major asteroid sample recovery rehearsal for its OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, and Security – Regolith Explorer) mission on Tuesday, June 27, to Thursday, June 28, at Lockheed Martin in Littleton, Colorado.
Match ID: 3 Score: 65.00 source: www.nasa.gov age: 2 days
qualifiers: 65.00 nasa
ISS Daily Summary Report – 6/07/2023
Wed, 07 Jun 2023 16:00:55 +0000
Payloads: Airborne Particulate Monitor (APM): APM was removed from its Node 3 location and stowed. Air quality in crewed spacecraft is important for keeping astronauts healthy and comfortable. Although requirements exist for maximum allowable concentrations of particulate matter, currently no measurement capability verifies whether these requirements are met. APM demonstrates an instrument for measuring and …
Match ID: 4 Score: 65.00 source: blogs.nasa.gov age: 2 days
qualifiers: 65.00 nasa
NASA Opens Applications for Artifact Program to Museums, Schools, More
Wed, 07 Jun 2023 08:19 EDT
Eligible schools, universities, museums, libraries, and planetariums interested in receiving historic NASA objects for their STEM programs have until Friday, June 30, to apply for the 60th screening of the NASA Artifacts Module.
Match ID: 5 Score: 65.00 source: www.nasa.gov age: 2 days
qualifiers: 65.00 nasa
NASA Names Winners of 2023 Student Rocket Launch Competition
Tue, 06 Jun 2023 14:10 EDT
NASA announced Tuesday the University of Alabama in Huntsville is the overall winner of the agency’s 2023 Student Launch challenge.
Match ID: 6 Score: 55.71 source: www.nasa.gov age: 3 days
qualifiers: 55.71 nasa
ISS Daily Summary Report – 6/06/2023
Tue, 06 Jun 2023 16:00:41 +0000
SpaceX-28 (SpX-28) Docking: SpX-28 successfully docked to the Node 2 Zenith port this morning at 5:04 AM CT. The crew then performed leak checks, opened hatches, and began unloading cargo. Carrying food, supplies, and two ISS Roll Out Solar Arrays (IROSA), the SpX-28 spacecraft launched from NASA’s Kennedy Space Center on Monday, June 5th, …
Match ID: 7 Score: 55.71 source: blogs.nasa.gov age: 3 days
qualifiers: 55.71 nasa
NASA Selects Small Business, Research Teams for Tech Development
Mon, 05 Jun 2023 14:39 EDT
NASA will award funding to more than 200 small business teams to develop new technologies designed to protect the health of astronauts, lower risk of collision damage to spacecraft, and more. The new awards from NASA’s Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) program invest in a diverse portfolio of A
Match ID: 8 Score: 46.43 source: www.nasa.gov age: 4 days
qualifiers: 46.43 nasa
Review Board: NASA-JPL Psyche Progress Outstanding, Launch on Track
Mon, 05 Jun 2023 14:13 EDT
Steps taken by NASA, the agency’s Jet Propulsion Laboratory (JPL) in Southern California, and Caltech, to put the Psyche mission on track for an October 2023 launch have been outstanding, according to an independently appointed review board. NASA and JPL convened the board last summer after the Psyche mission team requested to delay the spacecraft’
Match ID: 9 Score: 46.43 source: www.nasa.gov age: 4 days
qualifiers: 46.43 nasa
ISS Daily Summary Report – 6/05/2023
Mon, 05 Jun 2023 16:00:38 +0000
SpaceX-28 (SpX-28) Launch: The SpX-28 Dragon spacecraft launched aboard a Falcon 9 rocket from Launch Complex 39A at NASA’s Kennedy Space Center in Florida at 10:47 AM CDT this morning, and is scheduled to dock to the ISS tomorrow morning at 4:50 AM CDT. The spacecraft is carrying supplies and hardware to the ISS, including …
Match ID: 10 Score: 46.43 source: blogs.nasa.gov age: 4 days
qualifiers: 46.43 nasa
NASA, SpaceX Launch Solar Arrays, Cargo to Space Station
Mon, 05 Jun 2023 11:44 EDT
Following a successful launch of NASA’s SpaceX 28th commercial resupply services mission, two new solar arrays, science investigations, and supplies are on their way to the International Space Station.
Match ID: 11 Score: 46.43 source: www.nasa.gov age: 4 days
qualifiers: 46.43 nasa
NASA Names New Agency General Counsel
Mon, 05 Jun 2023 11:06 EDT
NASA Administrator Bill Nelson announced Monday the selection of Iris Lan as the agency’s new general counsel, effective immediately. She succeeds Sumara Thompson-King, who retired from NASA last December.
Match ID: 12 Score: 46.43 source: www.nasa.gov age: 4 days
qualifiers: 46.43 nasa
Boris Johnson leaving the Commons in ‘disgrace’, says standards committee chair – UK politics live
Sat, 10 Jun 2023 08:45:31 GMT
Johnson’s departure from political life comes less than four years after he won an 80-seat general election majority
We’ve got a bit more from Ed Davey, who has accused Boris Johnson of having “a track record of deceit and lies”.
I never thought he was fit to be an MP, let alone prime minister. He has a track record of deceit and lies.
But I hope today is not just about Boris Johnson. I think it’s about the whole Conservative party who put him there in the first place.
I think there should actually be a general election.
I think the chaos and division in the Conservative party, the fact that they’re so out of touch on the cost of living, on the NHS, it means we’ve got to put the country out of its misery with these Conservatives.
More pain could follow the calamities of Wasps, Worcester and London Irish but the drive exists to fight for rugby’s future
Sometimes it is not the iceberg’s fault. Had the captains of some of English club rugby’s largest vessels been more focused on avoiding this season’s multiple shipwrecks they could easily have steered a more prudent course. Hindsight always helps but if your outgoings constantly exceed your income you are bound to sink eventually, as Worcester, Wasps and now London Irish can testify.
Most painful for those most deeply affected, perhaps, is that even the most myopic of lookouts could recognise this ever-present threat. Yet for whatever reason – arrogance, selfishness, greed, complacency, poor governance – the clubs and their governing body remained in thrall to the same old flawed model. Rich benefactor equals rugby success, right? Not if they ignore the basic tenets of sound business practice it won’t.
Having served a 22-month ban for failing a drugs test, the sprinter wants to make up for lost time after making a low-key return
Hidden in plain sight among the teenagers and pensioners who have paid their £14 entry fee to compete at the Lee Valley Sprint Night is a Team GB star who has not stepped onto the track since failing a drugs test at the Tokyo Olympics; a contrite athlete seeking a new beginning.
Cheered on by his close friend Nethaneel Mitchell-Blake – one of the British quartet who lost their Olympic 4x100 metres silver medal because of his actions – CJ Ujah does what for so long had been commonplace: he wins a race. Two, in fact.
England quick is contrite about his racist tweets and with Ashes about to start he has already bowled Labuschagne and Smith
“I was at a point where I thought: ‘This is make or break’,” Ollie Robinson says as he describes the first of two different debuts that helped shape his transition from confusion and loss. The 29-year-old fast bowler will be one of England’s key players when they face Australia in the opening Ashes Test next Friday – but a shocking early death and a public shaming shroud those contrasting games from his past.
“I didn’t have much to lose,” Robinson says as he continues the story of his Sussex debut in April 2015. “I might have got out for nought but I had to try and put on a show and get my name back out there.”
Swift investment would make any Labour government a climate and economic leader – so why the dithering?
As wildfire smoke engulfs much of the east coast of the US and average global temperatures continue to rise, with the world imminently facing some of the hottest years on record, it would be an error of judgment for the Labour party to delay its green investment pledge. Doing so would not only be a mistake for our economy and the climate, but also threaten Labour’s electoral prospects, given strong public demand for bold action on this issue.
Together with its world-leading promise to end all new domestic oil and gas developments, the Labour party’s £28bn-a-year investment pledge to green industries marks the scale of climate ambition we need to see from a future British government. These commitments mark Labour out as a potential major climate leader and, like Joe Biden’s landmark Inflation Reduction Act (IRA), the investment pledge clearly demonstrates that the party is in tune with the economic realities of today’s world.
Rebecca Newsom is head of politics at Greenpeace.
Some newspapers focus on former PM’s accusations in fiery resignation letter, while others point readers towards damning committee report
Boris Johnson’s departure from life as an MP ahead of the publication of the Partygate report plays out across Saturday’s front pages, which are filled with a mixture of acrimony, triumph and predictions of further “Tory bloodletting”.
The Guardian focuses on the reason for his decision to resign as MP, noting that the privileges committee found he misled parliament and recommended a lengthy suspension from the House of Commons. It finds a spot lower down for the reaction to Rishi Sunak approving Johnson’s honours list, regarded as rewarding those involved in the Partygate scandal.
Putin and Zelenskiy tussle over Ukraine’s counteroffensive narrative; water levels begin to recede in flooded regions near destroyed Kakhovka dam
Russian president Vladimir Putin said Ukraine had begun its counteroffensive against Russian troops but that efforts “so far have failed” after Moscow said it repelled several Ukrainian assaults. However, Ukrainian president Volodymyr Zelenskiy made no formal announcement of specific developments on the battlefield, but praised the “heroism” of his country’s soldiers fighting “tough battles”.
Water levels are gradually receding in parts of southern Ukraine that were flooded after the destruction of the Kakhovka dam, according to officials. Meanwhile, evidence is growing that the dam was blown up after seismic data showed there was a blast at the site in the early hours of Tuesday. Norsar, the Norwegian Seismic Array, said signals from a regional station in Romania pointed to an explosion at 2.54am. Norsar did not draw conclusions on who was responsible.
The US said Russia appeared to be deepening its defence cooperation with Iran and had received hundreds of one-way attack drones that it is using to strike Ukraine. Citing newly declassified information, the White House said the drones were built in Iran, shipped across the Caspian Sea and then used by Russian forces against Ukraine.
The Wagner group has been accused of stoking “anarchy” on Russia’s frontlines after one of the Kremlin’s military commanders claimed Yevgeny Prigozhin’s mercenaries had kidnapped and tortured his soldiers during the battle for Bakhmut. In a video posted online, Lt Col Roman Venevitin also accused Wagner soldiers of stealing arms, forcing mobilised soldiers to sign contracts with Wagner, and attempting to extort weapons from the Russian defence ministry in exchange for releasing kidnapped soldiers.
Iceland announced it would suspend work at its embassy in Russia as of 1 August, the first country to do so, and asked Russia to limit its operations in Reykjavik. “The current situation simply does not make it viable for the small foreign service of Iceland to operate an embassy in Russia,” foreign minister Thordis Gylfadottir said.
Russia will start deploying tactical nuclear weapons in Belarus after the facilities are ready on 7-8 July, Putin told his Belarusian counterpart Alexander Lukashenko on Friday in a meeting in Sochi, Russia.
Nato allies on Friday condemned Russia’s decision to withdraw from the treaty on conventional armed forces in Europe (CFE).
Hungary said on Friday it had received a group of Ukrainian prisoners of war from Russia, a release that Ukraine welcomed while expressing concern that it had not been informed.
Volodymyr Zelenskiy has thanked Joe Biden for his $2.1bn (£1.6bn) security assistance package. In a tweet, Zelenskiy said the contribution is “more important than ever” since the Kakhovka dam collapse.
The Japanese prime minister, Fumio Kishida, told Zelenskiy on Friday that Japan will offer emergency humanitarian aid worth about $5m (£3.9m) after the destruction of the Nova Kakhovka dam, a Japanese government spokesperson has said.
Ukraine’s domestic Security Service (SBU) said on Friday it had intercepted a telephone call proving a Russian “sabotage group” blew up the Kakhovka hydroelectric station and dam in southern Ukraine. A one-and-a-half minute audio clip on its Telegram channel of the alleged conversation featured two unidentified men who appeared to be discussing the fallout from the disaster in Russian. One of the men said “Our saboteur group is there. They wanted to cause fear with this dam. It did not go according to the plan. More than they planned.”
The Kremlin on Friday accused Ukrainian forces of killing civilian victims of flooding caused by the collapse of the Kakhovka dam in southern Ukraine in repeated shelling attacks, including one pregnant woman. Kremlin spokesperson Dmitry Peskov called the purported attacks “barbaric”. Russia did not provide any evidence to back up its claims.
Russian deputy prime minister Marat Khusnullin said on Friday that Crimea’s water supply will not be affected by the destruction of the Kakhovka dam, and the peninsula had enough water reserves for 500 days. A canal from the destroyed reservoir fed drinking water to the peninsula. Kyiv cut access to the canal in 2014, after Russia illegally seized Crimea and claimed to annex it.
Indictment accuses former president of risking national security, foreign relations, safety of US military and intelligence gathering
The US senate judiciary committee chairman, Dick Durbin, has said the investigation led by special counsel Jack Smith should be allowed to continue “without interference”.
In a statement on Friday, Durbin added that Donald Trump “should be afforded the due process protections that he is guaranteed by our constitution, just like any other American”.
I think before the sun sets today, the attorney general of the United States should be standing in front of the American people, should unseal this indictment, should provide the American people with all the facts and information here.
And the American people be able to judge for themselves whether this is just the latest incident of weaponization and politicization at the justice department or it’s something different.
Biogen Inc. BIIB shares rallied in the extended session late Friday following a full-day halt after the biotech company received a recommendation from a Food and Drug Administration advisory committee to approve an Alzheimer’s treatment it makes with Eisai Co. ESALF Biogen shares rallied as much as 9% after hours, after spending the regular session halted at $308.88. Eisai shares finished Friday up 9.4% at $81.03. On Friday, the FDA’s Peripheral and Central Nervous System Drugs Advisory Committee voted unanimously to recommend the companies’ drug Leqembi to treat Alzheimer’s disease. While the FDA is not bound to committee recommendations, the agency generally follows them. Approval of the drug is expected by July 6.
Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.
It’s a Taningia danae:
Their arms are lined with two rows of sharp retractable hooks. And, like most deep-sea squid, they are adorned with light organs called photophores. They have some on the underside of their mantle. There are more facing upward, near one of their eyes. But it’s the photophores at the tip of two stubby arms that are truly unique. The size and shape of lemons—each nestled within a retractable lid like an eyeball in a socket—they are by far the largest photophores known to science.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered...
Committee found former prime minister had misled the House of Commons and recommended lengthy suspension
Boris Johnson is standing down immediately as a Conservative MP after an investigation into the Partygate scandal found he misled parliament and recommended a lengthy suspension from the House of Commons.
The former prime minister angrily accused the investigation of trying to drive him out, and claimed there was a “witch-hunt under way, to take revenge for Brexit and ultimately to reverse the 2016 referendum result”.
The ex-prime minister’s surprise resignation as an MP followed a collapse in his popularity that is likely to be terminal
When Boris Johnson sat down to draft his resignation statement after learning the privileges committee had concluded that he lied to MPs over Partygate, he was determined to leave his enemies – on both sides of the Commons – a clear message.
“It is very sad to be leaving parliament,” he wrote. “At least for now …” That he still harbours hopes of a comeback – despite the damage that he has done to his own reputation, the Conservative party brand and to the country more widely – should surprise nobody.
Eisai Co. Ltd. ESALF shares were up 9.4% Friday after advisers to the U.S. Food and Drug Administration voted unanimously in favor of the Eisai and Biogen BIIB Alzheimer’s treatment Leqembi. The panel of independent experts said that a clinical study had verified the clinical benefit of Leqembi, also known as lecanemab. The treatment got a green light under the FDA’s accelerated approval program in January. An FDA decision on traditional approval is expected by July 6. Advisory committee votes are not binding, but the FDA often follows committee recommendations. Biogen stock trading was halted Friday during the advisory committee meeting.
Former PM says he is standing down as an MP and he has been driven out of parliament
Here is the full text of Boris Johnson’s resignation statement:
I have received a letter from the privileges committee making it clear – much to my amazement – that they are determined to use the proceedings against me to drive me out of parliament.
They have still not produced a shred of evidence that I knowingly or recklessly misled the Commons.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
The industry standard for dangerous and routine autonomous inspections just got better, now with a brand-new set of features and hardware.
[ Boston Dynamics ]
For too long, dogs and vacuums have existed in a state of conflict. But Roomba robots are finally ready to make peace. To celebrate Pet Appreciation Week (4–10 June), iRobot is introducing T.R.E.A.T., an experimental prototype engineered to dispense dog treats on demand. Now dogs and vacuums can finally be friends.
[ T.R.E.A.T. ]
Legged robots have better adaptability in complex terrain, and wheeled robots move faster on flat surfaces. Unitree B-W, the ultimate speed all-rounder, combines the advantages of both types of robot, and continues to bring new exploration and change to the industry.
[ Unitree ]
In this demonstration, Digit starts out knowing there is trash on the floor and that bins are used for recycling/trash. We use a voice command “Clean up this mess” to have Digit help us. Digit hears the command and uses a large language model to interpret how best to achieve the stated goal with its existing physical capabilities. At no point is Digit instructed on how to clean or what a mess is. This is an example of bridging the conversational nature of ChatGPT and other LLMs to generate real-world physical action.
[ Agility ]
Battery endurance represents a key challenge for long-term autonomy and long-range operations, especially in the case of aerial robots. In this paper, we propose AutoCharge, an autonomous charging solution for quadrotors that combines a portable ground station with a flexible, lightweight charging tether and is capable of universal, highly efficient, and robust charging.
[ ARPL NYU ]
BruBotics secured a place in the Guinness World Records! Together with the visitors of the Nerdland Festival, they created the longest chain of robots ever, which also respond to light. Vrije Universiteit Brussel/Imec professor Bram Vanderborght and his team, consisting of Ellen Roels, Gabriël Van De Velde, Hendrik Cools, and Niklas Steenackers, have worked hard on the project in recent months. They set their record with a chain of 334 self-designed robots. The BruBotics research group at VUB aims to bring robots closer to people with their record. “Our main objective was to introduce participants to robots in an interactive way,” says Vanderborght. “And we are proud that we have succeeded.”
[ VUB ]
Based in Italy, Comau is a leading robot manufacturer and global systems integrator. The company has been working with Intrinsic over the past several years to validate our platform technology and our developer product Flowstate through real-world use cases. In a new video case study, we go behind the scenes to explore and hear firsthand how Comau and Intrinsic are working together. Comau is using Intrinsic Flowstate to assemble the rigid components of a supermodule for a plug-in hybrid electric vehicle (PHEV).
[ Intrinsic ]
GITAI has achieved a significant milestone with the successful demonstration of a GITAI inchworm-type robotic arm equipped with a tool-changer function and a GITAI lunar robotic rover in a simulated regolith chamber, featuring a 7-ton regolith simulant (LHS-1E).
[ GITAI ]
Uhh, pinch points...?
[ Deep Robotics ]
Detect, fetch, and collect. A seemingly easy task is being tested to find the best strategy to collect samples on the Martian surface, some 290 million kilometers away from home. The Sample Transfer Arm will need to load the tubes from the Martian surface for delivery to Earth. ESA’s robotic arm will collect them from the Perseverance rover, and possibly others dropped by sample-recovery helicopters as a backup.
[ ESA ]
Wing’s AutoLoader for curbside pickup.
[ Wing ]
MIT Mechanical Engineering students in Professor Sangbae Kim’s class explore why certain physical traits have evolved in animals in the natural world. Then they extract those useful principles that are applicable to robotic systems to solve such challenges as manipulation and locomotion in novel and interesting ways.
[ MIT ]
I get that it’s slightly annoying that robot vacuums generally cannot clean stairs, but I’m not sure that it’s a problem actually worth solving.
Also, the actual existence of this thing is super sketchy, and I wouldn’t give them any money just yet.
The fastest, tiniest, mouse-iest competition for how well robots can stick to smooth surfaces.
[ Veritasium ]
Art and language are pinnacles of human expressive achievement. This panel, part of the Stanford HAI Spring Symposium on 24 May 2023, offered conversations between artists and technologists about intersections in their work. Speakers included Ken Goldberg, professor of industrial engineering and operations research, University of California, Berkeley, and Sydney Skybetter, deputy dean of the College for Curriculum and Co-Curriculum and senior lecturer in theater arts and performance studies, Brown University. Moderated by Catie Cuan, Stanford University.
[ Stanford HAI ]
An ICRA 2023 Plenary from 90-year-old living legend Jasia Reichardt (who coined the term “uncanny valley” in 1978), linking robots with Turing, Fellini, Asimov, and Buddhism.
[ ICRA 2023 ]
Shares of Dish Network Corp. DISH were declining again Friday, down about 12% in afternoon action to pace S&P 500 SPX laggards. The shares fell 4.5% in Thursday’s session. The New York Post reported late Thursday that the company was looking to shed assets in a bid to meet its commitment to reaching 70% of the U.S. with its 5G network by the end of June. The report discussed skepticism about the company’s ability to sell enough assets to make a difference. Separately, Dish announced Friday morning that it would be offering its new Infinite Unlimited+ wireless subscription for $50 a month.
Donald Trump is the first former president in US history to face federal criminal charges – is this a gamechanger or just another chapter in the drama?
He really might be going to prison.
Donald Trump just became the first former president in American history to face federal criminal charges.
Animation has come a long way since 1900, when J. Stuart Blackton created The Enchanted Drawing, the earliest known animated film. The 90-second movie was created using stop-motion techniques, as flat characters, props, and backgrounds were drawn on an easel or made from paper.
Most modern animators rely on computer graphics and visualization techniques to create popular movies and TV shows like Finding Dory, Toy Story, and Paw Patrol. In the 1960s and ’70s, computer science pioneers David Evans and IEEE Life Member Ivan E. Sutherland led the development of many of the technologies animators now use. Their groundbreaking research, conducted at the University of Utah, in Salt Lake City, and at their company, Evans and Sutherland, helped jump-start the computer graphics industry.
A ceremony was held at the university on 24 March to recognize the computer graphics and visualization techniques with an IEEE Milestone. The IEEE Utah Section sponsored the nomination.
Computer graphics began in the 1950s with interactive games and visualization tools designed by the U.S. military to develop technologies for aviation, radar, and rocketry.
Evans and Sutherland, then computer science professors at the University of Utah, wanted to expand on the use of such tools by finding a way for computers to simulate objects and environments. In 1968 they founded Evans and Sutherland, locating the E&S headquarters in the university’s research park.
Many of today’s computer graphics luminaries—including Pixar cofounder Edwin Catmull, Adobe cofounder John Warnock, and Netscape founder Jim Clark, who also founded Silicon Graphics—got their start in the industry as E&S employees or as doctoral students working on research at the company’s facilities.
IEEE Milestone Dedication: Utah Computer Graphics youtu.be
While at E&S, the employees and students made fundamental contributions to computer graphics processes, says IEEE Fellow Christopher Johnson, a University of Utah computer science professor.
“David Evans, Ivan Sutherland, and their students and colleagues helped change the world,” Johnson says.
“The period from 1968 through 1978 was an extraordinary time for computer graphics,” adds Brian Berg, IEEE Region 6 history chair. “There was a rare confluence of faculty, students, staff, facilities, and resources to support research into computer vision algorithms and hardware that produced remarkable developments in computer graphics and visualization techniques. This research was responsible for the birth of much of continuous-tone computer graphics as we know it today.” Continuous-tone computer graphics have a virtually unlimited range of color and shades of gray.
Evans began his career in 1955 at Bendix—an aviation electronics company in Avon, Ohio—as manager of a project that aimed to develop an early personal computer. He left to join the University of California, Berkeley, as chair of its computer science department. He also headed Berkeley’s research for the Pentagon’s Advanced Research Project Agency (now known as the Defense Advanced Research Projects Agency).
In 1963 Evans became a principal investigator for ARPA’s Project Genie. He helped develop hardware techniques that enabled commercial use of time-shared computer systems.
In 1965 the University of Utah hired him to establish its computer science department after receiving an ARPA grant of US $5 million to investigate how the emerging field of computer graphics could play a role in the country’s technological competitiveness, according to Computer Graphics and Computer Animation.
In 1968 Evans asked Sutherland, a former colleague at Berkeley who was then an associate professor of electrical engineering at Harvard, to join him at the University of Utah, luring him with the promise of starting a company together. Sutherland was already famous in computer graphics circles, having created Sketchpad, the first computer-aided design program, for his Ph.D. thesis in 1963 at MIT.
The two founded E&S almost as soon as Sutherland arrived, and they began working on computer-based simulation systems.
The duo in 1969 developed the line-drawing system displays LDS-1 and LDS-2, the first graphics devices with a processing unit. They then built the E&S Picture System—the next generation of LDS displays.
Those workstations, as they were called, came to be used by most computer-generated-imagery production companies through the 1980s.
E&S also developed computer-based simulation systems for military and commercial training, including the CT5 and CT6 flight simulators.
In addition to hiring employees, E&S welcomed computer science doctoral students from the university to work on their research projects at the company.
“Almost every influential person in the modern computer-graphics community either passed through the University of Utah or came into contact with it in some way,” Robert Rivlin wrote in his book, The Algorithmic Image: Graphic Visions of the Computer Age.
One of the doctoral students was Henri Gouraud, who in 1971 developed an algorithm to simulate the differing effects of light and color across the surface of an object. The Gouraud shading method is still used by creators of video games and cartoons.
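The idea behind Gouraud's algorithm can be shown in a few lines: lighting is evaluated once per vertex, then linearly interpolated across the face. This is a minimal Python sketch of that interpolation; the function names and the simple Lambertian per-vertex term are illustrative, not from Gouraud's original paper.

```python
def lambert(normal, light_dir):
    """Per-vertex diffuse intensity: max(0, N . L) for unit vectors."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

def gouraud_interpolate(vertex_intensities, bary):
    """Interior intensity is a barycentric mix of the three vertex values."""
    i0, i1, i2 = vertex_intensities
    w0, w1, w2 = bary
    return w0 * i0 + w1 * i1 + w2 * i2

# Example: a triangle lit from straight above (+z).
light = (0.0, 0.0, 1.0)
normals = [(0.0, 0.0, 1.0),   # facing the light -> intensity 1.0
           (1.0, 0.0, 0.0),   # perpendicular    -> intensity 0.0
           (0.0, 0.0, 1.0)]   # facing the light -> intensity 1.0
per_vertex = [lambert(n, light) for n in normals]

# Shade the point at the triangle's centroid.
print(gouraud_interpolate(per_vertex, (1/3, 1/3, 1/3)))  # -> ~0.667
```

Because lighting is computed only at vertices, the method is cheap; the trade-off is that sharp highlights inside a face can be missed, which later per-pixel methods addressed.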
In 1974 Edwin Catmull, then also a doctoral student at the university, developed the principle of texture mapping, a method for adding complexity to a computer-generated surface. Catmull went on to help found Pixar in 1986 with computer scientist Alvy Ray Smith, an IEEE member. For his work in the industry, Catmull received the 2006 IEEE John von Neumann Medal.
Doctoral student Bui Tuong Phong in 1973 devised Phong shading, a modeling method that reflects light so computer-generated graphics can look shiny and plasticlike.
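The reflection model behind Phong shading can be written as an ambient term plus a diffuse term plus a specular term raised to a shininess exponent; it is the high exponent that produces the tight, shiny highlight. A minimal sketch, with arbitrary example coefficients:

```python
# A minimal Phong reflection model: ambient + diffuse + specular terms.
# n, l, v are unit vectors (surface normal, direction to the light, and
# direction to the viewer); the coefficients are arbitrary example values.

def phong_intensity(n, l, v, ka=0.1, kd=0.6, ks=0.3, shininess=32):
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    diffuse = max(dot(n, l), 0.0)
    # Reflect the light direction about the normal: r = 2(n.l)n - l
    r = tuple(2 * dot(n, l) * ni - li for ni, li in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess
    return ka + kd * diffuse + ks * specular
```

With the viewer looking straight down the reflection direction the specular term peaks, and it falls off sharply as the view angle moves away, giving surfaces their plasticlike sheen.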
“As a group, the University of Utah contributed more to the field of knowledge in computer graphics than any of its contemporaries,” Berg wrote in the Milestone proposal. “That fact is made most apparent both in the widespread use of the techniques developed and in the body of awards the innovations garnered.” The awards include several scientific and technical Oscars, an Emmy, and many IEEE medals.
Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.
The Milestone plaque displayed on a granite obelisk outside of the University of Utah’s Merrill engineering building reads:
In 1965 the University of Utah established a Center of Excellence for computer graphics research with Advanced Research Projects Agency (ARPA) funding. In 1968 two professors founded the pioneering graphics hardware company Evans & Sutherland; by 1978, fundamental rendering and visualization techniques disclosed in doctoral dissertations included the Warnock algorithm, Gouraud shading, the Catmull-Rom spline, and the Blinn-Phong reflection model. Alumni-founded companies include Atari, Silicon Graphics, Adobe, Pixar, and Netscape.
Russian president says beginning of offensive ‘is evidenced by the use of strategic reserves’. This live blog is closed
Russian military bloggers are saying that overnight the Ukrainians were making another attempt to break through Russian lines in occupied Zaporizhzhia in the area of Orikhiv.
The claims have not been independently verified.
Agatha Christie explains why Donald Trump is the first president to be indicted.
The post Trump’s Mistake Was Committing Small Crimes by Himself appeared first on The Intercept.
Kremlin commander claimed the mercenaries had kidnapped and tortured his soldiers during battle for Bakhmut
The Wagner group has been accused of stoking “anarchy” on Russia’s frontlines after one of the Kremlin’s military commanders claimed Yevgeny Prigozhin’s mercenaries had kidnapped and tortured his soldiers during the battle for Bakhmut.
In a video posted online, Lt Col Roman Venevitin also accused Wagner soldiers of stealing arms, forcing mobilised soldiers to sign contracts with Wagner, and attempting to extort weapons from the Russian defence ministry in exchange for releasing kidnapped soldiers.
The £28bn delay caps weeks of turmoil over Labour’s green ambitions, but if he’s serious about them, the leader will need to get used to it
A U-turn on the eve of a major policy announcement is not usually part of the plan for a government in waiting. Later this month the Labour leader, Keir Starmer, will set out his pitch on energy, jobs and net zero, hoping to place a green economy at the centre of his vision for revitalising the UK.
But with just weeks to go, his shadow chancellor, Rachel Reeves, admitted on Friday that the key plank of that vision – the party’s much-heralded flagship commitment to spend £28bn a year on green investment – would be delayed. She blamed the economic mess being left by the Conservative party, and insisted the target would be met in the second half of a Labour parliament.
Readers say they’d love to ditch their cars as Stuart Jeffries has, but it’s not easy in rural areas
Stuart Jeffries’ recognition that he shares the privilege of Londoners when it comes to public transport (I ditched my car – and improved my fitness, sleep and bank balance, 7 June) reminds us that this is a massive component of regional inequality that challenges the levelling-up waffle.
I gave up the car in November 2022, and I too am privileged – but in a completely different way. I know how to get the best out of a failing public transport system better than most people in West Yorkshire. I have studied bus and train timetables since childhood, I have done service on a transport authority and I have honed my limited IT skills to track wayward buses and trains. The latter is necessary because timetables often morph into fairytales.
There is a chasm between how shocking the pair clearly think Vulgar is and how shocking it actually is. A chorus wouldn’t have gone amiss either
The most shocking thing about Madonna and Sam Smith’s much-trailed collaborative single may be that it apparently took seven people to write – among them Smith and Madonna, a chunk of the team behind Smith’s 2022 chart-topper Unholy, and blue-chip songwriter-for-hire Ryan Tedder. There’s almost nothing to its two and a half minutes: a Bollywood-ish string motif, a two-note melody and a beat that, when it really kicks in, recalls the kind of hard-edged house music that proliferated in 90s New York gay clubs. Ironically, the atmosphere it conjures isn’t a million miles removed from that of Junior Vasquez’s 1996 single If Madonna Calls, the self-styled “bitch track” that led to a breakdown in relations between the Sound Factory’s resident DJ and the singer.
“Vulgar will make you dance, don’t need a chorus,” sing Smith and Madonna in unison, a line that’s hard not to respond to with a shrugging, “Well, if you say so, but one might have helped.”
Signals point to explosion early on Tuesday, as Ukraine says intercepted call proves Russia was responsible
Evidence is growing that the Nova Kakhovka dam was blown up, after the publication of seismic data showing there was a blast at the site in the early hours of Tuesday.
Norsar, the Norwegian Seismic Array, said signals from a regional station in Romania pointed to an explosion at 2.54am. Norsar did not draw conclusions on who was responsible.
The stars’ beautiful, haunting show follows a mother who sues the NHS to stop her child from dying. They open up about parental terror, inequality and dad dancing
If Best Interests – a drama about a mother who takes the NHS to court after doctors decide to allow her teenage daughter to die – feels too harrowing to countenance, you’re not alone: even the cast can’t bring themselves to watch it. “It was hard enough doing it on the day,” says Sharon Horgan of playing Nicci, the mother in question. Michael Sheen, who co-stars as her husband Andrew – a man devastated by his daughter’s illness but unwilling to back his wife’s appeal – is also avoiding it. “I’m more nervous than usual,” he admits. “I know it’s going to be a difficult watch.”
That’s an understatement. Best Interests begins with Nicci and Andrew on a train, giddily happy, slightly frisky and, as we soon realise, uncharacteristically carefree. Over the next four hours, we see their relationship falter under the pressure of caring for their younger daughter Marnie (Niamh Moriarty) who has muscular dystrophy, as consultants tell them her condition has progressed beyond all medical intervention – something that leads Nicci to mount a headline-grabbing, life-upending legal challenge. It’s little wonder Horgan had doubts about taking the role in the first place. “I was really nervous about how much this was going to fuck me up,” she says. It ended up being as crushing as she feared. “We spent a lot of time in terrible pain. You have to go to some really awful places to get yourself into that mindset and stay there. Sometimes you come home and go: ‘What kind of a weird job is this?’”
There’s no telling how many indictments he will collect before the election. And the sad fact is that his party doesn’t seem to care
Donald Trump announced his latest indictment last night in front of a painting of a guy literally twirling his moustache. “I am an innocent man,” the former president insisted, next to this cartoon shorthand for villainy. The oil painting in question is not so much an artwork as a lift-music version of an artwork, and seems to hang at Trump’s Bedminster golf club in New Jersey – which is the same place he buried his former wife Ivana, as all admirers of both exquisite taste and private-cemetery tax breaks may already know. Either way, Ivana’s there, right near the first tee. It’s what she would have wanted.
As for her surviving ex-husband, it’s fashionable to say that anything that would represent a catastrophic setback for any other human being is exactly what Trump would have wanted. By this metric, his indictment on federal charges for the first time, including under the Espionage Act, is an absolute gift and a triumph. He’ll use it to pull in fundraising, it’ll rally his base, it’ll make every Republican beta – which is to say, every Republican – feel they have to swear loyalty to him. Furthermore, it’s already got him right where he most loves to be: with everyone talking about him. And these are all reasonable points – or at least reasonable in a through-the-looking-glass way, given that to many outside observers the United States passed reason two or three election cycles ago. If only they could invade themselves to bring democracy.
Marina Hyde is a Guardian columnist
On Tuesday 13 June, Marina Hyde will join Gary Younge at a Guardian Live event in Brighton. Readers can join this event in person
What Just Happened?! by Marina Hyde (Guardian Faber, £9.99). To support the Guardian and Observer, order your copy at guardianbookshop.com. Delivery charges may apply
Do you have an opinion on the issues raised in this article? If you would like to submit a response of up to 300 words by email to be considered for publication in our letters section, please click here.
Britain’s health is a national scandal, not just because of the state of the NHS, but because the government refuses to take action on our diets
In April 1994, the CEOs of the US’s seven biggest tobacco companies swore on oath before a Senate committee that nicotine was “not addictive”. At the time it was estimated that 3,000 American children were being induced by said companies to start smoking every day.
Last Monday, the BBC’s Panorama programme came close to repeating that scene with Britain’s food manufacturers. The products at issue are ultra-processed foods (UPF). Their makers’ denial of the harm these products may cause is as adamant as those tobacco execs’ once was, and the consequences could be equally lethal.
Simon Jenkins is a Guardian columnist
Kaspersky is reporting a zero-click iOS exploit in the wild:
Mobile device backups contain a partial copy of the filesystem, including some of the user data and service databases. The timestamps of the files, folders and the database records allow one to roughly reconstruct the events that happened to the device. The mvt-ios utility produces a sorted timeline of events into a file called “timeline.csv,” similar to a super-timeline used by conventional digital forensic tools.
Using this timeline, we were able to identify specific artifacts that indicate the compromise. This allowed us to move the research forward and to reconstruct the general infection sequence:...
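The timeline-scanning step Kaspersky describes can be pictured with a toy filter like the one below. The column names ("timestamp", "event") and the indicator string are invented for illustration; they are not mvt-ios's actual schema or Kaspersky's real indicators of compromise.

```python
import csv
import io

# Toy sketch of scanning an mvt-ios style "timeline.csv" for suspicious
# entries. The columns and the indicator string are illustrative
# assumptions, not the tool's real schema or actual IoCs.

SUSPICIOUS_SUBSTRINGS = ["BackupAgent"]  # hypothetical artifact name

def flag_events(timeline_csv_text):
    rows = csv.DictReader(io.StringIO(timeline_csv_text))
    return [row for row in rows
            if any(s in row.get("event", "") for s in SUSPICIOUS_SUBSTRINGS)]

sample = ("timestamp,event\n"
          "2023-06-01T10:00:00,BackupAgent executed\n"
          "2023-06-01T10:01:00,normal activity\n")
hits = flag_events(sample)  # only the suspicious row survives the filter
```

The real workflow correlates many such artifacts by timestamp to reconstruct the infection sequence; this sketch only shows the matching step.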
Prince Harry became one of the most senior royals ever to give evidence in an English court this week. Jim Waterson reports from the high court in London where tabloid phone hacking was back under the microscope
For Prince Harry’s entire life, he has been the subject of media stories and of special interest to tabloid newspapers. Everything from childhood injuries to his early relationships, his entry to and exit from a military career right up to his marriage and subsequent departure from official royal life. Many of those stories were written, he claims, with the aid of illegal practices such as phone hacking.
As Jim Waterson tells Nosheen Iqbal, Harry is also suing the publishers of the Sun and the Daily Mail, but this week he got his day in court against the Mirror group. Under cross examination, he made his case that reporters seemingly had access to information that could only have come from underhand methods. The Mirror group has admitted to hacking phones in the past but maintains that Harry’s was not one of them. Their case is that the information they printed was from sources close to the prince or already in the public domain.
There’s a lot of risk in deploying new technology for cutting-edge computer chips. So Intel executives were understandably cautious in executing a plan that next year simultaneously introduces both a new transistor—RibbonFET—and a new way of powering it—PowerVia.
To take some of the risk out of this high-wire act, the company has built and tested processor cores composed of Intel’s current generation of transistors combined with PowerVia. The resulting cores saw more than a 6 percent frequency boost as well as more compact designs and 30 percent less power loss. Just as important, the tests proved that including backside power doesn’t make the chips more costly, less reliable, or more difficult to test for defects. Intel is presenting the details of these tests in Tokyo next week at the IEEE Symposium on VLSI Technology and Circuits.
“We wanted to make sure we could derisk…understand everything about PowerVia, and then go the next step and integrate with RibbonFET,” says Ben Sell, Intel’s vice president of technology development.
PowerVia is Intel’s version of a technology called backside power delivery. Today, chips are constructed with the transistors at the surface of the silicon and all the interconnects that power them and transmit their data signals built above them. Backside power removes all the power-delivering interconnects to beneath the silicon. This has two main effects. First, it leaves more room for the data interconnects above the silicon. And second, the power interconnects can be made larger and therefore less resistive.
Backside power delivery moves the power interconnects from above the silicon to below it. Intel
That combination improves performance in a few ways. First, with an easier path for power to flow, circuits on the CPU experience less voltage droop; in other words, there is a smaller transient fall in voltage when demand for current increases from, say, a large block of logic switching on. With less droop, transistors can be run faster.
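The droop argument is just Ohm's law: the transient dip is the current step times the rail resistance, so a lower-resistance rail means a smaller dip. A back-of-the-envelope sketch with invented numbers (these are not Intel's figures):

```python
# Back-of-the-envelope Ohm's-law sketch of voltage droop. The resistance
# and current values are invented for illustration, not Intel's figures.

def voltage_droop(current_step_a, rail_resistance_ohm):
    """Transient dip V = I * R when a logic block demands a current step."""
    return current_step_a * rail_resistance_ohm

frontside = voltage_droop(2.0, 0.010)  # 2 A step through a 10 mOhm rail
backside = voltage_droop(2.0, 0.004)   # same step through a chunkier 4 mOhm rail
# The lower-resistance backside rail sees a markedly smaller droop, so the
# supply at the transistors stays closer to nominal and they can run faster.
```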
Second, cores can be made more compact, decreasing the length of interconnects between logic cells, which speeds things up. When the standard logic cells that make up the processor core are laid out on the chip, interconnect congestion keeps them from packing together perfectly, leaving loads of blank space between the cells. With less congestion among the data interconnects, the cells fit together more tightly, with some portions up to 95 percent filled. Sell says that’s a double-digit improvement. What’s more, the lack of congestion allowed some of the smallest interconnects to spread out a bit, reducing parasitic capacitance that hinders performance.
The 6 percent gain from these advantages is about half what’s typically delivered when a chipmaker scales down transistors from one technology node to the next. PowerVia delivers it with no change to the transistors.
Making PowerVia-enabled chips requires several extra steps and leads to the unusual result that there is hardly any silicon left in the chip. Things start out pretty normal: The transistors, which in this case are FinFETs made using the Intel 4 process, are constructed at the surface of the silicon, as usual. The main difference is that a group of deep, narrow holes are also drilled and then filled in with metal. These nano-TSVs (for through-silicon vias) will be important later. From there, layers of interconnect are formed above the transistors to link them together into logic cells and larger circuits. So far, so regular.
Then the process takes a turn. A blank silicon wafer, called a carrier wafer, is bonded to the top of those interconnects and the whole thing is flipped over. Then the bottom of the original wafer (now on top) is polished away until the ends of the nano-TSVs are exposed. At that point, layers of comparatively chunky interconnects are built up to connect to the nano-TSVs and form the backside power delivery network. These interconnect layers terminate in the bond pads that will link the chip to the package and the rest of the computer.
The resulting chip is thus made up of a large layer of blank silicon for support, a layer of data interconnects, a vanishingly narrow layer of silicon transistors, and a layer of power interconnects.
It’s hard to spot the silicon in this PowerVia-enabled processor. (Hint: It’s the bit of white in the middle.) Most of the chip is made up of the signal interconnects above and the much chunkier power interconnects below the transistors. Intel
You might expect that having to build interconnects on both sides of the silicon would make the cost of the chip shoot up. But early on, Intel saw a reason why that would not be the case, says Sell. The smallest, most tightly packed layer of interconnects, called M0, is also the costliest to produce. It can require more than one pass through chipmaking’s most expensive step, extreme ultraviolet lithography. But with no power interconnects to get in the way, the lines in the M0 layer could be six nanometers further apart than they are today. That may not seem like much, but it means it takes less EUV effort to make them. For the process to be introduced next year and for its successor, “the cost savings we get from not scaling so aggressively more than offsets the additional cost from the backside power-delivery process,” Sell says.
If the plans for PowerVia were going to work, the technology had to meet certain criteria, most of which have to do with not making things worse: Despite existing in a much thinner layer of silicon, the transistors had to work just as well; the power delivery network had to be just as reliable as those built on the front side of the silicon; the heat generated in the silicon couldn’t get out of hand, despite the transistors being sandwiched between interconnect layers; and the ability to debug ICs and spot design defects can’t be hampered.
It took some doing to meet these criteria. For example, the power-interconnect process had to be tweaked to keep from affecting the transistors. And Intel had to set some design rules to keep thermal issues in line. It also had to come up with new methods to make debugging work.
On top of all that, Intel engineers had to ensure that the PowerVia chips’ yield—the fraction of good chips per wafer—was on target to reach high-volume manufacturing, even though these particular chips will never be sold. The goal here was for the yield of Intel 4 PowerVia chips to match those of Intel 4 chips from 9 months ago. PowerVia chips were always going to lag, because any improvements to Intel 4’s yield would take time to translate to the PowerVia experiments. “We did a bit better than that,” says Sell. PowerVia’s yield curve follows Intel 4’s by only 6 months.
With the process for PowerVia worked out, the only change Intel will have to make in order to complete its move from Intel 4 to the next node, called 20A, is to the transistor. RibbonFET, Intel’s take on nanosheet, or gate-all-around, transistors, will then slot in to the already established interconnect scheme.
If all goes well, and Sell says all is going well, the 20A process will be making the company’s Arrow Lake CPUs in 2024. The following technology generation, called 18A, is meant for both Intel products and foundry customers.
Success would put Intel ahead of TSMC and Samsung in offering both nanosheet transistors and backside power. Samsung has already moved to a gate-all-around device, and it’s unclear when it will integrate backside power. TSMC is scheduled to offer gate-all-around devices in 2025, but it won’t be adding backside power delivery until at least 2026.
Authorities close off 46-mile stretch of coastline after man attacked by tiger shark near Hurghada
A Russian man has died after being mauled by a shark off one of Egypt’s Red Sea resorts, Egyptian and Russian authorities have said.
Egypt’s environment ministry said the man was killed on Thursday after being attacked by a tiger shark in the waters near the city of Hurghada. Authorities closed off a 46-mile (74km) stretch of the coastline, announcing it would remain off-limits until Sunday.
Rufo’s Documentary Foundation received an influx of untraceable money in 2021, as his national profile grew.
The post Funded by Dark Money, Chris Rufo’s Nonprofit Stokes the Far Right’s Culture War appeared first on The Intercept.
Sultan Al Jaber – Cop28 president and CEO of state oil firm – is ‘ally the climate movement needs’, posts say
An army of fake social media accounts on Twitter and the blogging site Medium have been promoting and defending the controversial hosting of a UN climate summit by the United Arab Emirates.
The president of the Cop28 climate talks is Sultan Al Jaber, who is also the chief executive of the state oil giant Adnoc, which has major net zero-busting expansion plans.
Simulating how air and gases flow through a jet engine is one of the most challenging computing problems in engineering. Engineers at Rolls-Royce think quantum computers could give these calculations a significant boost. So they’ve collaborated with Nvidia to simulate the largest ever quantum circuit for computational fluid dynamics (CFD).
With scores of fan blades spinning at thousands of rotations a minute, and airflows measured in hundreds of kilometers per hour, modeling the inner workings of a jet turbine is incredibly complex. But it’s also critical for designing more powerful and efficient engines, something that’s becoming increasingly important as the aviation industry attempts to reduce its climate impact.
Rolls-Royce already makes wide use of CFD simulations in its design processes, but the enormous computational resources required means its engineers typically have to compromise on fidelity or scale. Quantum computers could provide the speedup needed, though, says Leigh Lapworth, a fellow in computational science at Rolls-Royce.
Today’s quantum processors are still too small to simulate complex systems like jet engines. But Lapworth says that Rolls-Royce still sees value in developing quantum algorithms for CFD, which is why it has teamed up with Nvidia and Israeli quantum-software company Classiq. Last month, the companies announced that they had used Nvidia’s GPU-based quantum simulator to test a 39-qubit quantum CFD circuit with a depth of 10 million layers. That’s the largest such simulation to date, and it “unlocks the next step of our research agenda,” says Lapworth.
“We know from previous experience, getting these applications to a production-ready standard takes just as long as the hardware development,” he says. “We need to do the algorithms development and the application development, such that when those devices appear, we’ve got the applications ready to go.”
Simulating quantum circuits involves using classical computers to model all of the quantum interactions that would happen inside a quantum processor. The number of required calculations scales exponentially with the number of qubits, however, so it’s not surprising that a classical computer simulation of a quantum computer can achieve meaningful results only at smaller size scales.
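The exponential scaling is easy to make concrete: an n-qubit state vector holds 2^n complex amplitudes, so memory doubles with every added qubit. A quick calculation, assuming the common 16-byte complex128 amplitude format:

```python
# An n-qubit state vector holds 2**n complex amplitudes, so the memory a
# full-state simulator needs doubles with every added qubit. Assumes the
# common 16-byte complex128 amplitude format.

def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

GIB = 2 ** 30
mem_30 = statevector_bytes(30) / GIB  # 16 GiB: workstation territory
mem_39 = statevector_bytes(39) / GIB  # 8,192 GiB: cluster-scale memory
```

This is why a 39-qubit, 10-million-layer circuit is a record-setting simulation rather than a routine one, and why each additional qubit roughly doubles the hardware required.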
Yet, the insights this smaller-size simulation can still provide, says Lapworth, are proving valuable in further designing and optimizing the quantum algorithm.
Rolls-Royce’s approach to quantum CFD simulation is based on the HHL algorithm, which can solve linear algebra problems exponentially faster than classical approaches can, given enough qubits. CFD systems use both linear and nonlinear algebra, so the solution is actually a hybrid scheme that leaves the nonlinear parts of the problem to classical hardware.
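The hybrid structure can be sketched as a classical outer iteration that repeatedly calls a linear solver, which is the piece a future HHL routine would replace. The toy 2x2 system below is our own stand-in for the linearized CFD operator, with `np.linalg.solve` playing the role of the quantum solver:

```python
import numpy as np

# Sketch of the hybrid scheme: a classical outer loop handles the
# nonlinearity by fixed-point iteration, while each inner linear solve
# A(x) y = b is the step a quantum HHL routine would take over.
# np.linalg.solve is the classical stand-in; the toy matrix below is
# not a real CFD operator.

def hybrid_solve(assemble_A, b, x0, iters=25):
    x = x0
    for _ in range(iters):
        x = np.linalg.solve(assemble_A(x), b)  # <- quantum-candidate step
    return x

def assemble_A(x):
    # State-dependent matrix: the mild nonlinearity lives in A itself
    return np.array([[2.0 + 0.1 * x[0], 0.0],
                     [0.0, 3.0]])

b = np.array([2.0, 3.0])
x = hybrid_solve(assemble_A, b, np.zeros(2))
```

In a real scheme the outer loop would be a Newton or Picard iteration over the discretized flow equations, but the division of labor is the same: nonlinear bookkeeping stays classical, and only the repeated linear solve is a candidate for quantum speedup.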
In theory, the approach should lead to significant speedups on future quantum computers. But the only way to make sure that the algorithm works is to run it and compare the results to those produced by a classical CFD simulator, something not possible with today’s rudimentary quantum hardware. “There’s no quantum computer on the planet that could run that circuit today,” says Lapworth.
Simulating these kinds of quantum algorithms on classical hardware can certainly be tricky, says Lapworth. While they can solve massive computational challenges that are intractable for classical computers, quantum computers’ complexity means that on smaller problems they may also run significantly slower, he says. That’s where Nvidia comes in. By using Nvidia’s A100 GPUs to accelerate their simulations, the team has now managed to run the circuit at a scale large enough to perform meaningful comparisons against experimental data of the jet engines and their components at work, says Lapworth.
The approach Rolls-Royce has taken makes sense, says Frank Gaitan, a research physicist at the University of Maryland, because it uses a similar strategy to existing CFD simulation algorithms. It just involves swapping out a classical linear equation solver for a quantum one. But he points out that the HHL algorithm does have limitations, because unless the input meets some fairly stringent constraints, the exponential speedup can be lost. His group has devised a different quantum algorithm for CFD that only achieves a quadratic speedup and only works on problems that feature turbulence, but it can deal with both linear and nonlinear calculations and is considerably simpler, he says.
Nonetheless, he thinks the research shows people are recognizing that quantum computers could have far more applications than originally assumed. “Prior to some of this work, I think most people thought quantum computers were for factoring integers, searching databases, that sort of stuff,” he says. “This is opening up a large new area for quantum computing, and for industries that thought quantum computing had nothing for them.”
Rolls-Royce isn’t the only company intrigued by quantum computing’s potential for CFD applications. A spokesman for Pennsylvania-based simulation-software provider Ansys said the company is working on quantum approaches for speeding up both CFD and other physics simulations. And David Heiny, CEO of German provider SimScale, also says the technology is on their radar. “Quantum approaches to CFD and simulation in general are something SimScale is interested in and keeping an eye on,” he says.
The powerful lights mounted on the border wall threaten the dark skies that make southern Arizona a biodiversity hotspot.
The post The Feds Have Thousands of Stadium Lights on the Border. Switching Them On Would Devastate Desert Ecosystems. appeared first on The Intercept.
A growing number of countries are preparing to shift from using the U.S. dollar in trade, which could undermine the greenback’s global supremacy.
The post Monetary Blowback: How U.S. Wars, Sanctions, and Hegemony Are Threatening the Dollar’s Reserve Currency Dominance appeared first on The Intercept.
Facing questions about Gail Gitcho’s work as a foreign agent, the GOP-affiliated Women’s Democracy Network scrubbed her from its site.
The post GOP Lobbyist Claimed to Be “Empowering Women” — but Worked for Saudi Theocracy’s LIV Golf appeared first on The Intercept.
Inspired by dog-agility courses, a team of scientists from Google DeepMind has developed a robot-agility course called Barkour to test the abilities of four-legged robots.
Since the 1970s, dogs have been trained to nimbly jump through hoops, scale inclines, and weave between poles in order to demonstrate agility. To take home ribbons at these competitions, dogs must have not only speed but keen reflexes and attention to detail. These courses also set a benchmark for how agility should be measured across breeds, which is something that Atil Iscen—a Google DeepMind scientist in Denver—says is lacking in the world of four-legged robots.
Despite great developments in the past decade, including robots like MIT’s Mini Cheetah and Boston Dynamics’ Spot, which have shown how animal-like robots’ movement can be, a lack of standardized tasks for these types of robots has made it difficult to compare their progress, Iscen says.
[Video: Quadruped Obstacle Course Provides New Robot Benchmark (YouTube)]
“Unlike previous benchmarks developed for legged robots, Barkour contains a diverse set of obstacles that requires a combination of different types of behaviors such as precise walking, climbing, and jumping,” Iscen says. “Moreover, our timing-based metric to reward faster behavior encourages researchers to push the boundaries of speed while maintaining requirements for precision and diversity of motion.”
For their reduced-size agility course—the Barkour course covers about 25 square meters instead of the up to 743 square meters used for traditional courses—Iscen and colleagues chose four obstacles from traditional dog-agility courses: a pause table, weave poles, an A-frame climb, and a jump.
The Barkour robotic-quadruped benchmark course uses four obstacles from traditional dog-agility courses and standardizes a set of performance metrics around subjects’ timings on the course. Google
“We picked these obstacles to test multiple axes of agility, including speed, acceleration, and balance,” he says. “It is also possible to customize the course further by extending it to contain other types of obstacles within a larger area.”
As in dog-agility competitions, robots that enter this course are deducted points for failing or missing an obstacle, as well as for exceeding the course’s time limit of roughly 11 seconds. To see how difficult their course was, the DeepMind team developed two different learning approaches to the course: a specialist approach that trained on each type of skill needed for the course—for example, jumping or slope climbing—and a generalist approach that trained by studying simulations run using the specialist approach.
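The scoring scheme can be pictured as a small penalty function: start from a perfect score, deduct per failed obstacle, and deduct further for time over the limit. The weights below are illustrative assumptions, not the published Barkour constants:

```python
# Illustrative Barkour-style penalty function: start from a perfect score,
# deduct per failed obstacle, and deduct further for time over the limit.
# The penalty weights are our own assumptions, not the paper's constants.

def agility_score(obstacles_failed, run_time_s, time_limit_s=11.0,
                  obstacle_penalty=0.2, time_penalty_per_s=0.1):
    score = 1.0 - obstacle_penalty * obstacles_failed
    score -= time_penalty_per_s * max(run_time_s - time_limit_s, 0.0)
    return max(score, 0.0)

# A clean 10 s run keeps a perfect score, while a flawless but much
# slower 25 s run is wiped out by the time penalty.
```

Folding timing into the score this way is what, per Iscen, pushes researchers toward speed without letting them sacrifice precision.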
After training four-legged robots in both of these different styles, the team released them onto the course and found that robots trained with the specialist approach slightly edged out those trained with the generalized approach. The specialists completed the course in about 25 seconds, while the generalists took closer to 27 seconds. However, robots trained with both approaches not only exceeded the course time limit but were also surpassed by two small dogs—a Pomeranian/Chihuahua mix and a Dachshund—that completed the course in less than 10 seconds.
Here, an actual dog [left] and a robotic quadruped [right] ascend and then begin their descent on the Barkour course’s A-frame challenge. Google
“There is still a big gap in agility between robots and their animal counterparts, as demonstrated in this benchmark,” the team wrote in their conclusion.
While the robots’ performance may have fallen short of expectations, the team writes that this is actually a positive because it means there’s still room for growth and improvement. In the future, Iscen hopes that the easy reproducibility of the Barkour course will make it an attractive benchmark to be employed across the field.
“We proactively considered reproducibility of the benchmark and kept the cost of materials and footprint to be low,” Iscen says. “We would love to see Barkour setups pop up in other labs and we would be happy to share our lessons learned about building it, if other research teams interested in the work can reach out to us. We would like to see other labs adopting this benchmark so that the entire community can tackle this challenging problem together.”
As for the DeepMind team, Iscen says they’re also interested in exploring another aspect of dog-agility courses in their future work: the role of human partners.
“At the surface, (real) dog-agility competitions appear to be only about the dog’s performance. However, a lot comes down to the fleeting moments of communication between the dog and its handler,” he explains. “In this context, we are eager to explore human-robot interactions, such as how a handler can work with a legged robot to guide it swiftly through a new obstacle course.”
A paper describing DeepMind’s Barkour course was published on the arXiv preprint server in May.
New paper: “Lessons Lost: Incident Response in the Age of Cyber Insurance and Breach Attorneys”:
Abstract: Incident Response (IR) allows victim firms to detect, contain, and recover from security incidents. It should also help the wider community avoid similar attacks in the future. In pursuit of these goals, technical practitioners are increasingly influenced by stakeholders like cyber insurers and lawyers. This paper explores these impacts via a multi-stage, mixed methods research design that involved 69 expert interviews, data on commercial relationships, and an online validation workshop. The first stage of our study established 11 stylized facts that describe how cyber insurance sends work to a small number of IR firms, drives down the fee paid, and appoints lawyers to direct technical investigators. The second stage showed that lawyers, when directing incident response, often: introduce legalistic contractual and communication steps that slow down incident response; advise IR practitioners not to write down remediation steps or to produce formal reports; and restrict access to any documents produced...
Looking for luscious local cuisine, exotic cocktails and sandy beaches? Or would you rather explore the area’s rich history and natural beauty? There’s plenty to keep you mesmerised
Vibrant, culturally rich and visually stunning – the islands of the Caribbean are like nowhere else on Earth. From cocktails to coral reefs, hiking to heritage, they offer something for all tastes and tempos. Here are seven unmissable highlights of the Caribbean.
Enjoy sunset cocktails on the white sand beaches of Barbados or Jamaica
Pristine beaches and perfect sunsets go hand in hand in the Caribbean – as do perfectly made cocktails to enjoy them with. Just pick your favourite beach – be it along the platinum coast of Barbados, or at the heart of the rich, reggae-infused warmth of Jamaica – add a classic mojito or piña colada (cocktails are available to all-inclusive holidaymakers at Sandals Resorts), and watch the sun sink into the limitless blue horizon. Ready to take the sunset experience one step further? At some of its luxury, all-inclusive Jamaica resorts, Sandals has over-water bars, complete with broad hammocks for lazing just above the gently lapping water.
The fight could influence whether Georgia stays blue in 2024’s Senate and presidential races.
The post No One Believes in Cop City. So Why Did Atlanta’s City Council Fund It? appeared first on The Intercept.
The definition conflates criticism of Israel with antisemitism. A new report details how it’s been used to justify punitive action against Palestine advocates in Europe.
The post Biden Embraces Antisemitism Definition That Has Upended Free Speech in Europe appeared first on The Intercept.
The intelligence report described the demonstrations as a “violent far-left occupation” — a phrase copied directly from an article by Ngo a day earlier.
The post DHS Intel Report on Cop City Protesters Cribbed Far-Right Activist Andy Ngo appeared first on The Intercept.
In an interview with The Intercept, the ousted Pakistani prime minister, just released from arrest, accuses the country’s military of deepening a political crisis.
The post Imran Khan: U.S. Was Manipulated by Pakistan Military Into Backing Overthrow appeared first on The Intercept.
Dina Genkina: Hi. I’m Dina Genkina for IEEE Spectrum‘s Fixing the Future. This episode is brought to you by IEEE Xplore, the digital library with over 6 million pieces of the world’s best technical content. In the November issue of IEEE Spectrum, one of our most popular stories was about code that writes its own code. Here to probe a little deeper is the author of that article, Craig Smith. Craig is a former New York Times correspondent and host of his own podcast, Eye On AI. Welcome to the podcast, Craig.
Craig Smith: Hi.
Genkina: Thank you for joining us. So you’ve been doing a lot of reporting on these new artificial intelligence models that can write their own code to whatever capacity that they can do that. So maybe we can start by highlighting a couple of your favorite examples, and you can explain a little bit about how they work.
Smith: Yeah. Absolutely. First of all, the reason I find this so interesting is that I don’t code myself. And I’ve been talking to people for a couple of years now about when artificial intelligence systems will get to the point that I can talk to them, and they’ll write a computer program based on what I’m asking them to do, and it’s an idea that’s been around for a long time. And one thing is a lot of people think this exists already because they’re used to talking to Siri or Alexa or Google Assistant or some other virtual assistant. And you’re not actually writing code when you talk to Siri or Alexa or Google Assistant. That changed when they built GPT-3, the successor to GPT-2, which was a much larger language model. And these large language models are trained on huge corpuses of data and based primarily on something called a transformer algorithm. They were really focused on text. On human natural language.
But kind of a side effect was that there’s a lot of HTML code out on the internet. And GPT-3, it turns out, learned HTML code just as it learned English natural language. The first application of these large language models’ ability to write code came from GitHub. Together with OpenAI and Microsoft, they created a product called Copilot. And it’s pair programming. I mean, oftentimes when programmers are writing code, they work in teams. In pairs. And one person writes kind of the initial code and the other person cleans it up or checks it and tests it. And if you don’t have someone to work with, then you have to do that yourself, and it takes twice as long. So GitHub created this thing based on GPT-3 called Copilot, and it acts as that second set of hands. And so when you begin to write a line of code, it’ll autocomplete that line, just as happens with Microsoft Word now or any word-processing program. And then the coder can either accept or modify or delete that suggestion. GitHub recently did a survey and found that coders can code twice as fast using Copilot to help autocomplete their code than if they were working on their own.
Genkina: Yeah. So maybe we could put a bit of a framework to this. So I guess programming in its most basic form, like back in the old days, used to be with these punch cards, right? And when you get down to what you’re telling the computer to do, it’s all ones and zeros. So the base way to talk to a computer is with ones and zeros. But then people developed more complicated tools so that programmers don’t have to sit around and type ones and zeros all day long: programming languages, from simpler ones to slightly more sophisticated, higher-level programming languages, so to speak. And they’re kind of closer to words, although definitely not natural language. They will use some words, but they still have to follow this somewhat rigid logical structure. So I guess one way to think about it is that these tools are kind of moving on to the next level of abstraction above that, or trying to do so.
Smith: That’s right. And that started really in the forties, or I guess in the fifties, at a company called Remington Rand, where a woman named Grace Hopper introduced a programming language that used English-language vocabulary. So that instead of having to write in mathematical symbols, the programmers could write import, for example, to ingest some other piece of code. And that started this ladder of increasingly efficient languages to where we are today with things like Python. I mean, they’re primarily English-language words and different kinds of punctuation. There isn’t a lot of mathematical notation in them.
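The ladder of abstraction described here can be made concrete with a small Python example: the same computation written once with explicit low-level control flow and once as a single higher-level expression.

```python
# The same task, summing the squares of the even numbers in a list,
# written at two levels of abstraction in Python.

data = [1, 2, 3, 4, 5, 6]

# Lower level: explicit state and control flow, closer to the machine.
total = 0
for x in data:
    if x % 2 == 0:
        total += x * x

# Higher level: a single declarative expression, closer to stating intent.
total_hl = sum(x * x for x in data if x % 2 == 0)

assert total == total_hl == 56
```

Each rung of the ladder trades explicit mechanism for a statement of intent; natural-language interfaces are, in this framing, one more rung.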
So what’s happened with these large language models, what happened with HTML code and is now happening with other programming languages, is that you’re able to speak to them. Instead of—as with CodeWhisperer or Copilot—writing in a programming language and having the system autocomplete what you started writing, you can write in natural language and the computer will interpret that and write the code associated with it. And that opens up this vista of what I’m dreaming of, of being able to talk to a computer and have it write a program.
The problem with that is that, as I was saying, natural language is so imprecise that you either need to learn to speak or write in a very constrained way for the computer to understand you. Even then, there’ll be ambiguities. So there’s a group at Microsoft that has come up with this system called T coder. It’s just a research paper now. It hasn’t been productized. But you tell the computer that you want it to do something in very spare, imprecise language. And the computer will see that there are several ways to code that phrase, and so the computer will come back and ask for clarification of what you mean. And that interaction, that back-and-forth, then refines the meaning or the intent of the person who’s talking or writing instructions to the computer to the point that it’s adequately precise, and then the computer generates the code.
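T coder itself is a research paper whose internals aren't described here; purely as an illustration, the clarify-then-generate loop could look like the sketch below, with a hypothetical lookup table of candidate interpretations standing in for a real language model.

```python
# Toy sketch of a clarify-then-generate loop. A hypothetical lookup table
# of candidate interpretations stands in for a real language model.

CANDIDATES = {
    "sort the users": [
        ("by name, ascending", "sorted(users, key=lambda u: u['name'])"),
        ("by age, descending",
         "sorted(users, key=lambda u: u['age'], reverse=True)"),
    ],
}

def generate_with_clarification(request, choose):
    """If the request is ambiguous, ask the user to pick an interpretation."""
    options = CANDIDATES[request]
    if len(options) == 1:
        return options[0][1]
    picked = choose([label for label, _ in options])  # ask for clarification
    return dict(options)[picked]

# In a real system `choose` would prompt the user; here it just picks one.
code = generate_with_clarification("sort the users",
                                   lambda labels: "by age, descending")
```

The back-and-forth lives entirely in `choose`: the system surfaces the ambiguity it found, and the user's answer pins down the intent before any code is emitted.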
So I think eventually there will be very high-level data scientists that learn coding languages, but it opens up software development to a large swath of people who will no longer need to know a programming language. They’ll just need to understand how to interact with these systems. And that will require them to understand, as you were saying at the onset, the logical flow of a program and the syntax of programs, of programming languages and be aware of the ambiguities in natural language.
And some of that’s already finding its way into products. There’s a company called Akkio that has a no-code platform. It’s primarily a drag-and-drop interface. And it works on tabular data primarily. But you drag in a spreadsheet and drop it into their interface, and then you click a bunch of buttons on what you want to train the program on—what you want the program to predict. These are predictive models. And then you hit a button, and it trains the program. And then you feed it your untested data, and it will make the predictions on that data. It’s used for a lot of fascinating things. Right now, it’s being used in the political sphere to predict who in a list of 20,000 contacts will donate to a particular party or campaign. So it’s really changing political fundraising.
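A no-code tool of this kind is automating a standard train-then-predict workflow. The sketch below is a deliberately toy, standard-library-only stand-in (invented column names and data, a single learned threshold in place of a real model) just to show the shape of that workflow:

```python
# Standard-library-only sketch of the train-then-predict workflow a
# no-code tool automates. "Training" here just learns a threshold on a
# single invented feature; a real product fits a proper model.

def train(rows):
    """Pick the past-donation count that best separates donors from non-donors."""
    best_threshold, best_correct = 0, -1
    for t in {r["past_donations"] for r in rows}:
        correct = sum((r["past_donations"] >= t) == r["donated"] for r in rows)
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

def predict(threshold, contact):
    return contact["past_donations"] >= threshold

# Invented labeled data standing in for the uploaded spreadsheet.
labeled = [
    {"past_donations": 0, "donated": False},
    {"past_donations": 1, "donated": False},
    {"past_donations": 3, "donated": True},
    {"past_donations": 5, "donated": True},
]
model = train(labeled)
will_donate = predict(model, {"past_donations": 4})
```

Everything between "hit a button" and "make the predictions" is some version of `train` and `predict`; the product's value is hiding the fitting step behind the interface.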
And Akkio has just come out with a new feature which I think you’ll start seeing in a lot of places. One of the issues in working with data is cleaning it up. Getting rid of outliers. Rationalizing the language. You may have a column where some things are written out in words and other things are numbers, and you need to get them all into numbers. Things like that. That kind of clean-up is extremely time-consuming and tedious. And Akkio has tapped into a large language model (it’s not their model), and you just write in natural language into the interface what you want done. Say you want to combine three columns that give the day, the month, and the year into a single number so that the computer can deal with it more easily. You can just tell the interface by writing in simple English what you want. And you can be fairly imprecise in your English, and the large language model will understand what you mean. So it’s an example of how this new ability is being implemented in products. I think it’s pretty amazing. And I think you’ll see that spread very quickly. I mean, this is all a long way from my talking to a computer and having it create a complicated program for me. These are still very basic.
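The clean-up step described here, merging separate date columns into one machine-friendly value and rationalizing a text column into numbers, can be sketched without any large language model at all; the column names and data below are invented for illustration:

```python
from datetime import date

# Sketch of the clean-up step: merge separate day / month / year columns
# into one machine-friendly value and coerce a text column to a number.
# Column names and rows are invented for illustration.

rows = [
    {"day": 5, "month": 6, "year": 2023, "amount": "120"},
    {"day": 17, "month": 1, "year": 2022, "amount": "45"},
]

def clean(row):
    d = date(row["year"], row["month"], row["day"])
    return {
        "date": d.isoformat(),          # single human-readable column
        "date_ordinal": d.toordinal(),  # single number a model can use
        "amount": float(row["amount"]), # text rationalized to numeric
    }

cleaned = [clean(r) for r in rows]
```

What the natural-language interface adds is generating a transform like `clean` from a sentence, rather than making the user write it.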
Genkina: Yeah. So you mention in your article that this isn’t actually about to put coders out of a job, right? So is it just because you think it’s not there yet, that the technology’s not at that level? Or is that fundamentally not what’s happening, in your view?
Smith: Well, the technology certainly isn’t there yet. It’s going to be a very long time before— well, I don’t know that it’s going to be a long time because things have moved so quickly. But it’ll be a while yet before you’ll be able to speak to a computer and have it write complex programs. But what will happen, and I think fairly quickly, is that, with things like AlphaCode in the background and things like T coder that interact with the user, people won’t need to learn computer programming languages any longer in order to code. They will need to understand the structure of a program, the logic and syntax, and they’ll have to understand the nuances and ambiguities in natural language. I mean, if you turned it over to someone who wasn’t aware of any of those things, I think it would not be very effective.
But I can see that computer science students will learn C++ and Python because you learn the basics in any field that you’re going into. But the actual application will be through natural language working with one of these interactive systems. And what that allows is just a much broader population to get involved in programming and developing software. And we really need that because there is a real shortage of capable computer programmers and coders out there. The world is going through this digital transformation. Every process is being turned into software. And there just aren’t enough people to do that. That’s what’s holding that transformation back. So as you broaden the population of people that can do that, more software will be developed in a shorter period of time. I think it’s very exciting.
Genkina: So maybe we can get into a little bit of the copyright issues surrounding this, because, for example, GitHub Copilot sometimes spits out bits of code that are found in the training data it was trained on. So there’s a pool of training data from the internet, like you mentioned in the beginning, and the output the auto-completer suggests is some combination of all the inputs, maybe put together in a creative way, but sometimes just straight copies of bits of code from the input. And some of these input bits of code have copyright licenses.
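One way such verbatim regurgitation can be detected (a sketch, not how GitHub or anyone else actually does it) is to index the training corpus by overlapping word n-grams and flag any generated snippet that contains an indexed n-gram:

```python
# Sketch of verbatim-copy detection: index training code by overlapping
# word 5-grams, then flag generated snippets that contain any indexed
# 5-gram. Real systems use more robust tokenization and hashing.

N = 5

def ngrams(text, n=N):
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def build_index(corpus):
    index = set()
    for doc in corpus:
        index |= ngrams(doc)
    return index

def contains_verbatim(generated, index):
    return bool(ngrams(generated) & index)

# Invented one-document "training corpus" for illustration.
corpus = ["def inv_sqrt ( x ) : return x ** -0.5"]
index = build_index(corpus)
```

Copilot exposes a filter along these lines that can block suggestions matching public code; the hard part in practice is scale, which is why real systems hash the n-grams rather than store them.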
Smith: Yeah. Yeah. That’s interesting. I remember when sampling started in the music industry. And I thought it would be impossible to track down the author of every bit of music that was sampled and work out some kind of a licensing deal that would compensate the original artist. But that’s happened, and people are very quick to spot samples that use their original music if they haven’t been compensated. In this realm, to me, it’s a little different. It’ll be interesting to see what happens. Because the human mind ingests data and then produces theoretically original thought, but that thought is really just a jumble of everything that you’ve ingested. Yeah. I had this conversation recently about whether the human mind is really just a large language model that has trained on all of the information that it’s been exposed to.
And it seems to me that, on the one hand, it’s impossible to trace every input for any particular output as these systems get larger. And I just think it’s unreasonable to expect every piece of human creative output to be copyrighted and tracked through all of the various iterations that it goes through. I mean, you look at the history of art. Every artist in the visual arts is drawing on his predecessors and using ideas and things to create something new. I haven’t looked at any particular cases where it’s glaring that the code or the language is clearly identifiable as coming from one source. I don’t know how to put it. I think the world is getting so complex that once creative output is out there, unless it’s clearly identifiable the way a music sample is, it’s going to be impossible to credit and compensate everyone whose output became an input to that computer program.
Genkina: My next question was about who should get paid for code by these big AIs, but I guess you kind of suggested a model where all the training data get a little bit of— everyone responsible for the training data would get a little bit of royalties for every use. I guess, long term that’s probably not super viable because a few generations from now there’s going to be no one that contributed to the training data.
Smith: Yeah. But that is interesting, who owns these models that are written by a computer. It’s something I really haven’t thought about. And I don’t know if you’ll cut this out, but have you read anything about that topic? About who will own— if AlphaCode becomes a product, DeepMind’s AlphaCode, and it writes a program that becomes extremely useful and is used around the world and generates potentially a lot of revenue, who owns that model? I don’t know.
Genkina: So what is your expectation for what will happen in this arena in the coming 5 to 10 years or so?
Smith: Well, in terms of auto-generated code, I think it’s going to progress very quickly. I mean, transformers came out in 2017, I think. And two years later, you have AlphaCode writing complete programs from natural language. And now you have T coder in the same year with a system that refines the natural language intent. I think in five years, yeah, we’ll be able to write basic software programs from speech. It’ll take much longer to write something like GPT-3. That’s a very, very complicated program. But the more that these algorithms are commoditized, the more I think combining them will be easier. So in 10 years, yeah, I think it’s possible that you’ll be able to talk to a computer (and again, not an untrained person, but a person that understands how programming works) and program a fairly complex program. This cycle kind of builds on itself, because the more people that can participate in development, that on the one hand creates more software, but it also frees up sort of the high-level data scientists to develop novel algorithms and new systems. And so I see it as accelerating, and it’s an exciting time. [music]
Genkina: Today on Fixing the Future, we spoke to Craig Smith about AI-generated code. I’m Dina Genkina for IEEE Spectrum and I hope you’ll join us next time on Fixing the Future.
The IEEE Board of Directors shapes the future direction of IEEE and is committed to ensuring IEEE remains a strong and vibrant organization—serving the needs of its members and the engineering and technology community worldwide—while fulfilling the IEEE mission of advancing technology for the benefit of humanity.
This article features IEEE Board of Directors members Jill Gostin, Stephanie White, and Yu Yuan.
Director and Vice President, Member and Geographic Activities
Jill Gostin, an IEEE senior member, is director and vice president of IEEE Member and Geographic Activities. Nathan Gostin
Gostin is a dedicated mathematician and community leader whose work centers around systems engineering, algorithm assessment, and software testing and evaluation, specifically related to sensor systems. She is a principal research scientist in applied research programs pertaining to sensors and electromagnetic applications.
Her current work focuses on open architecture sensor systems, which allow systems to reuse existing technologies, providing the flexibility to quickly refresh an existing component of the system or swap in new technologies. Gostin uses a model-based systems engineering approach to develop the open architecture and the associated standard. By providing a standard to define the interfaces between components of the system, modifications and innovations can be quickly and easily incorporated.
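The interface idea can be illustrated with a minimal sketch; the class and method names below are invented, not drawn from any actual sensor standard. Because the rest of the system codes against the agreed interface, implementations can be swapped without touching anything else:

```python
from abc import ABC, abstractmethod

# Minimal sketch of an open-architecture component interface. The class
# and method names are invented, not from any actual sensor standard.

class SensorComponent(ABC):
    @abstractmethod
    def measure(self) -> float:
        """Return one reading in the standard's agreed units (here, km)."""

class LegacyRadar(SensorComponent):
    def measure(self) -> float:
        return 42.0

class NextGenRadar(SensorComponent):
    def measure(self) -> float:
        return 41.7

def log_reading(sensor: SensorComponent) -> str:
    # The rest of the system depends only on the interface, so a
    # component refresh never touches this function.
    return f"range_km={sensor.measure()}"
```

Swapping `LegacyRadar` for `NextGenRadar` is the code-level analogue of the quick technology refresh the open architecture is meant to enable.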
Gostin, an active IEEE volunteer, has served on the IEEE Future Directions Committee, on the Board of Governors of the IEEE Computer Society and the IEEE Aerospace and Electronic Systems Society, and as vice president of finance for the IEEE Sensors Council, among many other IEEE roles. She believes in leading by example and says it is important to help others in advancing their career paths. Through the IEEE Computer Society, she was a representative to IEEE’s Women in Engineering program, which works to increase the representation of women in engineering disciplines. Gostin has also served as a STEM mentor to middle and high school math and science classes and as a panelist for discussions on women in technology.
She has authored or co-authored multiple technical papers and has received multiple technical and service awards. In 2016, she was named Georgia’s Women in Technology Woman of the Year for mid-size businesses, an award recognizing women technology executives for their accomplishments as leaders in business, as visionaries of technology, and as people who make a difference in their community.
Director, Division X
IEEE Life Senior Member Stephanie White is director of IEEE Division X. William Pallack
White is an educator, technical leader, corporate manager, and entrepreneur. She is a pioneer in software and system requirements engineering—making significant and lasting contributions in the behavior modeling, requirements semantics, and requirements analysis fields, resulting in less costly and safer cyber-physical systems.
As a principal engineer of requirements and architecture, White was responsible for detecting errors in requirements on eight multi-million-dollar aircraft and space programs, producing higher quality specifications with lower cost and risk. Recognizing the need for verifiable methods that practicing engineers can use, she created scalable and practical modeling and analytic techniques based on formal methods. Her methods were used to ensure the correctness of aircraft and space programs.
Addressing the need for research in engineering systems where computer systems have an essential role, she founded the IEEE Technical Committee on Engineering of Computer-Based Systems in 1990. This area of research is now known as cyber-physical systems engineering.
White, a lifelong IEEE volunteer, has held many positions, including president of the IEEE Systems Council and vice president of technical activities for the IEEE Computer Society (also serving on its board of governors from 2006 to 2008). She wants to use her current position within IEEE to improve the return on members’ investment, broaden IEEE’s membership base, and advance technology for humanity.
Currently a senior professor emeritus, White has taught systems science, systems engineering, and computer science. She still participates in dissertation committees. White received the 2013 IEEE-USA Divisional Professional Leadership Award for inspiring women to study and work in the STEM fields and for leadership in diversity initiatives.
Director and President, IEEE Standards Association
An IEEE Senior Member, Yu Yuan is director and president of the IEEE Standards Association. Yu Yuan
Yuan is a scientist, inventor, and entrepreneur. His work in consumer technology, multimedia, virtual reality, the Internet of Things, and digital transformation has significantly impacted industry and society. His current work focuses on developing technologies, infrastructures, ecosystems, and resources needed for massively multiplayer ultra-realistic virtual experiences.
Yuan also works on building an international metaverse incubation and collaboration platform, providing access to knowledge and resources for metaverse development. His efforts have empowered a new generation of innovators and creators to push the boundaries of digital experiences—enabling a new era of immersive, interconnected, and intelligent technologies.
Yuan has been an IEEE volunteer for many years. His service in IEEE standards activities at different levels (working groups, standards committees, and higher-level governance) has been widely appreciated by standards developers, individual members, and entity members around the world. As the current president of the IEEE Standards Association (IEEE SA), he plays a pivotal role in shaping global standards, fostering collaboration, and driving innovation in the technology sector. He believes that IEEE SA has the opportunity for significant growth and to become a stronger global influence. He is committed to encouraging, supporting, and protecting innovation in standards and the standards development process.

Yuan is also a member of the IEEE Consumer Technology Society and a member-at-large on the society’s board of governors. From 2015 to 2020, he led the IEEE Consumer Technology Society Standards Committee to grow the society’s standards activities from zero to a top level among IEEE technical societies and councils. The committee received the 2019 IEEE SA Standards Committee Award for exceptional leadership in entity-based standards development and industry engagement in consumer technology.
This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
On 3 July 1996, Earth was facing all but absolute destruction from an alien force hovering above three of the world’s biggest cities. Hope of humanity’s survival dwindled after brute force failed to thwart the attackers. But a small piece of malicious computer code changed the course of history when it was uploaded to the aliens’ computer system the next day. The malware—spoiler alert—disabled the invading ships’ defenses and ultimately saved humanity.
At least, that’s what happened in the wildly speculative 1996 sci-fi film Independence Day.
Yet, for all the reality-defying situations the blockbuster depicted, the prospective reality of a malware attack wreaking havoc on a future crewed spacecraft mission has digital-security experts very concerned. Gregory Falco, an assistant professor of civil and systems engineering at Johns Hopkins, explored the topic in a recent paper presented at the spring 2023 IEEE Aerospace Conference. Inspiration for the study, he says, came from his discovering a relative lack of cybersecurity features in the Artemis crew’s next-generation spacesuits.
“The reality was that there was zero specification when they had their call for proposals [for new spacesuit designs] that had anything to do with cyber[security],” Falco says. “That was frustrating for me to see. This paper was not supposed to be groundbreaking.... It was supposed to be kind of a call to say, ‘Hey, this is a problem.’ ”
As human spaceflight prepares to enter a new, modern era with NASA’s Artemis program, China’s Tiangong Space Station, and a growing number of fledgling space-tourism companies, cybersecurity is at least as much of a persistent problem up there as it is down here. Its magnitude is only heightened by the fact that maliciously driven system failures—in the cold, unforgiving vacuum of space—can escalate to life or death with just a few inopportune missteps. Apollo-era and even Space Shuttle–era approaches to cybersecurity are overdue for an update, Falco says.
When the United States and other space-faring nations, such as the then–Soviet Union, began to send humans to space in the late 1960s, there was little to fear in the way of cybersecurity risks. Not only did massively interconnected systems like the internet not yet exist, but technology aboard these craft was so bespoke that it protected itself through a “security by obscurity” approach.
This meant that the technology was so complex that it effectively kept itself safe from tampering, says James Pavur, a cybersecurity researcher and lead cybersecurity software engineer at software company Istari Global.
A consequence of this security approach is that once you do manage to enter the craft’s internal systems—whether you’re a crew member or, perhaps in years to come, a space tourist—you’ll be granted full access to the online systems with essentially zero questions asked.
This security approach is not only insecure, says Pavur, but it is also vastly different from the zero-trust approach applied to many terrestrial technologies.
“Cybersecurity has been something that kind of stops on the ground,” he says. “Like maybe you might think about securing the communications link to your satellite, but the stuff in space all trusts the rest of stuff in space.”
NASA is no stranger to cybersecurity attacks on its terrestrial systems—nearly 2,000 “cyber incidents” were recorded in 2020, according to a 2021 NASA report. But the types of threats that could target crewed spacecraft missions would be much different from phishing emails, says Falco.
Cyberthreats to crewed spacecraft may focus on proximity approaches, such as installing malware or ransomware into a craft’s internal computer. In his paper, Falco and coauthor Nathaniel Gordon lay out four ways that crew members, including space tourists, may be used as part of these threats: crew as the attacker, crew as an attack vector, crew as collateral damage, and crew as the target.
“It’s almost akin to medical-device security or things of that nature rather than opening email,” Falco says. “You don’t have the same kind of threats as you would have for an IT network.”
Among a host of troubling scenarios, proprietary secrets—both private and national—could be stolen, the crew could be put at risk as part of a ransomware attack, or crew members could even be deliberately targeted through an attack on safety-critical systems like air filters.
All of these types of attacks have taken place on Earth, say Falco and Gordon in their paper. But the high level of publicity of the work, as well as the integrated nature of spacecraft—the close physical and network proximity of systems within a mission—could make cyberattacks on spacecraft particularly appealing. Again heightening the stakes, the harsh environment of outer (or lunar or planetary) space renders malicious cyberthreats that much more perilous for crew members.
To date, deadly threats like these have thankfully not affected human spaceflight. Though if science fiction provides any over-the-horizon warning system for the shape of threats to come, consider sci-fi classics like 2001: A Space Odyssey or Alien—in which a nonhuman crew member is able to control the craft’s computers in order to change the ship’s route and even prevent a crew member from leaving the ship in an escape pod.
Right now, say Falco and Gordon, there is little to keep a bad actor or a manipulated crew member onboard a spacecraft from doing something similar. Luckily, the growing presence of humans in space also provides an opportunity to create meaningful hardware, software, and policy changes surrounding the cybersecurity of these missions.
Saadia Pekkanen is the founding director of the University of Washington’s Space Law, Data and Policy Program. In order to create a fertile environment for these innovations, she says, it will be important for space-dominant countries like the United States and China to create new policies and legislation to dictate how to address their own nations’ cybersecurity risk.
While these changes won’t directly affect international policy, decisions made by these countries could steer how other countries address these problems as well.
“We’re hopeful that there continues to be dialogue at the international level, but a lot of the regulatory action is actually going to come, we think, at the national level,” Pekkanen says.
Hope for a solution, Pavur says, could begin with the fact that another sector in aerospace—the satellite industry—has made recent strides toward more robust cybersecurity of its telemetry and communications (as outlined in a 2019 review paper published in the journal IEEE Aerospace and Electronic Systems).
Falco points toward relevant terrestrial cybersecurity standards—including the zero-trust protocol—that require users to prove their identity to access the systems that keep safety-critical operations separate from all other onboard tasks.
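The zero-trust idea can be sketched in a few lines: every command aimed at a safety-critical subsystem must carry a fresh, per-command proof of identity, while routine tasks run on a separate, less restricted path. The subsystem names, shared-key scheme, and all function names below are hypothetical illustrations, not any agency's actual implementation.

```python
import hmac
import hashlib

# Hypothetical zero-trust sketch: nothing is trusted merely because it
# originates inside the craft's network. Safety-critical subsystems
# demand a valid signature over the exact command being issued.
SAFETY_CRITICAL = {"life_support", "air_filter", "propulsion"}

def sign_command(key: bytes, user: str, subsystem: str, action: str) -> str:
    msg = f"{user}:{subsystem}:{action}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def authorize(key: bytes, user: str, subsystem: str, action: str, tag: str) -> bool:
    if subsystem not in SAFETY_CRITICAL:
        return True  # non-critical tasks run on a separate, less strict path
    # A tag signed for a different action (or by a different user) fails here
    expected = sign_command(key, user, subsystem, action)
    return hmac.compare_digest(expected, tag)

key = b"shared-secret-provisioned-before-launch"
tag = sign_command(key, "astronaut1", "air_filter", "set_rate:nominal")
assert authorize(key, "astronaut1", "air_filter", "set_rate:nominal", tag)
assert not authorize(key, "astronaut1", "air_filter", "set_rate:off", tag)
```

The point of the sketch is the separation: a stolen or replayed signature for one action cannot authorize a different action on a safety-critical system.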
Creating a security environment that’s more supportive of ethical hackers—the kind of hackers who break things to find security flaws in order to fix them instead of exploit them—would provide another crucial step forward, Pavur says. However, he adds, this might be easier said than done.
“That’s very uncomfortable for the aerospace industry because it’s just not really how they historically thought about threat and risk management,” he says. “But I think it can be really transformative for companies and governments that are willing to take that risk.”
Falco also notes that space tourism flights could benefit from a spacefaring equivalent of the TSA—to ensure that malware isn’t being smuggled onboard in a passenger’s digital devices. But perhaps most important, instead of “cutting and pasting” imperfect terrestrial solutions into space, Falco says that now is the time to reinvent how the world secures critical cyber infrastructure in Earth orbit and beyond.
“We should use this opportunity to come up with new or different paradigms for how we handle security of physical systems,” he says. “It’s a white space. Taking things that are half-assed and don’t work perfectly to begin with and popping them into this domain is not going to really serve anyone the way we need.”
Researchers from NASA, MIT, and other institutions have achieved the fastest space-to-ground laser-communication link yet, doubling the record they set last year. With data rates of 200 gigabits per second, a satellite could transmit more than 2 terabytes of data—roughly as much as 1,000 high-definition movies—in a single 5-minute pass over a ground station.
“The implications are far-reaching because, put simply, more data means more discoveries,” says Jason Mitchell, an aerospace engineer at NASA’s Space Communications and Navigation program.
The new communications link was made possible with the TeraByte InfraRed Delivery (TBIRD) system orbiting about 530 kilometers above Earth’s surface. Launched into space last May, TBIRD achieved downlink rates of up to 100 Gb/s with a ground-based receiver in California by last June. This was 100 times as fast as the quickest Internet speeds in most cities, and more than 1,000 times as fast as radio links traditionally used for communications with satellites.
The fastest data networks on Earth typically rely on laser communications over fiber optics. However, a high-speed laser-based Internet does not exist yet for satellites. Instead, space agencies and commercial satellite operators most commonly use radio to communicate with objects in space. The infrared light that laser communications can employ has a much higher frequency than radio waves, enabling much higher data rates.
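The frequency gap can be made concrete with a quick calculation. The 1550-nanometer wavelength below is a common choice in fiber telecommunications and is assumed here for illustration; the article does not state TBIRD's exact operating wavelength.

```python
# Carrier frequency f = c / wavelength. More cycles per second leaves
# more room for modulated bandwidth, which is why infrared carriers can
# support far higher data rates than radio.
C = 299_792_458  # speed of light, m/s

ir_wavelength = 1550e-9       # meters; typical telecom infrared (assumed)
ir_freq = C / ir_wavelength   # roughly 193 THz

ka_band_freq = 30e9           # 30 GHz, a typical satellite radio band

ratio = ir_freq / ka_band_freq
print(f"infrared carrier: {ir_freq / 1e12:.0f} THz")
print(f"Ka-band carrier:  {ka_band_freq / 1e9:.0f} GHz")
print(f"frequency ratio:  ~{ratio:,.0f}x")
```

Even against a high-frequency radio band, the infrared carrier sits thousands of times higher in frequency.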
“There are satellites currently in orbit limited by the amount of data they are able to downlink, and this trend will only increase as more capable satellites are launched,” says Kat Riesing, an aerospace engineer and a staff member at MIT Lincoln Laboratory on the TBIRD team. “Even a hyperspectral imager—HISUI on the International Space Station—has to send data back to Earth via storage drives on cargo ships due to limitations on downlink rates. TBIRD is a big enabler for missions that collect important data on Earth’s climate and resources, as well as astrophysics applications such as black hole imaging.”
MIT Lincoln Laboratory conceived TBIRD in 2014 as a low-cost, high-speed way to access data on spacecraft. A key way it reduced expenses was by using commercial, off-the-shelf components originally developed for terrestrial use. These include high-rate optical modems developed for fiber telecommunications and high-speed large-volume storage to hold data, Riesing says.
Located onboard NASA’s Pathfinder Technology Demonstrator 3 (PTD-3) satellite, TBIRD was carried into orbit on SpaceX’s Transporter-5 rideshare mission from Cape Canaveral Space Force Station in Florida on 25 May 2022. The PTD-3 satellite is a roughly 12-kilogram CubeSat about the size of two stacked cereal boxes, and its TBIRD payload is no larger than the average tissue box. “Industry’s drive to small, low-power, high-data-rate optical transceivers enabled us to achieve a compact form factor suitable even for small satellites,” Mitchell says.
“There are satellites currently in orbit limited by the amount of data they are able to downlink, and this trend will only increase as more-capable satellites are launched.” —Kat Riesing, aerospace engineer, MIT Lincoln Laboratory
The development of TBIRD faced a number of challenges. To start with, terrestrial components are not designed to survive the rigors of launching to and operating in space. For example, during a thermal test simulating the extreme temperatures the devices might face in space, the fibers in the optical signal amplifier melted.
The problem was that when the amplifier was used as originally intended, the surrounding atmosphere helped cool it through convection. When tested in a vacuum, simulating space, the heat that the amplifier generated was trapped. To solve the issue, the researchers worked with the amplifier’s vendor to modify it so that it released heat through conduction instead.
In addition, laser beams from space to Earth can experience distortion from atmospheric effects and weather conditions. This can cause power loss, and in turn data loss, for the beams.
To compensate, the scientists developed their own version of automatic repeat request (ARQ), a protocol for controlling errors in data transmission over a communications link. In this arrangement, the ground terminal uses a low-rate uplink signal to let the satellite know that it has to retransmit any block of data, or frame, that has been lost or damaged. The new protocol lets the ground station tell the satellite which frames it received correctly, so the satellite knows which ones to retransmit and not waste time sending data it doesn’t have to.
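The retransmission scheme described above can be sketched as a small simulation. The frame count, loss model, and round structure below are illustrative assumptions, not TBIRD's actual protocol:

```python
import random

# Selective-retransmission sketch: the ground station acknowledges the
# frames it received intact over a low-rate uplink, and the satellite
# resends only what is still missing.
def downlink(frames, loss_rate, rng):
    """Simulate one transmission round: each frame is independently lost."""
    return {i for i in frames if rng.random() > loss_rate}

def transfer(num_frames, loss_rate=0.2, seed=42):
    rng = random.Random(seed)
    pending = set(range(num_frames))   # frames not yet acknowledged
    received = set()
    rounds = 0
    while pending:
        rounds += 1
        arrived = downlink(pending, loss_rate, rng)
        received |= arrived
        # Uplink ACK: the satellite learns what arrived and drops it
        # from its retransmission queue, wasting no time on data the
        # ground already has.
        pending -= arrived
    return received, rounds

received, rounds = transfer(1000)
print(f"all {len(received)} frames delivered in {rounds} rounds")
```

With a 20 percent loss rate, the backlog shrinks geometrically, so even a lossy pass converges in a handful of rounds.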
Another challenge the scientists faced stemmed from the fact that lasers form much narrower beams than radio transmissions do. For successful data transmission, these beams must be aimed precisely at their receivers. This is often accomplished by mounting the laser on a gimbal. Due to TBIRD’s small size, however, it instead maneuvers the CubeSat carrying it to point at the ground, using any error signals it receives to correct the satellite’s orientation. This gimbal-less strategy also helped further shrink TBIRD, making it cheaper to launch.
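The gimbal-less approach amounts to a feedback loop: the received error signal drives a body-pointing correction each step. The gain and convergence numbers below are invented for illustration and bear no relation to PTD-3's real attitude-control dynamics:

```python
# Toy proportional pointing loop: the spacecraft body itself is steered
# to null the angular offset between the laser axis and the ground
# station, using the error signal fed back from the receiver.
def point(initial_error_deg, gain=0.5, tolerance=0.001, max_steps=100):
    error = initial_error_deg
    for step in range(1, max_steps + 1):
        correction = gain * error  # error signal drives a body-rate command
        error -= correction        # attitude slews toward the target
        if abs(error) < tolerance:
            return step, error
    return max_steps, error

steps, residual = point(initial_error_deg=2.0)
print(f"converged in {steps} steps, residual error {residual:.5f} deg")
```

With a proportional gain of 0.5 the error halves each step, so a 2-degree offset shrinks below a thousandth of a degree in about a dozen iterations.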
TBIRD’s architecture can support multiple channels through wavelength separation to enable higher data rates, Riesing says. This is how TBIRD accomplished a 200-Gb/s downlink on 28 April—by using two 100-Gb/s channels, she explains. “This can scale further on a future mission if the link is designed to support it,” Riesing notes.
“Put simply, more data means more discoveries.” —Jason Mitchell, aerospace engineer, NASA
The research team’s next step is to explore where to apply this technology in upcoming missions. “This technology is particularly useful for science missions where collecting a lot of data can provide significant benefits,” Riesing says. “One mission concept that is enabled by this is the Event Horizon Explorer mission, which will extend the exciting work of the Event Horizon Telescope in imaging black holes with even higher resolution.”
The scientists also want to explore how to extend this technology to different scenarios, such as geostationary orbit, Riesing says. Moreover, Mitchell says, they are looking at ways to push TBIRD’s capabilities as far away as the moon, in order to support future missions there. The rates under consideration are in the 1- to 5-Gb/s range, which “may not seem like much of an improvement, but remember the moon is roughly 400,000 km away from Earth, which is quite a long distance to cover,” Mitchell says.
The new technology may also find use in high-speed atmospheric data links on the ground. “For example, from building to building, or across inhospitable terrain, such as from mountaintop to mountaintop, where the cost of laying fiber systems could be exorbitant,” Riesing says.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
LATTICE is an undergrad project from Caltech that’s developing a modular robotic transportation system for the lunar surface that uses autonomous rovers to set up a sort of cable car system to haul things like ice out of deep craters to someplace more useful. The prototype is fully functional, and pretty cool to watch in action.
We’re told that the team will be targeting a full system demonstration deploying across a “crater” on Earth this time next year. As to what those quotes around “crater” mean, your guess is as good as mine.
[ Caltech ]
Happy World Cocktail Day from Flexiv!
[ Flexiv ]
Here’s what Optimus has been up to lately.
As per usual, the robot is moderately interesting, but it’s probably best to mostly just ignore Musk.
[ Tesla ]
The INSECT tarsus-inspired compliant robotic grippER with soft adhesive pads (INSECTER) uses only one single electric actuator with a cable-driven mechanism. It can be easily controlled to perform a gripping motion akin to an insect tarsus (i.e., wrapping around the object) for handling various objects.
[ Paper ]
Congratulations to ANYbotics on their $50 million Series B!
And from 10 years ago (!) at ICRA 2013, here is a video I took of StarlETH, one of ANYmal’s ancestors.
[ ANYbotics ]
In this video we present results from the recent field-testing campaign of the DigiForest project at Evo, Finland. The DigiForest project started in September 2022 and runs up to February 2026. It brings together diverse partners working on aerial robots, walking robots, autonomous lightweight harvesters, as well as forestry decision makers and commercial companies with the goal to create a full data pipeline for digitized forestry.
[ DigiForest ]
The Robotics and Perception Group at UZH will be presenting some new work on agile autonomous high-speed flight through cluttered environments at ICRA 2023.
[ Paper ]
Robots who lift together, stay together.
[ Sanctuary AI ]
The next CYBATHLON competition, to be held in 2024, breaks down barriers between the public, people with disabilities, researchers, and technology developers. The initiative promotes the inclusion and participation of people with disabilities and improves assistance systems for everyday use by end users.
[ Cybathlon ]
Russia’s invasion of Ukraine in 2022 put Ukrainian communications in a literal jam: Just before the invasion, Russian hackers knocked out Viasat satellite ground receivers across Europe. Then entrepreneur Elon Musk swept in to offer access to Starlink, SpaceX’s growing network of low Earth orbit (LEO) communications satellites. Musk soon reported that Starlink was suffering from jamming attacks and software countermeasures.
In March, the U.S. Department of Defense (DOD) concluded that Russia was still trying to jam Starlink, according to documents leaked by U.S. National Guard airman Ryan Teixeira and seen by the Washington Post. Ukrainian troops have likewise blamed problems with Starlink on Russian jamming, the website Defense One reports. If Russia is jamming a LEO constellation, it would be a new layer in the silent war in space-ground communications.
“There is really not a lot of information out there on this,” says Brian Weeden, the director of program planning for the Secure World Foundation, a nongovernmental organization that studies space governance. But, Weeden adds, “my sense is that it’s much harder to jam or interfere with Starlink [than with GPS satellites].”
Regardless of their altitude or size, communications satellites transmit more power and therefore require more power to jam than navigational satellites. However, compared with large geostationary satellites, LEO satellites—which orbit Earth at an altitude of 2,000 kilometers or lower—have frequent handovers that “introduce delays and opens up more surface for interference,” says Mark Manulis, a professor of privacy and applied cryptography at the University of the Federal Armed Forces’ Cyber Defense Research Institute (CODE) in Munich, Germany.
Security and communications researchers are working on defenses and countermeasures, mostly behind closed doors, but it is possible to infer from a few publications and open-source research how unprepared many LEO satellites are for direct attacks and some of the defenses that future LEO satellites may need.
For years, both private companies and government agencies have been planning LEO constellations, each numbering thousands of satellites. The DOD, for example, has been designing its own LEO satellite network to supplement its more traditional geostationary constellations for more than a decade and has already begun issuing contracts for the constellation’s construction. University research groups are also launching tiny, standardized cube satellites (CubeSats) into LEO for research and demonstration purposes. This proliferation of satellite constellations coincides with the emergence of off-the-shelf components and software-defined radio—both of which make the satellites more affordable, but perhaps less secure.
Russia’s defense agencies commissioned a system called Tobol that’s designed to counter jammers that might interfere with their own satellites, reported journalist and author Bart Hendrickx. That implies that Russia either can transmit jamming signals up to satellites, or suspects that adversaries can.
Many of the agencies and organizations launching the latest generation of low-cost satellites haven’t addressed the biggest security issues they face, researchers wrote in one review of LEO security in 2022. That may be because one of the temptations of LEO is that relatively cheap new hardware can handle smaller jobs.
“Satellites are becoming smaller. They are very purpose-specific,” says Ijaz Ahmad, a telecoms security researcher at the VTT Technical Research Centre in Espoo, Finland. “They have less resources for computing, processing, and also memory.” Less computing power means fewer encryption capabilities, as well as less ability to detect and respond to jamming or other active interference.
The rise of software-defined radio (SDR) has also made it easier to get hardware to accomplish new things, including allowing small satellites to cover many frequency bands. “When you make it programmable, you provide that hardware with some sort of remote connectivity so you can program it. But if the security side is overlooked, it will have severe consequences,” Ahmad says.
“At the moment there are no good standards focused on communications for LEO satellites.”
—Mark Manulis, professor of privacy and applied cryptography, University of the Federal Armed Forces
Among those consequences are organized criminal groups hacking and extorting satellite operators or selling information they have captured.
One response to the risks of software-defined radio and the fact that modern low-cost satellites require firmware updates is to include some simple physical security. Starlink did not respond to requests for comments on its security, but multiple independent researchers said they doubt today’s commercial satellites match military-grade satellite security countermeasures, or even meet the same standards as terrestrial communications networks. Of course, physical security can be defeated with a physical attack, and state actors have satellites capable of changing their orbits and grappling with, and thus perhaps physically hacking, communications satellites, the Secure World Foundation stated in an April report.
Despite that vulnerability, LEO satellites do bring certain advantages in a conflict: There are more of them, and they cost less per satellite. Attacking or destroying a satellite “might have been useful against an adversary who only has a few high-value satellites, but if the adversary has hundreds or thousands, then it’s a lot less of an impact,” Weeden says. LEO also offers a new option: sending a message to multiple satellites for later confirmation. That wasn’t possible when only a handful of GEO satellites covered Earth, but it is a way for cooperating transmitters and receivers to ensure that a message gets through intact. According to a 2021 talk by Vijitha Weerackody, a communications engineer at Johns Hopkins University, as few as three LEO satellites may be enough for such cooperation.
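The multi-satellite confirmation idea can be sketched as a majority vote across relay paths: if most copies of a message agree, a single jammed or tampered path cannot alter the result. The message and tampering scenario below are hypothetical illustrations:

```python
from collections import Counter

# Cooperative-diversity sketch: the same message is relayed through
# several LEO satellites, and the receiver keeps the version a majority
# of paths agree on.
def receive_via_constellation(copies):
    """copies: the message bytes as delivered by each satellite path."""
    tally = Counter(copies)
    winner, count = tally.most_common(1)[0]
    if count > len(copies) // 2:
        return winner
    raise ValueError("no majority -- message cannot be confirmed")

original = b"ADJUST ORBIT +2KM"
paths = [original, b"ADJUST ORBIT -9KM", original]  # one path tampered
assert receive_via_constellation(paths) == original
```

Three paths is the minimum for this kind of vote, which matches Weerackody's observation that as few as three cooperating LEO satellites may suffice.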
Even working together, future LEO constellation designers may need to respond with improved antennas, radio strategies that include spread spectrum modulation, and both temporal and transform-domain adaptive filtering. These strategies come at a cost to data transmission and complexity. But such measures may still be defeated by a strong enough signal that covers the satellite’s entire bandwidth and saturates its electronics.
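Of these strategies, spread-spectrum modulation is the easiest to illustrate: each data bit is spread over many pseudorandom "chips," and a jammer that corrupts only a minority of chips loses the despreading vote. The chip count and jammer model below are illustrative assumptions, a minimal sketch rather than any operator's actual waveform:

```python
import random

# Minimal direct-sequence spread-spectrum demo. Transmitter XORs each
# bit against a shared pseudorandom chip sequence; receiver despreads
# by majority vote over each bit's chips.
CHIPS_PER_BIT = 63

def spread(bits, pn):
    return [b ^ c for b in bits for c in pn]

def despread(chips, pn):
    bits = []
    for i in range(0, len(chips), CHIPS_PER_BIT):
        block = chips[i:i + CHIPS_PER_BIT]
        # Each chip votes for the original bit value; corrupted chips
        # vote wrong, but a minority of wrong votes is outvoted.
        votes = sum(ch ^ c for ch, c in zip(block, pn))
        bits.append(1 if votes > CHIPS_PER_BIT // 2 else 0)
    return bits

rng = random.Random(7)
pn = [rng.randint(0, 1) for _ in range(CHIPS_PER_BIT)]
data = [1, 0, 1, 1, 0, 0, 1, 0]

tx = spread(data, pn)
# Jammer randomly flips 20 percent of the chips on the channel
rx = [c ^ 1 if rng.random() < 0.2 else c for c in tx]

assert despread(rx, pn) == data  # the majority vote recovers the data
```

The cost is exactly the trade-off the article notes: 63 chips per bit means 63 times the transmitted symbols for the same payload, buying jamming resistance with throughput.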
“There’s a need to introduce a strong cryptographic layer,” says Manulis. “At the moment there are no good standards focused on communications for LEO satellites. Governments should push for standards in that area relying on cryptography.” The U.S. National Institute of Standards and Technology does have draft guidelines for commercial satellite cybersecurity that satellite operator OneWeb took into account when designing its LEO constellation, says OneWeb principal cloud-security architect Wendy Ng: “Hats off to them, they do a lot of work speaking to different vendors and organizations to make sure they’re doing the right thing.”
OneWeb uses encryption in its control channels, something a surprising number of satellite operators fail to do, says Johannes Willbold, a doctoral student at Ruhr University, in Bochum, Germany. Willbold is presenting his analysis of three research satellites’ security on 22 May 2023 at the IEEE Symposium on Security and Privacy. “A lot of satellites had straight-up no security measures to protect access in the first place,” he says.
Securing the growing constellations of LEO satellites matters to troops in trenches, investors in any space endeavor, anyone traveling into Earth orbit or beyond, and everyone on Earth who uses satellites to navigate or communicate. “I’m hoping there will be more initiatives where we can come together and share best practices and resources,” says OneWeb’s Ng. Willbold, who cofounded an academic workshop on satellite security, is optimistic that there will be: “It’s surprising to me how many people are now in the field, and how many papers they submitted.”
Fifty years ago, on 14 May 1973, a modified Saturn V rocket launched from the Kennedy Space Center carrying Skylab, the United States’ first space station. Six years later, in the early hours of 12 July 1979, Skylab reentered Earth’s atmosphere in a fiery blaze, spreading debris across the Indian Ocean and Western Australia. More than a decade later, a rancher found this end cap from one of Skylab’s oxygen tanks in the dirt. Cattle were drinking collected rainwater from the remains of a US $2.2 billion NASA investment.
Skylab’s fate was sealed moments after lift-off when the sun shield and main solar panel were severely damaged, making it questionable whether the spacecraft could fulfill its multiple planned missions. Without the sun shield, which also protected against small meteoroid damage, the internal temperature of the module would rise to uninhabitable temperatures. The damaged solar panels could not generate enough electricity to power the space station.
Skylab’s sun shield, shown here dangling by a thin strap, was damaged during launch. NASA
Skylab launched as a single, two-story unit that combined living quarters with a workshop. It included hundreds of science experiments, a solar observatory, and even a device for taking in-flight showers. The human crew was scheduled to go up a day after the spacecraft. Within hours of the Skylab failure, NASA delayed that crewed mission, as engineers hustled to assess the damage and suggest repairs. The space agency had only a short window of opportunity to salvage the mission. As the cabin overheated, food would begin to spoil, photographic film would be damaged, and materials would begin to break down and off-gas, making the air unbreathable.
NASA engineer Jack Kinzler suggested a solar shield designed like an umbrella that could be deployed through a 20-centimeter-square port hole near the site of the damage and then opened up to provide shade. Once the proof of concept was approved, engineers raced against time to manufacture the device while the Skylab crew began training on how to make the necessary repairs.
Eleven days later, on 25 May 1973, Commander Charles “Pete” Conrad Jr., Science Pilot Joseph Kerwin (the first medical doctor in space), and Pilot Paul Weitz finally headed to the space station. After orbiting Skylab in an Apollo Command and Service Module to visualize the damage, Weitz prepared for an EVA, or extravehicular activity. While Kerwin held his legs, Weitz stood through an open hatch and attempted to free the damaged solar array by hooking it with a 3-meter pole. This didn’t work. Conrad then attempted to hard dock with Skylab, but the latches wouldn’t catch. He tried again and again and again. After eight failed attempts, the crew resorted to the backup emergency docking procedure, which they had practiced only once on Earth. It worked.
Emergency repairs to Skylab included a replacement solar parasol [left] that was deployed through an airlock [rectangular opening, right].NASA
They then deployed Kinzler’s solar parasol, and within hours the cabin temperature inside Skylab began falling to habitable levels. Two weeks later, Conrad and Kerwin performed a second EVA that removed debris from the main solar array and allowed it to open. Enough power was restored that two more Skylab missions could be completed.
Skylab 3 included Owen Garriott, the first electrical engineer in space. IEEE Spectrum interviewed him right after his mission and again in 2009. Reading his 1974 interview nearly 50 years removed from the event, I was struck by his description of his role as a scientist/observer of the sun. Running experiments on Skylab, he noted, required decision-making based on interpretation—to, say, select the appropriate instrument settings and optimum mode of operation for a given experiment. It was a nice reminder that there is a subtle art to doing great science.
On 8 February 2019, the 45th anniversary of the return of the last Skylab crew to Earth, the documentary Searching for Skylab: America’s Forgotten Triumph premiered at the U.S. Space and Rocket Center in Huntsville, Ala. Directed by Dwight Steven-Boniecki, the film makes extensive use of archival video, punctuated by interviews with astronauts, engineers, and their families. Searching for Skylab focuses on the initial launch and the scramble to save the mission, but it also highlights some of the science experiments conducted while in space.
I found the clips of middle and high school students describing their proposed Skylab experiments quite poignant. They were so hopeful and earnest, but the overheated cabin ruined a handful of the plant-based studies.
Of course, sometimes new opportunities unexpectedly present themselves. The Skylab 3 crew happened to be in place to view—and sketch—Kohoutek, or the Christmas Comet. This was the first time that humans observed a comet from space.
Skylab’s reentry in 1979 triggered a wave of memorabilia commemorating the event, including this T-shirt. Ray Dunakin
In February 1974, when the third Skylab crew powered down the space station and departed, they left with the hope that other astronauts would follow. The damage to the solar panels meant that Skylab’s orbit would eventually decay, but NASA’s initial calculations had it in space through early 1983. This would provide overlap with the startup of the new space shuttle program and possible efforts to boost Skylab’s orbit. As late as 1978, a NASA news release touted the promise of using Skylab as living and working quarters for shuttle missions or a convenient work platform for fabrication and construction of additional structures in space. But the shuttle program was delayed, and unusual solar activity affected Skylab’s solar charging. Skylab was not going to make it.
As it became clear that Skylab was going to reenter the Earth’s atmosphere, betting on the timing and location of impact became international news. NASA did its best to ensure that pieces of the 76.5-tonne structure didn’t crash into densely populated areas, by firing the booster rockets one last time to alter its final path. Although the heaviest fragments of the station fell into the Indian Ocean, debris scattered across the state of Western Australia from the coastal town of Esperance, across the Nullarbor Plain—a flat desert on the Great Australian Bight—to the town of Balladonia.
Early relic hunters scavenged the area for bits of Skylab. The largest pieces ended up in museums, including what’s now the Esperance Museum. But the debris field encompassed thousands of square kilometers of a sparsely populated region, and some items took longer to be discovered.
In the early 1990s, a stockman noticed cattle drinking at a place where no water should have been available. He went to investigate and discovered the Skylab fragment pictured at top. It was part of Skylab’s large, cylindrical oxygen tanks, which had broken into two pieces on impact. The larger piece found its way to the Esperance Museum, but the smaller piece remained undiscovered until the curious stockman uncovered it. The curved shape formed a shallow dish to collect rainwater, making it perhaps the most expensive water bowl ever.
Commemorative objects like the Skylab Protective Helmet help capture the spirit of the times.Jeffrey Hall
In the weeks leading up to Skylab’s reentry, a cottage industry of commemorative memorabilia emerged. Bob Smith, the owner of a custom silk-screening shop in Lemon Grove, Calif., got in on the action. He asked his art director, Ray Dunakin, to do something wacky with a guy wearing an old helmet and holding a steel umbrella. In an email, Dunakin told me that the resulting T-shirt became one of their most popular designs, selling thousands. Smith convinced a local TV station to send a camera crew and reporter to cover the printing process. The reporter got a human-interest story, and Smith got free advertising.
Although Dunakin had always been interested in space exploration and had followed all of the NASA launches, the Skylab T-shirt was simply a job very early in his career. He had previously done some freelance airbrush art, but working for Smith was Dunakin’s first full-time job as a graphic designer. He was shocked when one of the shirts resurfaced more than 40 years later on an online resale site, along with a hefty markup in price.
The do-it-yourself Skylab Protective Helmet promised users it would “do you absolutely no good at all!” Jeffrey Hall
Another young man who tried to cash in on the Skylab hoopla was Jeffrey Hall. At the age of 26, he founded Seat-of-the-Pants Management, which specialized in novelty gifts. In honor of Skylab’s demise, he manufactured Skylab Protective Helmets. The do-it-yourself paper hats came with the following manufacturer’s guarantee: “Should Skylab actually fall on you, your Skylab Protective Helmet will not prevent ‘splitting headaches.’ In fact, it will do you absolutely no good at all!” Hall took orders for approximately 20,000 of these at $2 apiece, but didn’t make a profit. Once Skylab crashed, a number of buyers refused to pay. Hall learned the hard lesson that he should have charged up front.
Commemorative items such as T-shirts and paper hats are often intended to be ephemeral—they exist in the moment to capture the spirit of the time. But sometimes they get stored away in basements, attics, and even museums only to emerge decades later as useful artifacts for historians to study and the public to reflect on a shared past.
Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.
An abridged version of this article appears in the May 2023 print issue as “Skylab’s Great Fall.”
On a gin-clear December day, I’m sitting under the plexiglass bubble of a radically new kind of aircraft. It’s a little past noon at the Byron Airport in northern California; in the distance, a jagged line of wind turbines atop rolling hills marks the Altamont Pass, blades spinning lazily. Above me, a cloudless blue sky beckons.
The aircraft, called BlackFly, is unlike anything else on the planet. Built by a Palo Alto, Calif., startup called Opener, it’s an electric vertical take-off and landing (eVTOL) aircraft with stubby wings fore and aft of the pilot, each with four motors and propellers. Visually, it’s as though an aerial speedster from a 1930s pulp sci-fi story has sprung from the page.
There are a couple of hundred startups designing or flying eVTOLs. But only a dozen or so are making tiny, technologically sophisticated machines whose primary purpose is to provide exhilarating but safe flying experiences to people after relatively minimal training. And in that group, Opener has jumped out to an early lead, having built dozens of aircraft at its facilities in Palo Alto and trained more than a score of people to fly them.
My own route to the cockpit of a BlackFly was relatively straightforward. I contacted the company’s CEO, Ken Karklin, in September 2022, pitched him on the idea of a story and video, and three months later I was flying one of his aircraft.
Well, sort of flying it. My brief flight was so highly automated that I was more passenger than pilot. Nevertheless, I spent about a day and a half before the flight being trained to fly the machine manually, so that I could take control if anything went wrong. For this training, I wore a virtual-reality headset and sat in a chair that tilted and gyrated to simulate flying maneuvers. To “fly” this simulation I manipulated a joystick that was identical to the one in the cockpit of a BlackFly. Opener’s chief operating officer, Kristina L. Menton, and engineer Wyatt Warner took turns patiently explaining the operations of the vehicle and giving me challenging tasks to complete, such as hovering and performing virtual landings in a vicious crosswind.
The BlackFly is entirely controlled by that joystick, which is equipped with a trigger and also topped by a thumb switch. To take off, I squeeze the trigger while simultaneously pushing forward on the switch. The machine leaps into the air with the sound of a million bees, and with a surge of giddy elation I am climbing skyward.
Much more so than an airplane or helicopter, the BlackFly taps into archetypal human yearnings for flight, the kind represented by magic carpets, the flying cars in “The Jetsons,” and even those Mountain Banshees in the movie “Avatar.” I’ve had several unusual experiences in aircraft, including flying on NASA’s zero-gravity-simulating “Vomit Comet,” and being whisked around in a BlackFly was definitely the most absorbing and delightful. Gazing out over the Altamont Pass from an altitude of about 60 meters, I had a feeling of joyous release—from Earth’s gravity and from earthly troubles.
The BlackFly is also a likely harbinger of things to come. Most of the startups developing eVTOLs are building vehicles meant to carry several passengers on commercial runs of less than 50 kilometers. Although the plan is for these to be flown by pilots initially, most of the companies anticipate a day when the flights will be completely automated. So specialized aircraft such as the BlackFly—designed to be registered and operated as “ultralight” aircraft under aviation regulations—could provide mountains of invaluable data on highly and fully automated flying and perhaps even help familiarize people with the idea of flying without a pilot. Indeed, during my flight, dozens of sensors gathered gigabytes of data, to add to the large reservoir Opener has already collected during many hundreds of test flights so far.
As of late February 2023, Opener hadn’t yet announced a retail price or an official commercial release date for the aircraft, which has been under development and testing for more than a decade. I’ll be keeping an eye out for further news of the company. Long after my flight was over I was still savoring the experience, and hoping for another one.
Special thanks to IEEE.tv for collaborating on production of this video.
If electric vertical takeoff and landing aircraft do manage to revolutionize transportation, the date of 5 October 2011 may live on in aviation lore. That was the day when a retired mechanical engineer named Marcus Leng flew a home-built eVTOL across his front yard in Warkworth, Ont., Canada, startling his wife and several of his friends.
“So, take off, flew about 6 feet above the ground, pitched the aircraft towards my wife and the two couples that were there, who were behind automobiles for protection, and decided to do a skidding stop in front of them. Nobody had an idea that this was going to be happening,” recalls Leng.
But as he looked to set his craft down, he saw a wing starting to dig into his lawn. “Uh-oh, this is not good,” he thought. “The aircraft is going to spin out of control. But what instead happened was the propulsion systems revved up and down so rapidly that as the aircraft did that skidding turn, that wing corner just dragged along my lawn exactly in the direction I was holding the aircraft, and then came to a stable landing,” says Leng. At that point, he knew that such an aircraft was viable “because to have that sort of an interference in the aircraft and for the control systems to be able to control it was truly remarkable.”
It was the second time anyone, anywhere had ever flown an eVTOL aircraft.
Today, some 350 organizations in 48 countries are designing, building, or flying eVTOLs, according to the Vertical Flight Society. These companies are fueled by more than US $7 billion and perhaps as much as $10 billion in startup funding. And yet, 11 years after Leng’s flight, no eVTOLs have been delivered to customers or are being produced at commercial scale. None have even been certified by a civil aviation authority in the West, such as the U.S. Federal Aviation Administration or the European Union Aviation Safety Agency.
But 2023 looks to be a pivotal year for eVTOLs. Several well-funded startups are expected to reach important early milestones in the certification process. And the company Leng founded, Opener, could beat all of them by making its first deliveries—which would also be the first for any maker of an eVTOL.
As of late October, the company had built at its facility in Palo Alto, Calif., roughly 70 aircraft—considerably more than are needed for simple testing and evaluation. It had flown more than 30 of them. And late in 2022, the company had begun training a group of operators on a state-of-the-art virtual-reality simulator system.
Opener’s highly unusual, single-seat flier is intended for personal use rather than transporting passengers, which makes it almost unique. Opener intends to have its aircraft classified as an “ultralight,” enabling it to bypass the rigorous certification required for commercial-transport and other aircraft types. The certification issue looms as a major unknown over the entire eVTOL enterprise, at least in the United States, because, as the blog Jetlaw.com noted last August, “the FAA has no clear timeline or direction on when it will finalize a permanent certification process for eVTOL.”
Opener’s strategy is not without risks, either. For one, there’s no guarantee that the FAA will ultimately agree that Opener’s aircraft, called BlackFly, qualifies as an ultralight. And not everyone is happy with this approach. “My concern is, these companies that are saying they can be ultralights and start flying around in public are putting at risk a $10 billion [eVTOL] industry,” says Mark Moore, founder and chief executive of Whisper Aero in Crossville, Tenn. “Because if they crash, people won’t know the difference” between the ultralights and the passenger eVTOLs, he adds. “To me, that’s unacceptable.” Previously, Moore led a team at NASA that designed a personal-use eVTOL and then served as engineering director at Uber’s Elevate initiative.
A BlackFly eVTOL took off on 1 October 2022 at the Pacific Airshow in Huntington Beach, Calif. Irfan Khan/Los Angeles Times/Getty Images
Opener’s aircraft is as singular as its business model. It’s a radically different kind of aircraft, and it sprang almost entirely from Leng’s fertile mind.
“As a kid,” he says, “I already envisioned what it would be like to have an aircraft that could seamlessly do a vertical takeoff, fly, and land again without any encumbrances whatsoever.” It was a vision that never left him, through a mechanical-engineering degree at the University of Toronto, management jobs in the aerospace industry, founding a company and making a pile of money by inventing a new kind of memory foam, and retirement in 1996 at the age of 36.
The fundamental challenge to designing a vertical-takeoff aircraft is endowing it with both vertical lift and efficient forward cruising. Most eVTOL makers achieve this by physically tilting multiple large rotors from a vertical rotation axis, for takeoff, to a horizontal one, for cruising. But the mechanism for tilting the rotors must be extremely robust, and therefore it inevitably adds substantial complexity and weight. Such tilt-rotors also entail significant compromises and trade-offs in the size of the rotors and their placement relative to the wings.
Opener’s BlackFly ingeniously avoids having to make those trade-offs and compromises. It has two wings, one in front and one behind the pilot. Affixed to each wing are four motors and rotors—and these never change their orientation relative to the wings. Nor do the wings move relative to the fuselage. Instead, the entire aircraft rotates in the air to transition between vertical and horizontal flight.
To control the aircraft, the pilot moves a joystick, and those motions are instantly translated by redundant flight-control systems into commands that alter the relative thrust among the eight motor-propellers.
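Opener has not published its control laws, but the basic idea of translating stick commands into differential thrust across fixed rotors can be sketched in a few lines. Everything below (the rotor geometry, gains, and sign conventions) is invented for illustration, not taken from Opener:

```python
import numpy as np

# Hypothetical thrust-mixing sketch for an 8-rotor tandem-wing craft.
# Rotor positions (x: fore/aft, y: left/right) in meters, 4 per wing.
ROTOR_POS = np.array([
    (+1.0, -1.5), (+1.0, -0.5), (+1.0, +0.5), (+1.0, +1.5),  # front wing
    (-1.0, -1.5), (-1.0, -0.5), (-1.0, +0.5), (-1.0, +1.5),  # rear wing
])

def mix(throttle, pitch_cmd, roll_cmd):
    """Map stick commands to per-rotor thrust fractions (0..1).

    Pitch comes from a fore/aft thrust differential, roll from a
    left/right differential; throttle sets the collective level.
    """
    x, y = ROTOR_POS[:, 0], ROTOR_POS[:, 1]
    thrust = throttle + 0.2 * pitch_cmd * x + 0.2 * roll_cmd * y
    return np.clip(thrust, 0.0, 1.0)

# Pure hover: all eight rotors share the load equally.
hover = mix(throttle=0.5, pitch_cmd=0.0, roll_cmd=0.0)

# Nose-down pitch command: front rotors throttle down, rear rotors up
# (under this sketch's sign convention).
nose_down = mix(throttle=0.5, pitch_cmd=-0.5, roll_cmd=0.0)
```

The real system layers redundant flight computers and rate feedback on top of a mixer like this; the sketch only shows how eight fixed propellers can substitute for the control surfaces and tilt mechanisms of a conventional aircraft.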
Visually, it’s an astounding aircraft, like something from a 1930s pulp sci-fi magazine. It’s also a triumph of engineering.
Leng says the journey started for him in 2008, when “I just serendipitously stumbled upon the fact that all the key technologies for making electric VTOL human flight practical were coming to a nexus.”
The journey that made Leng’s dream a reality kicked into high gear in 2014 when a chance meeting with investor Sebastian Thrun at an aviation conference led to Google cofounder Larry Page investing in Leng’s project.
Leng started in his basement in 2010, spending his own money on a mélange of home-built and commercially available components. The motors were commercial units that Leng modified himself; the motor controllers were German and off the shelf; the inertial-measurement unit was open source, based on an Arduino microcontroller. The batteries were modified model-aircraft lithium-polymer types.
“The main objective behind this was proof of concept,” he says. “I had to prove it to myself, because up until that point, they were just equations on a piece of paper. I had to get to the point where I knew that this could be practical.”
After his front-yard flight in 2011, there followed several years of refining and rebuilding all of the major components until they achieved the specifications Leng wanted. “Everything on BlackFly is from first principles,” he declares.
The motors started out generating 160 newtons (36 pounds) of static thrust. It was way too low. “I actually tried to purchase motors and motor controllers from companies that manufactured those, and I specifically asked them to customize those motors for me, by suggesting a number of changes,” he says. “I was told that, no, those changes won’t work.”
So he started designing his own brushless AC motors. “I did not want to design motors,” says Leng. “In the end, I was stunned at how much improvement we could make by just applying first principles to this motor design.”
To increase the power density, he had to address the tendency of a motor in an eVTOL to overheat at high thrust, especially during hover, when cooling airflow over the motor is minimal. He began by designing a system to force air through the motor. Then he began working on the rotor of the motor (not to be confused with the rotors that lift and propel the aircraft). This is the spinning part of a motor, typically made from a single piece of electrical steel, an iron alloy with very high magnetic permeability.
By layering the steel of the rotor, Leng was able to greatly reduce its heat generation, because the thinner layers limited the heat-generating eddy currents in the steel. Less heat meant he could use higher-strength neodymium magnets, which would otherwise become demagnetized. Finally, he rearranged those magnets into a configuration called a Halbach array. In the end, Leng’s motors were able to produce 609 newtons (137 pounds) of thrust.
Overall, the 2-kilogram motors are capable of sustaining 20 kilowatts, for a power density of 10 kilowatts per kilogram, Leng says. It’s an extraordinary figure. One of the few motor manufacturers claiming a density in that range is H3X Technologies, which says its HPDM-250 clocks in at 12 kW/kg.
The brain of the BlackFly consists of three independent flight controllers, which calculate the aircraft’s orientation and position, based on readings from the inertial-measurement units, GPS receivers, and magnetometers. They also use pitot tubes to measure airspeed. The flight controllers continually cross-check their outputs to make sure they agree. They also feed instructions, based on the operator’s movement of the joystick, to the eight motor controllers (one for each motor).
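The cross-checking among redundant controllers is, at its core, a voting problem. Here is a minimal sketch assuming a simple median-vote scheme; Opener's actual fault-management logic is not public, and the tolerance below is an invented number:

```python
def vote(a, b, c, tolerance=0.05):
    """Return the median of three redundant channel outputs, plus the
    indices of any channel disagreeing with the median by more than
    `tolerance`. The median masks a single faulty channel."""
    median = sorted((a, b, c))[1]
    faults = [i for i, v in enumerate((a, b, c)) if abs(v - median) > tolerance]
    return median, faults

# Three controllers agree: no fault flagged.
value, faults = vote(1.00, 1.01, 0.99)

# Channel 2 drifts badly: its output is outvoted and the fault is reported.
value2, faults2 = vote(1.00, 1.01, 1.50)
```

The appeal of median voting is that no single bad channel can steer the output, which is why triplex (rather than duplex) arrangements are common in fly-by-wire systems.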
Equipped with these sophisticated flight controllers, the fly-by-wire BlackFly resembles hobbyist drones, whose processors and clever algorithms spare the operator the tricky manipulation of sticks, levers, and pedals required to fly a traditional fixed- or rotary-wing aircraft.
That sophisticated, real-time control will allow a far larger number of people to consider purchasing a BlackFly when it becomes available. In late November, Opener had not disclosed a likely purchase price, but in the past the company had suggested that BlackFly would cost as much as a luxury SUV. So who might buy it? CEO Ken Karklin points to several distinct groups of potential buyers who have little in common other than wealth.
There are early tech adopters and also people who are already aviators and are “passionate about the future of electric flight, who love the idea of being able to have their own personal vertical-takeoff-and-landing, low-maintenance, clean aircraft that they can fly in rural and uncongested areas,” Karklin says. “One of them is a business owner. He has a plant that’s a 22-mile drive but would only be a 14-mile flight, and he wants to install charging infrastructure on either end and wants to use it to commute every day. We love that.”
Others are less certain about how, or even whether, this market segment will establish itself. “When it comes to personal-use eVTOLs, we are really struggling to see the business case,” says Sergio Cecutta, founder and partner at SMG Consulting, where he studies eVTOLs among other high-tech transportation topics. “I’m not saying they won’t sell. It’s how many will they sell?” He notes that Opener is not the only eVTOL maker pursuing a path to success through the ultralight or some other specialized FAA category. As of early November, the list included Alauda Aeronautics, Air, Alef, Bellwether Industries, Icon Aircraft, Jetson, Lift Aircraft, and Ryse Aero Technologies.
What makes Opener special? Both Karklin and Leng emphasize the value of all that surrounds the BlackFly aircraft. For example, there are virtual-reality-based simulators that they say enable them to fully train an operator in 10 to 15 hours. The aircraft themselves are heavily instrumented: “Every flight, literally, there’s over 1,000 parameters that are recorded, some of them at 1,000 hertz, some 100 Hz, 10 Hz, and 1 Hz,” says Leng. “All that information is stored on the aircraft and downloaded to our database at the end of the flight. When we go and make a software change, we can do what’s called regression testing by running that software using all the data from our previous flights. And we can compare the outputs against what the outputs were during any specific flight and can automatically confirm that the changes that we’ve made are without any issues. And we can also compare, to see if they make an improvement.”
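The regression testing Leng describes can be sketched as a log-replay harness: re-run candidate software over recorded flights and diff its outputs against what actually flew. The function and log format below are hypothetical, purely to illustrate the idea:

```python
def replay_regression(flight_logs, new_controller, tolerance=1e-3):
    """Replay each logged flight through `new_controller` and report every
    time step whose command deviates from the recorded command."""
    regressions = []
    for flight_id, log in flight_logs.items():
        for t, (sensors, recorded_cmd) in enumerate(log):
            new_cmd = new_controller(sensors)
            if abs(new_cmd - recorded_cmd) > tolerance:
                regressions.append((flight_id, t, recorded_cmd, new_cmd))
    return regressions

# Toy controller and log: command is proportional to altitude error.
old = lambda s: 0.5 * s["alt_error"]
logs = {"flight_049": [({"alt_error": 2.0}, old({"alt_error": 2.0})),
                       ({"alt_error": -1.0}, old({"alt_error": -1.0}))]}

identical = replay_regression(logs, old)                        # no diffs
changed = replay_regression(logs, lambda s: 0.6 * s["alt_error"])  # flags both steps
```

An empty report means the change is behavior-preserving on every recorded flight; a non-empty one tells engineers exactly which flights and moments to examine, which is the comparison capability Leng describes.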
Ed Lu, a former NASA astronaut and executive at Google, sits on Opener’s safety-review board. He says what impressed him most when he first met the BlackFly team was “the fact that they had based their entire development around testing. They had a wealth of flight data from flying this vehicle in a drone mode, an unmanned mode.” Having all that data was key. “They could make their decisions based not on analysis, but after real-world operations,” Lu says, adding that he is particularly impressed by Opener’s ability to manage all the flight data. “It allows them to keep track of every aircraft, what sensors are in which aircraft, which versions of code, all the way down to the flights, to what happened in each flight, to videos of what’s happening.” Lu thinks this will be a huge advantage once the aircraft is released into the “real” world.
Karklin declines to comment on whether an ultralight approval, which is governed by what the FAA designates “Part 103,” might be an opening move toward an FAA type certification in the future. “This is step one for us, and we are going to be very, very focused on personal air vehicles for recreational and fun purposes for the foreseeable future,” he says. “But we’ve also got a working technology stack here and an aircraft architecture that has considerable utility beyond the realm of Part-103 [ultralight] aircraft, both for crewed and uncrewed applications.” Asked what his immediate goals are, Karklin responds without hesitating. “We will be the first eVTOL company, we believe, in serial production, with a small but steadily growing revenue and order book, and with a growing installed base of cloud-connected aircraft that with every flight push all the telemetry, all the flight behavior, all the component behavior, all the operator-behavior data representing all of this up to the cloud, to be ingested by our back office, and processed. And that provides us a lot of opportunity.”
This article appears in the January 2023 print issue as “Finally, an eVTOL You Can Buy Soonish.”
A hospital visit can be boiled down to an initial ailment and an outcome. But health records tell a different story, full of doctors’ notes and patient histories, vital signs and test results, potentially spanning weeks of a stay. In health studies, all of that data is multiplied by hundreds of patients. It’s no wonder, then, that as AI data processing techniques grow increasingly sophisticated, doctors are treating health as an AI and big-data problem.
In one recent effort, researchers at Northwestern University have applied machine learning to electronic health records to produce a more granular, day-to-day analysis of pneumonia in an intensive care unit (ICU), where patients received breathing assistance from mechanical ventilators. The analysis, published 27 April in the Journal of Clinical Investigation, uses machine learning to cluster patient days, and it suggests that long-term respiratory failure and secondary infection are far more common threats to COVID-19 patients than cytokine storms, the subject of much early COVID fear.
“Most methods that approach data analysis in the ICU look at data from patients when they’re admitted, then outcomes at some distant time point,” said Benjamin D. Singer, a study coauthor and associate professor at Northwestern’s Feinberg School of Medicine. “Everything in the middle is a black box.”
The hope is that applying AI to daily ICU patient-status data can yield new clinical findings well beyond this COVID-19 case study.
The day-wise approach to the data led researchers to two related findings: Secondary respiratory infections are a common threat to ICU patients, including those with COVID-19; and a strong association between COVID-19 and respiratory failure, which can be interpreted as an unexpected lack of evidence for cytokine storms in COVID-19 patients. An eventual shift to multiple-organ failure might be expected if patients had an inflammatory cytokine response, which the researchers did not find. Reported rates vary, but cytokine storms have since the earliest days of the pandemic been considered a dangerous possibility in severe COVID-19 cases.
Some 35 percent of patients were diagnosed with a secondary infection, also known as ventilator-associated pneumonia (VAP), at some point during their ICU stays. More than 57 percent of COVID-19 patients developed VAP, compared to 25 percent of non-COVID patients. Multiple VAP episodes were reported for almost 20 percent of COVID-19 patients.
Catherine Gao, an instructor of medicine at Northwestern University and one of the study’s coauthors, said the machine-learning algorithms they used helped the researchers “see clear patterns emerge that made clinical sense.” The team dubbed their day-focused machine-learning approach CarpeDiem, after the Latin phrase meaning “seize the day.”
CarpeDiem was built using the Jupyter Notebook platform, and the team has made both the code and deidentified data available. The data set included 44 different clinical parameters for each patient day, and the clustering approach returned 14 groups with different signatures of six types of organ dysfunction: respiratory, ventilator instability, inflammatory, renal, neurologic, and shock.
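The patient-day idea can be illustrated with a toy clustering example. This is not the CarpeDiem code (which the team has released); it is a minimal sketch, on synthetic data, of clustering days rather than patients. The study used 44 clinical parameters and found 14 clusters; this sketch uses 4 features and 2 clusters purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "clinical states": stable days vs. respiratory-failure days,
# each patient-day represented as a 4-feature vector.
stable_days = rng.normal(loc=0.0, scale=0.3, size=(50, 4))
failure_days = rng.normal(loc=3.0, scale=0.3, size=(50, 4))
patient_days = np.vstack([stable_days, failure_days])

def kmeans(X, k, iters=20):
    """Minimal k-means: assign each patient-day to its nearest centroid."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Keep a centroid in place if its cluster happens to empty out.
        centroids = np.array([X[labels == i].mean(axis=0)
                              if (labels == i).any() else centroids[i]
                              for i in range(k)])
    return labels

labels = kmeans(patient_days, k=2)
```

Because each day is labeled independently, a single patient can move between clusters over a stay, which is exactly what lets a day-wise analysis track transitions such as the onset of a secondary infection.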
“The field has focused on the idea that we can look at early data and see if that predicts how [patients] are going to do days, weeks, or months later,” said Singer. The hope, he said, is that research using daily ICU patient status rather than just a few time points can tell investigators—and the AI and machine-learning algorithms they use—more about the efficacy of different treatments or responses to changes in a patient’s condition. One future research direction would be to examine the momentum of illness, Singer said.
The technique the researchers developed (which they called the “patient-day approach”) might catch other changes in clinical states with less time between data points, said Sayon Dutta, an emergency physician at Massachusetts General Hospital who helps develop predictive models for clinical practice using machine learning and was not involved in the study. Hourly data could present its own problems to a clustering approach, he said, making patterns difficult to recognize. “I think splitting the day up into 8-hour chunks instead might be a good compromise of granularity and dimensionality,” he said.
Calls to incorporate new techniques to analyze the large amounts of ICU health data predate the COVID-19 pandemic. Machine learning or computational approaches more broadly could be used in the ICU in a variety of ways, not just in observational studies. Possible applications could use daily health records, as well as real-time data recorded by health care devices, or involve designing responsive machines that incorporate a range of available information.
The overall mortality rates were around 40 percent in both patients who developed a secondary infection and those who did not. But among study patients with one diagnosed case of VAP, if their secondary pneumonia was not successfully treated within 14 days, 76.5 percent eventually died or were sent to hospice care. The rate was 17.6 percent among those whose secondary pneumonia was considered cured. Both groups included roughly 50 patients.
Singer stresses that the risk of secondary pneumonia is typically a necessary one. “The ventilator is absolutely lifesaving in these instances. It’s up to us to figure out how to best manage complications that arise from it,” he said. “You have to be alive to experience a complication.”
Stay up late sometime when the moon is past full and look at the large dark oval near its western edge. Renaissance astronomers called it the Ocean of Storms, Oceanus Procellarum, not knowing it was a hundred times drier than the most arid desert on Earth.
But there is water there. And two new studies—one Chinese, the other American—suggest that lunar soil may have a good deal more water in it than modern space scientists previously believed. It’s still very, very dry; NASA’s Artemis program is looking for ice in shadowed craters near the moon’s south pole, and mission managers should not change those plans. Still, the new evidence is tantalizing, and scientists say it deserves further exploration.
Some details on each of the findings:
In 2020, the China National Space Administration launched a robotic mission, called Chang’e-5, to the Ocean of Storms. It was China’s first mission to return soil samples from the lunar surface. The CNSA said the ship gathered just over 1.7 kilograms of lunar regolith, which it found to be speckled with thousands of glass beads, mostly microscopic.
China’s Chang’e-5 spacecraft brought back these glass beads from lunar soil.
This was not surprising; the moon has been showered for billions of years with micrometeoroids, and the heat of their impact has been shown to melt rock, which then turns glassy as it cools. But here’s what was new: A team of scientists at the Chinese Academy of Sciences scanned 117 glass beads from Chang’e-5 and claimed most of them contained either water molecules or hydroxyls, molecules with an attached chemical group made of one hydrogen and one oxygen atom.
“The interesting thing is that the water entrapped in impact glass beads is of solar-wind origin,” wrote Hu Sen, one of the study authors, in an email to IEEE Spectrum. Hu’s team, reporting its findings in the journal Nature Geoscience, says much of the hydrogen present streamed from the sun and bonded with oxygen in the lunar soil, creating a water cycle of sorts, enough to help replace water molecules that escape into space due to the sun’s heat.
The Chinese scientists then take this a step further—a big step. If the Chang’e-5 lander found so many glass beads in just one spot, they say, there may be similar beads, impregnated with water or its components, all over the moon. “We believe that impact glass beads formed by meteorite or micrometeorite impacts are a common phase in lunar soils, from equator to polar and from east to west, distributed globally and spread evenly,” wrote Hu.
If that’s true, they say, the outer layers of lunar soil could contain 270 trillion kg of water molecules. There is no good way to compare that to liquid water on Earth (Hu suggests Earth’s oceans weigh about a million times as much) but, still, if a future space program wanted to use lunar water for drinking, oxygen, or the chemical components of rocket fuel, wouldn’t this be intriguing?
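As a sanity check on that comparison (using the common textbook estimate of roughly 1.4 × 10²¹ kg for Earth's oceans, a figure that is an assumption here, not from the study):

```python
# Back-of-the-envelope check on the scale comparison in the text.
LUNAR_GLASS_WATER_KG = 270e12   # 270 trillion kg, per the Chang'e-5 team
EARTH_OCEANS_KG = 1.4e21        # common textbook estimate (assumed)

ratio = EARTH_OCEANS_KG / LUNAR_GLASS_WATER_KG
# The oceans come out a few million times heavier, consistent in order of
# magnitude with Hu's "about a million times."
```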
Not so fast. Scientists doing related work say to tread very, very carefully. “The measurements are well done but it’s not a game changer,” says Rhonda Stroud, director of the Buseck Center for Meteorite Studies at Arizona State University. She was not involved in the Chang’e-5 study, but has done extensive research on the likelihood of water in the lunar regolith. She points out that geologists sometimes use the word “water” loosely to describe both molecules with hydroxyl groups and actual H2O because their chemical signatures may often be very similar.
“There are lots of ways hydrogen can be stored in the glass beads,” she says. She concludes, “It’s premature to say there’s an easily extractable source of water.”
So where does that leave the search for lunar water? For that, let’s turn to the second study:
This study may be on firmer ground because it was done from the air. Last year a team of scientists scanned for possible water on the moon using a converted NASA Boeing 747 called SOFIA. The plane, since retired, carried a 2.7-meter telescope with a spectrometer it could point at the moon. It flew above 99.9 percent of the water vapor in Earth’s atmosphere, so that earthly vapor couldn’t fool its instruments. NASA says infrared spectroscopy is a good way of identifying lunar water and telling it apart from other molecules.
NASA’s SOFIA aircraft mapped water signatures near the moon’s south pole. Darker blue means higher concentration of ice. Ernie Wright/NASA Goddard Scientific Visualization Studio
The resulting map of the region near the moon’s south pole shows some water signatures even on sunlit plains. But the greatest concentrations are in the shadows—against the steep walls of craters where the sun rarely (or never) reaches. That confirms a growing body of research that started in the 1990s, when robotic probes first found evidence of ice hiding in the permanently darkened recesses of polar craters.
NASA plans to send a robotic rover called VIPER to the lunar south pole late in 2024. The agency says its instruments should be able to parse the difference between water, hydroxyl, and other compounds. If it succeeds, Artemis astronauts could follow as soon as December 2025, though the Artemis schedule has often slipped. China and Russia have talked on occasion of a joint lunar effort as well.
Whoever goes, they’ll bring their own water to start. Will they find more?
The Jet Propulsion Laboratory’s Ingenuity helicopter is preparing for the 50th flight of its five-flight mission to Mars. Flight 49, which took place last weekend, was its fastest and highest yet—the little helicopter flew 282 meters at an altitude of 16 meters, reaching a top speed of 6.50 meters per second. Not a bad performance for a tech demo that was supposed to be terminated two years ago.
From here, things are only going to get more difficult for Ingenuity. As the Perseverance rover continues its climb up the Jezero crater’s ancient river delta, Ingenuity is trying its best to scout ahead. But the winding hills and valleys make it difficult for the helicopter to communicate with the rover, and through the rover to its team back on Earth. And there isn’t a lot of time or room to spare, because Ingenuity isn’t allowed to fly too close to Perseverance, meaning that if the rover ever catches up to the helicopter, the helicopter may have to be left behind for the rover’s own safety. This high-stakes race between the helicopter scout and the science rover will continue for kilometers.
For the Ingenuity team, this new mode of operation was both a challenge and an opportunity. This was nothing new for folks who have managed to keep this 30-day technology demo alive and healthy and productive for years, all from a couple hundred million kilometers away. IEEE Spectrum spoke with Ingenuity team lead Teddy Tzanetos at JPL last week about whether flying on Mars is ever routine, how they upgraded Ingenuity for its extended mission, and what the helicopter’s success means for the future of airborne exploration and science on Mars.
IEEE Spectrum: Is 50 flights on Mars a milestone for you folks, or are things routine enough now that you’re looking at it as just another flight?
Teddy Tzanetos: It’s hugely meaningful. We’ll come back to the routine question in a second, but it’s very meaningful for all of us. When we hit 10 and then 25 it was big, but 50 is a pretty serious number now that we’re 10 times our initial flight count. Two years in, 10 kilometers flown, and we’re well over an hour now in the skies of Mars. So hitting flight 50, it’s a big thing—we’re probably going to set up a happy hour and have a big party for the team.
Can you talk about some of the new challenges that Ingenuity has been facing as it makes its way up Jezero crater’s river delta along with the Perseverance rover?
Tzanetos: The core of the challenge here is that the paradigm has changed. When you look at the first year of Ingenuity’s extended operations, we were still in the Three Forks area, where the ground was flat. We could get line of sight from the helicopter to the rover from hundreds and hundreds of meters away. Our longest link that we established was 1.2 kilometers—a massive distance.
And then we started to realize that the rover was going to enter the river delta in like six months. It’s going to start climbing up through dozens and dozens of meters of elevation change and passing through ravines, and that’s going to start presenting a telecom issue for us. We knew that it couldn’t be business as usual anymore—if we still wanted to keep this helicopter mission going, not only did we need to change the way we were operating, we also had to change the helicopter itself.
“We owe it to everyone who worked on Ingenuity and everyone who will continue to work on rotorcraft on Mars to try and get everything out of this little spacecraft that we can.”
—Teddy Tzanetos, NASA
This realization culminated in the most challenging flight software upgrade we’ve ever done with Ingenuity, which happened last December. We went into the guts of our algorithms and added two new features. One was the ability to detect and react to landing hazards from the air, which involved handing over a little bit of autonomy back to Ingenuity, with the ability to tell it, “Fly to your terminal waypoint and try and land where we think is good, based off of orbital imagery. But if you have better information from your images than what we humans had here on Earth, and you see a hazard, pick a safer site and land there instead.” So that’s one huge change in what’s happening now. And we need that at the river delta because we’re no longer flying in a parking lot—besides the challenge of the elevation change, the terrain is different as well, with more and larger rocks that Ingenuity needs to avoid.
The second feature that we added was to include information about the terrain to Ingenuity’s navigation filter. When we designed Ingenuity, we assumed we were only going to be deployed on the flat terrain of Three Forks. Therefore, any change in the laser altimeter measurement we could trust to be a real change in the motion of the helicopter, or we could at least filter that into our altitude data. But that’s no longer the case. Now, as Ingenuity flies, if the altimeter sees a big decrease in elevation, that could be because the ground is rising to meet us rather than because we’re moving down. So since December, we’ve been telling Ingenuity about the elevation profile across its intended flight so that it knows what the ground is doing underneath it.
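The terrain-aware filtering Tzanetos describes can be illustrated with a minimal, hypothetical sketch. This is not Ingenuity's flight software; the function names, the route-profile representation, and the numbers are all illustrative assumptions. The idea is simply that a laser-altimeter reading is height above ground, so recovering the vehicle's actual altitude means adding back the known ground elevation at the current point along the planned route:

```python
# Hypothetical sketch of terrain-compensated altitude estimation.
# Names and numbers are illustrative, not Ingenuity flight software.

def estimate_altitude(agl_reading, distance_along_route, terrain_profile):
    """Convert a laser-altimeter height-above-ground reading into an
    absolute altitude by adding the known ground elevation at the
    vehicle's position along the planned route.

    terrain_profile: list of (distance_m, ground_elevation_m) samples,
    assumed sorted by distance.
    """
    # Linearly interpolate the ground elevation at the current distance.
    for (d0, e0), (d1, e1) in zip(terrain_profile, terrain_profile[1:]):
        if d0 <= distance_along_route <= d1:
            frac = (distance_along_route - d0) / (d1 - d0)
            ground = e0 + frac * (e1 - e0)
            break
    else:
        ground = terrain_profile[-1][1]  # past the end of the profile

    return agl_reading + ground

# Rising ground: without the profile, a shrinking altimeter reading
# would look like the helicopter descending when it is actually level.
profile = [(0.0, 0.0), (100.0, 10.0), (200.0, 30.0)]
altitude = estimate_altitude(12.0, 150.0, profile)  # ground is 20 m here
```

Without the profile term, the filter would misread rising terrain as downward motion of the helicopter, which is exactly the failure mode the team had to design out after leaving the flat Three Forks area.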
Now that both the rover and the helicopter have begun the river delta climb, we’re also paying very close attention to our telecom-link budget maps. You can imagine every hill or rise that could occlude the line of sight between the helicopter antenna and the rover antenna will have a big impact on your telecom link, and we have wonderful maps from orbit where we can pick a potential landing point and propagate our radio-link budget calculation across that point.
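The occlusion problem behind those link-budget maps can be reduced to a simple geometric check: does any terrain sample along the path rise above the straight line between the two antennas? The sketch below is a hypothetical simplification (real link budgets also account for antenna gain, path loss, and Fresnel-zone clearance, none of which are modeled here):

```python
# Hypothetical line-of-sight check between two antennas over terrain.
# Purely geometric; ignores radio diffraction and Fresnel zones.

def has_line_of_sight(h_rover, h_heli, terrain):
    """Return True if no terrain sample blocks the straight line
    between the rover antenna and the helicopter antenna.

    terrain: list of (fraction_along_path, elevation_m) samples with
    0 < fraction < 1; h_rover and h_heli are the antenna elevations
    at the two endpoints.
    """
    for frac, elev in terrain:
        # Elevation of the direct ray at this fraction of the path.
        ray = h_rover + frac * (h_heli - h_rover)
        if elev >= ray:
            return False
    return True

# A 6 m rise halfway along blocks a link between 1 m and 3 m antennas;
# a 1.5 m rise does not.
blocked = has_line_of_sight(1.0, 3.0, [(0.5, 6.0)])
clear = has_line_of_sight(1.0, 3.0, [(0.5, 1.5)])
```

Running a check like this from orbital elevation data across every candidate landing point is, in essence, how a team can map where a radio link will and won't survive in hilly terrain.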
We’re trying to plan these flights as aggressively as we can to make sure that we stay ahead of Perseverance. We don’t want to run the risk of having a situation where the rover may need to wait for Ingenuity—that’s not a good thing for anybody. But we also want to provide value for the rover by scouting ahead, and what we hope to do on flight 50 is to get some imagery of the Belva crater, which is this beautiful massive crater to the north of where Ingenuity currently is. We’re going to get perspectives that the rover team would not be able to provide for the science team, and it’s really exciting for us when there are these moments that are uniquely driven by Ingenuity’s capability. We want to go after those, because we want to provide that value while she’s still healthy. While we still can. We owe it to everyone who worked on Ingenuity and everyone who will continue to work on rotorcraft on Mars to try and get everything out of this little spacecraft that we can.

“One of the best hallmarks of technology success is when you don’t realize it, or when it becomes boring. That means the technology is working, and that’s a wonderful feeling.”
—Teddy Tzanetos, NASA
At one point, NASA was very clear that Ingenuity’s mission would come to an end so that Perseverance could move on to focus on its primary mission. But obviously, Ingenuity is still flying, and still keeping up with the rover. Not only that, but we’ve heard from a rover driver how valuable it is to have Ingenuity scouting ahead. With that in mind, as Ingenuity navigates this challenging terrain, will there be any flexibility if something doesn’t go quite right, or will Perseverance just leave the helicopter behind?
Tzanetos: We have to look at the big picture. The most important thing at this point is for Perseverance to collect samples and do science. If you look at everything that needs to be done across all of the rover’s science payloads, every sol [Martian day] is precious. And the helicopter team understands that.
We’re doing our best to become more efficient, and I think that’s a big win that we don’t celebrate enough on the Ingenuity team internally—how much more efficient we are today compared to where we were two years ago. Earlier, you mentioned flying becoming routine. I think the team has succeeded in doing that, and I’m extremely proud of that accomplishment. One of the best hallmarks of technology success is when you don’t realize it, or when it becomes boring. That means the technology is working, and that’s a wonderful feeling.
There’s what’s called a tactical window that we have between the downlink of the last sol’s activity and when we need to uplink activity for the next sol, which is anywhere from five to 10 hours. A certain cadence of activities has to take place during that window, and we need to pass certain checkpoints to get our data uploaded and radiated through the Deep Space Network in time. We’ve worked very, very hard to minimize our footprint on that timeline, while also being reactive so that we can move quickly on any last-minute changes that the rover team needs us to accommodate. We have to get in, fly, and get out.
Anomalies will happen. That’s just the nature of Mars. But when those moments occur, the helicopter and rover teams back each other up. To be clear, no one on the helicopter team wants to cause a delay for the rover. We all want the rover to fulfill its mission, get its samples, and get the science done. If we have a serious anomaly, we’ll have to take that one sol at a time. We’re going to try as hard as we can to make sure we can keep pushing this little baby as far as we can while still accomplishing the core science mission.
NASA’s Ingenuity Mars Helicopter takes off and lands in this video captured on 19 April 2021 by Mastcam-Z, an imager aboard NASA’s Perseverance Mars rover. This video features only the moments of takeoff and the landing—and not footage of the helicopter hovering for about 30 seconds. NASA/JPL-Caltech/ASU/MSSS
How do you balance risk to the helicopter against exploration and science goals, or trying new things like pushing Ingenuity’s flight envelope?
Tzanetos: That’s the fun part! There’s no instruction manual. The way we do it is we have a phone call with the core people on the team, and everyone just shares their opinions. The highest priority for us is getting some good scouting imagery for the scientists and rover drivers—we jump at those opportunities. If we’re flying through a piece of terrain that isn’t particularly interesting, that’s when we start looking at the flight envelope developments, right? With flight 49, we’re going higher than we ever had before and flying faster than we ever have before. That’s not a request from the science community or the rover planners; that’s coming from our own internal team where we’re trying to release capability piece by piece as the flights go on, because every time we get that win, it’s a win for the sample recovery helicopters. So there’s that ever-present pressure to push harder, push faster, push higher. And let’s also get some wonderful scouting data along the way when we can.
What have you learned about flying helicopters on Mars from 50 flights that you would have no idea about if you’d been able to do just five flights?
Tzanetos: Tons of things, since I just talked about flying faster and flying higher, and we’ve now legitimately expanded Ingenuity’s flight envelope. There’s the lifetime argument, which is obvious—this design has lasted much longer than anyone could have expected, even just in terms of parts and workmanship. Every one of Ingenuity’s nearly 1,000 solder joints was soldered by technicians at JPL who have the most blessed, precise hands. We’d designed Ingenuity to fly in springtime on Mars, but during the Martian winter, for more than 200 sols the temperature cycled between 20 °C and –90 °C and back again. Eventually, it got so cold that Ingenuity’s battery would die every night, the heater would stop running, and everything would freeze. That was a massive curveball that we had to contend with, but because of the workmanship of those people, Ingenuity was able to survive.
“We now have a stake in the ground to say, ‘Off-the-shelf works, we can trust these things.’”
—Teddy Tzanetos, NASA
Also, dust. We knew that dust would settle on Ingenuity’s solar panel, but we’ve shown that through the process of flying, there’s some sort of effect that’s helping us to keep our panel clean. It’s difficult to put a finger on exactly what it is—maybe the vibration of flight, or the downwash of air passing over the solar panel and into the rotors, or the oncoming air as we move forward. And it wasn’t just the dust on the panels; we also got dust in our actuators. Last year, Ingenuity weathered a big dust storm, and afterwards when we tried checking our control surfaces, things did not look good. The motor currents were way too high, and we were left scratching our heads, trying to figure out what to do. We didn’t have dust boots around the rotor system simply because we had thought, “We’re only going to be operating for 30 days, we don’t need them.”
Our partners at AeroVironment [who worked with JPL on the Mars helicopter design] had one of the swash plate mechanisms lying around, so they spoke to our geologists to figure out what kinds of dust particles might have gotten blown into the swash plate on Mars. We sent them some simulated Mars dust, and they threw it at the swash plate, and then did an experiment to figure out how many times they needed to cycle it before it started to operate properly. Seven cycles got most of the dust out, so we tried that on Mars, and it worked. So now we have a new tool in our tool belt: We know how to clean ourselves. That’s huge. And we wouldn’t have figured out any of these things had we not gone past five flights.
Looking at the Mars sample return helicopters, how much of their design has been made possible by the fact that Ingenuity has been able to fly this long and answer these questions that you might not have even thought to ask?
Tzanetos: The entire design. I don’t think we’d be talking about sample recovery helicopters if Ingenuity didn’t fly, period, and if it hadn’t survived for as long as it has. You have to keep in mind, Ingenuity is a tech demo. These sample recovery helicopters are a real part of the mission now. If Perseverance has an anomaly in the next decade, these helicopters are the backup—they have to work. And I’m sure that Ingenuity’s two years of extended operations provided the evidence necessary to even start talking about the sample recovery helicopters. Otherwise, it would be crazy to think, “Let’s go from tech demo to part of a class B mission within a year.”
That’s amazing. It must feel really good for you folks to have completely changed what the sample return mission looks like because of how successful Ingenuity has been.
Tzanetos: Absolutely. I personally thought to myself, “Hey, this is great, Ingenuity has been doing a great job, and this will be wonderful data for the next time we send a rotorcraft to Mars.” Which I thought was going to be like 10 years later—I thought that the Mars sample return would happen with a rover, and then maybe after that, we could throw some helicopters on Mars, maybe a hexacopter with some science payloads on it. Never in my wildest dreams did I ever think, while we’re still flying Ingenuity, that we’d be designing the next helicopter mission based on Ingenuity to go to Mars.
More broadly, how has Ingenuity influenced NASA’s approach to robotics?
Tzanetos: From a robotics perspective, I hope one of the long-lasting impacts of Ingenuity is the adoption of commercial off-the-shelf technology into more NASA missions, and other non-NASA missions into space. This was the first time we flew a cellphone processor, not because we loved the idea about using a part that wasn’t radiation hardened but because we were forced to. We needed a high-throughput processor, and the only way to do that and be lightweight enough was to use a cellphone chip. There was a lot of concern about that—we did some initial testing, but given that we were a tech demo, which means high-risk, high reward, we could only do so much. And here we are, two years later, with this Snapdragon Qualcomm processor that’s been running for two years on the surface of Mars, not to mention all the other components like the IMU [inertial measurement unit], the camera, the battery, the solar panels. I think that’s one of the unsung victories of Ingenuity. We now have a stake in the ground to say, “Off-the-shelf works, we can trust these things.” And we can make a stronger argument for the next mission to really enable your engineers and your scientists to have much more technology on board than anything else we’ve sent into space.
Ingenuity will attempt Flight 50 anytime now, with the goal of traveling 300 meters to the other side of a ridge. The landing site may make it difficult to know whether the flight was successful until Perseverance catches up a bit, but we hope to hear the good news within the next few days.
The mission to return martian samples to Earth will see a European 2.5-metre-long robotic arm pick up tubes filled with precious soil from Mars and transfer them to a rocket for an historic interplanetary delivery.
The sophisticated robot, known as the Sample Transfer Arm or STA, will play a crucial role in the success of the Mars Sample Return campaign.
The Sample Transfer Arm is conceived to be autonomous, highly reliable and robust. The robot can perform a large range of movements with seven degrees of freedom, assisted by two cameras and a myriad of sensors. It features a gripper – akin to a hand – that can capture and handle the sample tubes at different angles.
The robotic arm will land on Mars to retrieve the sample tubes NASA’s Perseverance rover is currently collecting from the surface. Able to “see”, “feel” and take autonomous decisions, its high level of dexterity allows the arm to extract the tubes from the rover, pick them up from the martian ground, insert them into a container and close the lid before lifting off from Mars.
ESA’s Earth Return Orbiter (ERO) will rendezvous with the container filled with martian samples and bring the material back to Earth.
The joint endeavour between NASA and ESA aims to bring martian samples back to the best labs on our planet by 2033.
Follow the latest news about Mars Sample Return on Twitter and read all about it on the blog To Mars and Back.
More about the Sample Transfer Arm
Three days before astronauts left on Apollo 8, the first-ever flight around the moon, NASA’s safety chief, Jerome Lederer, gave a speech that was at once reassuring and chilling. Yes, he said, the United States’ moon program was safe and well-planned—but even so, “Apollo 8 has 5,600,000 parts and one and one half million systems, subsystems, and assemblies. Even if all functioned with 99.9 percent reliability, we could expect 5,600 defects.”
The mission, in December 1968, was nearly flawless—a prelude to the Apollo 11 landing the next summer. But even today, half a century later, engineers wrestle with the sheer complexity of the machines they build to go to space. NASA’s Artemis I, its Space Launch System rocket mandated by Congress in 2010, endured a host of delays before it finally launched in November 2022. And Elon Musk’s SpaceX may be lauded for its engineering acumen, but it struggled for six years before its first successful flight into orbit.
Relativity envisions 3D-printing facilities someday on the Martian surface, fabricating much of what people from Earth would need to live there.
Is there a better way? An upstart company called Relativity Space is about to try one. Its Terran 1 rocket, the company says, has about a tenth as many parts as comparable launch vehicles do, because it is made through 3D printing. Instead of bending metal and milling and welding, engineers program a robot to deposit layers of metal alloy in place.
Relativity’s first rocket, the company says, is ready to go from Launch Complex 16 at Cape Canaveral, Fla. When the launch happens, the company says it will stream the liftoff on YouTube.
Artist’s concept of Relativity’s planned Terran R rocket. The company says it should be able to carry a 20,000-kilogram payload into low Earth orbit. Relativity
“Over 85 percent of the rocket by mass is 3D printed,” said Scott Van Vliet, Relativity’s head of software engineering. “And what’s really cool is not only are we reducing the amount of parts and labor that go into building one of these vehicles over time, but we’re also reducing the complexity, we’re reducing the chance of failure when you reduce the part count, and you streamline the build process.”
Relativity says it can put together a Terran rocket in two months, compared to two years for some conventionally built ones. The speed and cost of making a prototype—say, for wind-tunnel testing—are reduced because you tell the printer to make a scaled-down model. There is less waste because the process is additive. And if something needs to be modified, you reprogram the 3D printer instead of slow, expensive retooling.
Investors have noticed. The company says financial backers have included BlackRock, Y Combinator and the entrepreneur Mark Cuban.
“If you walk into any rocket factory today other than ours,” said Josh Brost, the company’s head of business development, “you still will see hundreds of thousands of parts coming from thousands of vendors, and still being assembled using lots of touch labor and lots of big-fix tools.”
Terran 1, rated as capable of putting a 1,250-kilogram payload in low Earth orbit, is mainly intended as a test bed. Relativity has signed up a variety of future customers for satellite launches, but the first Terran 1 (“Terran” means “earthling”) will not carry a paying customer’s satellite. The first flight has been given the playful name “Good Luck, Have Fun”—GLHF for short. Eventually, if things are going well, Relativity will build a larger booster, called Terran R, which the company hopes will compete with the SpaceX Falcon 9 for launches of up to 20,000 kg. Relativity says the Terran R should be fully reusable, including the upper stage—something that other commercial launch companies have not accomplished. In current renderings, the rocket is, as the company puts it, “inspired by nature,” shaped to slice through the atmosphere as it ascends and comes back for recovery.
A number of Relativity’s top people came from Musk’s SpaceX or Jeff Bezos’s space company, Blue Origin, and, like Musk, they say their vision is a permanent presence on Mars. Brost calls it “the long-term North Star for us.” They say they can envision 3D-printing facilities someday on the Martian surface, fabricating much of what people from Earth would need to live there. “For that to happen,” says Brost, “you need to have manufacturing capabilities that are autonomous and incredibly flexible.”
Relativity’s fourth-generation Stargate 3D printer. Relativity
Just how Relativity will do all these things is a work in progress. The company says its 3D technology will help it work iteratively—finding mistakes as it goes, then correcting them as it prints the next rocket, and the next, and so on.
“In traditional manufacturing, you have to do a ton of work up front and have a lot of the design features done well ahead of time,” says Van Vliet. “You have to invest in fixed tooling that can often take years to build before you’ve actually developed an article for your launch vehicle. With 3D printing, additive manufacturing, we get to building something very, very quickly.”
The next step is to get the first rocket off the pad. Will it succeed? Brost says a key test will be getting through max q—the point of maximum dynamic pressure on the rocket as it accelerates through the atmosphere before the air around it thins out.
“If you look at history, at new space companies doing large rockets, there’s not a single one that’s done their first rocket on their first try. It would be quite an achievement if we were able to achieve orbit on our inaugural launch,” says Brost.
“I’ve been to many launches in my career,” he says, “and it never gets less exciting or nerve wracking to me.”
An experimental, potentially revolutionary all-electric airplane designed by NASA will soon be taking its first test flight, which will mark a major milestone for battery-powered aviation. However, the program already appears destined to fall short of its lofty goal to exploit the unique features of electric propulsion to rewrite the design rules for modern aircraft. Its time and funding have nearly run out.
Part of the agency’s storied X-plane program, the X-57 Maxwell set out with the ambitious goal of tackling two grand challenges in aerospace engineering simultaneously. Not only did it aim to show that an airplane could be powered entirely by electricity, it also planned to demonstrate the significant gains in efficiency and performance that could be made by switching from two large engines to many smaller ones evenly distributed across the wings—a configuration known as a “blown wing.”
The plan was to demonstrate both of these propositions through a series of increasingly advanced test vehicles. Ultimately though, the complexity of the first challenge, compounded by disruptions caused by the COVID-19 pandemic, saw timelines repeatedly pushed back. As a result, the project’s leaders say it no longer has the funding to progress to the latter stages of the program.
“It turned out to be actually a pretty tall order to work through all of those airworthiness, and qualification, and design challenges.”
—Sean Clarke, NASA
The first iteration of the X-57, a modified Tecnam P2006T light aircraft whose gas-powered engines have been replaced with electric motors, will take flight this coming spring or possibly summer. (As of early January, NASA is still unclear as to precisely when that maiden voyage will be. NASA officials Spectrum contacted could only narrow the timeframe down to “first half of 2023.”) That will be a significant achievement, making the X-57 one of just a handful of electrically powered aircraft to get off the ground. But the team say they plan to wrap up flight testing by the end of the year and will no longer be building more advanced designs featuring novel wing configurations and distributed propulsion, such as the blown wing.
“We tried to do a very ambitious thing. Trying to do a new type of airframe and a new motor project is not very typical, because those are both very, very challenging endeavors,” says Nick Borer, deputy principal investigator for the X-57 project at the NASA Langley Research Center. “The agency funds a lot of different things and they’ve been very generous with what they’ve provided to us. But there are priorities at the top and eventually, you’ve got to finish up.”
The project’s ultimate goal was to take advantage of the benefits of electric propulsion to reimagine the design of aircraft wings. For instance, in the case of that blown wing: the large number of motors and props on the leading edge force air at high rates over the wing, which can generate significant lift even at low speeds. This makes it possible to take off from shorter runways and can also allow you to shrink the size of the wing, reducing drag and boosting cruise efficiency.
The design is difficult to achieve with conventional combustion engines, because they are relatively heavy and become increasingly inefficient as they are scaled down in size. The same is not true of electric motors though, which means it’s relatively simple to switch from several large motors to many smaller ones distributed along the wing.
The current iteration of the X-57, pictured here, is powered by two electric motors and is based at the NASA Armstrong Flight Research Center in California. Carla Thomas/NASA
The final design iteration of the X-57 had six small electrically powered propellers across the front of each wing. The wings themselves would be only 40 percent of the size of a conventional P2006T wing. The design also featured two larger motors mounted on the tips of each wing, which would further reduce drag by counteracting the vortices normally produced at the end of each wing. Because the high lift generated by the smaller propellers along the leading edge would only be needed at take-off, these were designed to fold up once at cruising altitude to further reduce drag.
“The whole idea of an X-plane is to do something that has never been done before, and so I think it is just normal to expect that there is a learning curve.”
—Sergio Cecutta, SMG Consulting
Altogether these aerodynamic innovations would slash the plane’s power consumption at cruise by as much as a third, according to Borer. Electric motors are also about three times more efficient than gasoline-burning ones in terms of their power-to-weight ratio, he adds, so combined, these design changes were expected to lead to a roughly fivefold reduction in energy requirements while flying at cruise speeds of around 280 kilometers per hour.
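A quick back-of-envelope check shows how the quoted figures compose into "roughly fivefold." This is my own arithmetic reading of the numbers in the article, not NASA's published analysis: cutting cruise power by about a third leaves two-thirds of the original requirement, and a roughly 3x propulsion-efficiency gain divides that again.

```python
# Back-of-envelope composition of the quoted figures (not NASA's math).
aero_power_fraction = 2 / 3   # cruise power after the ~1/3 reduction
propulsion_gain = 3           # quoted electric-vs-gasoline factor

# Overall factor by which energy requirements shrink at cruise.
combined_reduction = propulsion_gain / aero_power_fraction  # ≈ 4.5
```

A factor of about 4.5 is consistent with the "roughly fivefold reduction" claimed for the combined design changes.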
Switching to electric propulsion turned out to be more complicated than envisioned. The team had to completely redesign their battery packs in 2017 to avoid the risk of catastrophic fires. The high voltages and power levels required for electric aviation also posed significant complications, says Borer, requiring several iterations of the systems designed to protect components from electromagnetic interference.
Early on in the project they also found that state-of-the-art transistors able to withstand high power levels couldn’t tolerate the vibrations and temperatures involved in flight. This was resolved only recently by switching to a newer generation of silicon carbide MOSFET modules, says Sean Clarke, principal investigator for the X-57 project at the NASA Armstrong Flight Research Center in California. “It turned out to be actually a pretty tall order to work through all of those airworthiness, and qualification, and design challenges,” he says.
These delays mean the project’s more ambitious goals may not come to fruition, but Borer hopes that others will be able to pick up from where they left off. The team has been regularly publishing their findings and data as they’ve progressed, he says. They are also actively contributing to standards for electric aviation and are working with regulators to help develop aircraft certification processes. “We’re pushing out everything that we can,” says Borer.
The X-57’s custom-made battery packs installed in the aircraft’s cabin provide all the aircraft’s power, rather than the JET A/A-1 fuel that powers most aviation today. Lauren Hughes/NASA
This information sharing has already borne fruit. NASA’s main subcontractor for the project, California-based Empirical Systems Aerospace, has been able to commercialize the X-57’s battery pack design, and the agency has a technology-transfer agreement with Virginia-based electric-aircraft designer Electra, which involved the team sharing information on their aerodynamic innovations. The company that NASA initially contracted to build the electric motors, Joby Aviation, has also gone on to develop its own electric vertical take-off and landing (eVTOL) vehicle and is today one of the leaders in the industry.
This is the beauty of a publicly funded effort like the X-57, says Sergio Cecutta, founder and partner at SMG Consulting, who covers the electric-aviation industry. Unlike a private development effort, he says, all of the advances and lessons that have come out of the project will be in the public domain and can spread throughout the industry. And while it may not have achieved its most ambitious goals, Cecutta says it has done exactly what was intended, which was to remove some of the roadblocks holding back electric aviation.
“The whole idea of an X-plane is to do something that has never been done before, and so I think it is just normal to expect that there is a learning curve,” he says. “In the end, you want to lay the groundwork for the industry to become successful, and I think on that metric, the X-57 has been a successful project.”
A rocket built by Indian startup Skyroot has become the country’s first privately developed launch vehicle to reach space, following a successful maiden flight earlier today. The suborbital mission is a major milestone for India’s private space industry, say experts, though more needs to be done to nurture the fledgling sector.
The Vikram-S rocket, named after the founder of the Indian space program, Vikram Sarabhai, lifted off from the Indian Space Research Organization’s (ISRO) Satish Dhawan Space Centre, on India’s east coast, at 11:30 a.m. local time (1 a.m. eastern time). It reached a peak altitude of 89.5 kilometers (55.6 miles), crossing the 80-km line that NASA counts as the boundary of space, but falling just short of the 100 km recognized by the Fédération Aéronautique Internationale.
In the longer run, India’s space industry has ambitions of capturing a significant chunk of the global launch market.
Pawan Kumar Chandana, cofounder of the Hyderabad-based startup, says the success of the launch is a major victory for India’s nascent space industry, but the buildup to the mission was nerve-racking. “We were pretty confident on the vehicle, but, as you know, rockets are very notorious for failure,” he says. “Especially in the last 10 seconds of countdown, the heartbeat was racing up. But once the vehicle had crossed the launcher and then went into the stable trajectory, I think that was the moment of celebration.”
At just 6 meters (20 feet) long and weighing only around 550 kilograms (0.6 tonnes), the Vikram-S is not designed for commercial use. Today’s mission, called Prarambh, which means “the beginning” in Sanskrit, was designed to test key technologies that will be used to build the startup’s first orbital rocket, the Vikram I. That rocket will reportedly be capable of lofting as much as 480 kg to a 500-km altitude and is slated for a maiden launch next October.
Skyroot cofounder Pawan Kumar Chandana standing in front of the Vikram-S rocket at the Satish Dhawan Space Centre, on the east coast of India. Skyroot
In particular, the mission has validated Skyroot’s decision to go with a novel all-carbon fiber structure to cut down on weight, says Chandana. It also allowed the company to test 3D-printed thrusters, which were used for spin stabilization in Vikram-S but will power the upper stages of its later rockets. Perhaps the most valuable lesson, though, says Chandana, was the complexity of interfacing Skyroot’s vehicle with ISRO’s launch infrastructure. “You can manufacture the rocket, but launching it is a different ball game,” he says. “That was a great learning experience for us and will really help us accelerate our orbital vehicle.”
Skyroot is one of several Indian space startups looking to capitalize on recent efforts by the Indian government to liberalize its highly regulated space sector. Due to the dual-use nature of space technology, ISRO has historically had a government-sanctioned monopoly on most space activities, says Rajeswari Pillai Rajagopalan, director of the Centre for Security, Strategy and Technology at the Observer Research Foundation think tank, in New Delhi. While major Indian engineering players like Larsen & Toubro and Godrej Aerospace have long supplied ISRO with components and even entire space systems, the relationship has been one of a supplier and vendor, she says.
But in 2020, Finance Minister Nirmala Sitharaman announced a series of reforms to allow private players to build satellites and launch vehicles, carry out launches, and provide space-based services. The government also created the Indian National Space Promotion and Authorisation Centre (InSpace), a new agency designed to act as a link between ISRO and the private sector, and affirmed that private companies would be able to take advantage of ISRO’s facilities.
The first launch of a private rocket from an ISRO spaceport is a major milestone for the Indian space industry, says Rajagopalan. “This step itself is pretty crucial, and it’s encouraging to other companies who are looking at this with a lot of enthusiasm and excitement,” she says. But more needs to be done to realize the government’s promised reforms, she adds. The Space Activities Bill that is designed to enshrine the country’s space policy in legislation has been languishing in draft form for years, and without regulatory clarity, it’s hard for the private sector to justify significant investments. “These are big, bold statements, but these need to be translated into actual policy and regulatory mechanisms,” says Rajagopalan.
Skyroot’s launch undoubtedly signals the growing maturity of India’s space industry, says Saurabh Kapil, associate director in PwC’s space practice. “It’s a critical message to the Indian space ecosystem, that we can do it, we have the necessary skill set, we have those engineering capabilities, we have those manufacturing or industrialization capabilities,” he says.
The Vikram-S rocket blasting off from the Satish Dhawan Space Centre, on the east coast of India.Skyroot
However, crossing this technical milestone is only part of the challenge, he says. The industry also needs to demonstrate a clear market for the kind of launch vehicles that companies like Skyroot are building. While private players are showing interest in launching small satellites for applications like agriculture and infrastructure monitoring, he says, these companies will be able to build sustainable businesses only if they are allowed to compete for more lucrative government and defense-sector contracts.
In the longer run, though, India’s space industry has ambitions of capturing a significant chunk of the global launch market, says Kapil. ISRO has already developed a reputation for both reliability and low cost—its 2014 mission to Mars cost just US $74 million, one-ninth the cost of a NASA Mars mission launched the same week. That is likely to translate to India’s private space industry, too, thanks to a considerably lower cost of skilled labor, land, and materials compared with those of other spacefaring nations, says Kapil. “The optimism is definitely there that because we are low on cost and high on reliability, whoever wants to build and launch small satellites is largely going to come to India,” he says.
News about Khan, a former prime minister at the center of a political crisis roiling Pakistan, mostly disappeared from the country’s media.
The post In Secret Meeting, Pakistani Military Ordered Press to Stop Covering Imran Khan appeared first on The Intercept.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. This week, we’re featuring a special selection of videos from ICRA 2023! We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
Abstract: Near the limits of adhesion, the forces generated by a tire are nonlinear and intricately coupled. Efficient and accurate modeling in this region could improve safety, especially in emergency situations where high forces are required. To this end, we propose a novel family of tire force models based on neural ordinary differential equations and a neural-ExpTanh parameterization. These models are designed to satisfy physically insightful assumptions while also having sufficient fidelity to capture higher-order effects directly from vehicle state measurements. They are used as drop-in replacements for an analytical brush tire model in an existing nonlinear model predictive control framework. Experiments with a customized Toyota Supra show that scarce amounts of driving data—less than 3 minutes—are sufficient to achieve high-performance autonomous drifting on various trajectories with speeds up to 45 miles per hour. Comparisons with the benchmark model show a 4x improvement in tracking performance, smoother control inputs, and faster and more consistent computation time.
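The learned model itself lives in the paper, but the underlying idea, integrating a saturating ODE to obtain tire forces rather than evaluating a closed-form curve, can be sketched in miniature. Everything here (the tanh nonlinearity standing in for the neural network, and the constants F_MAX, K, and TAU) is invented for illustration:

```python
import math

# Toy stand-in for a learned tire model: a first-order ODE whose
# right-hand side saturates via tanh. F_MAX, K, and TAU are invented
# constants, not the paper's learned parameters.
F_MAX, K, TAU = 3.0, 8.0, 0.1  # peak force, slip gain, relaxation time

def tire_force(slip, dt=0.01, steps=200):
    """Forward-Euler integration of dF/dt = (F_MAX*tanh(K*slip) - F)/TAU."""
    F = 0.0
    for _ in range(steps):
        F += dt * (F_MAX * math.tanh(K * slip) - F) / TAU
    return F

# At small slip the steady-state force grows roughly linearly with slip;
# at large slip it saturates near F_MAX, the "limits of adhesion" regime.
forces = [tire_force(s) for s in (0.05, 0.5, 1.0)]
```

In the paper the right-hand side is a neural network trained from vehicle state measurements; here the tanh plays that role so the integration structure stays visible.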
Abstract: Aerial-aquatic vehicles are capable of moving in the two most dominant fluids, making them more promising for a wide range of applications. We propose a prototype with special designs for propulsion and thruster configuration to cope with the vast differences in the fluid properties of water and air. For propulsion, the operating range is switched for the different mediums by the dual-speed propulsion unit, providing sufficient thrust and also ensuring output efficiency. For thruster configuration, thrust vectoring is realized by the rotation of the propulsion unit around the mount arm, thus enhancing the underwater maneuverability. This paper presents a quadrotor prototype of this concept and the design details and realization in practice.
Abstract: The falling cat problem is well known: cats show a remarkable aerial reorientation capability and can land safely. For their robotic counterparts, the analogous falling quadruped robot problem has not been fully addressed, although achieving a landing as safe as a cat’s has been increasingly investigated. Rather than placing the burden on landing control, we approach the safe landing of falling quadruped robots through effective flight-phase control. Unlike existing work that swings the legs or attaches reaction wheels or simple tails, we propose to deploy a 3-DoF morphable inertial tail on a medium-size quadruped robot. In the flight phase, the tail at its maximum length can effectively self-right the body orientation in 3D; before touchdown, the tail can be retracted to about 1/4 of its maximum length to suppress its side effects on landing. To enable aerial reorientation for safe landing, we design a control architecture that is verified in a high-fidelity physics simulation environment under different initial conditions. Experimental results on a customized flight-phase test platform with comparable inertial properties show the tail’s effectiveness for 3D body reorientation and its fast retractability before touchdown. An initial falling-quadruped experiment is also shown, in which the robot, a Unitree A1 with the 3-DoF tail, lands safely from non-negligible initial body angles.
Abstract: Achieving stable hopping has been a hallmark challenge in the field of dynamic legged locomotion. Controlled hopping is notably difficult due to extended periods of underactuation combined with very short ground phases wherein ground interactions must be modulated to regulate a global state. In this work, we explore the use of hybrid nonlinear model predictive control paired with a low-level feedback controller in a multirate hierarchy to achieve dynamically stable motions on a novel 3D hopping robot. In order to demonstrate richer behaviors on the manifold of rotations, both the planning and feedback layers must be designed in a geometrically consistent fashion; therefore, we develop the necessary tools to employ Lie group integrators and appropriate feedback controllers. We experimentally demonstrate stable 3D hopping on a novel robot, as well as trajectory tracking and flipping in simulation.
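A minimal sketch (not the authors' controller) of why the Lie group machinery the hopping abstract mentions matters: multiplying by the matrix exponential keeps a rotation matrix on SO(3), while a naive additive Euler step lets it drift off the manifold. The angular velocity and step counts below are arbitrary illustration values.

```python
import math

# Compare a geometric (Lie group) integrator, R <- R * exp(dt * hat(w)),
# with the naive Euler update R <- R + dt * R * hat(w).

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def hat(w):
    """Skew-symmetric matrix of a 3-vector (so(3) element)."""
    x, y, z = w
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def madd(A, B, s):
    """A + s * B, elementwise."""
    return [[A[i][j] + s * B[i][j] for j in range(3)] for i in range(3)]

def expm_so3(w, dt):
    """Rodrigues' formula for exp(dt * hat(w))."""
    th = math.sqrt(sum(c * c for c in w)) * dt
    W = hat([c * dt for c in w])
    if th < 1e-12:
        return madd(I3, W, 1.0)
    return madd(madd(I3, W, math.sin(th) / th),
                matmul(W, W), (1.0 - math.cos(th)) / (th * th))

def orthogonality_error(R):
    """How far R^T R is from the identity."""
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    E = matmul(Rt, R)
    return max(abs(E[i][j] - I3[i][j]) for i in range(3) for j in range(3))

w, dt = (0.3, -1.2, 2.0), 0.01
R_lie, R_euler = I3, I3
for _ in range(500):
    R_lie = matmul(R_lie, expm_so3(w, dt))
    R_euler = madd(R_euler, matmul(R_euler, hat(w)), dt)
```

Designing both planning and feedback directly on the manifold, as the abstract describes, avoids re-orthogonalization hacks and the singularities of local parameterizations.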
Abstract: Enlightened by the fast-running gait of mammals like cheetahs and wolves, we design and fabricate a single-actuated untethered compliant robot that is capable of galloping at a speed of 313 millimeters per second, or 1.56 body lengths per second (BL/s), faster than most reported soft crawlers in both mm/s and BL/s. An in-plane prestressed hair clip mechanism (HCM) made of a semirigid material (plastic) serves simultaneously as the supporting chassis, the compliant spine, and the force amplifier of the robot, enabling it to be simple, rapid, and strong. With experiments, we find that the HCM robot’s locomotion speed is linearly related to actuation frequency and substrate friction differences (except on concrete surfaces), that tethering slows down the crawler, and that asymmetric actuation creates a new galloping gait. This paper demonstrates the potential of HCM-based soft robots.
Abstract: In nature, living creatures show versatile behaviors. They can move on various terrains and perform impressive object manipulation/transportation using their legs. Inspired by their morphologies and control strategies, we have developed bioinspired robots and adaptive modular neural control. In this video, we demonstrate our five bioinspired robots in our robot zoo setup. Inchworm-inspired robots with two electromagnetic feet (Freelander-02 and AVIS) can adaptively crawl and balance on horizontal and vertical metal pipes. With special design, the Freelander-02 robot can adapt its posture to crawl underneath an obstacle, while the AVIS robot can step over a flange. A millipede-inspired robot with multiple body segments (Freelander-08) can proactively adapt its body joints to efficiently navigate on bump terrain. A dung beetle–inspired robot (ALPHA) can transport an object by grasping it with its hind legs and at the same time walk backward with the remaining legs like dung beetles. Finally, an insect-inspired robot (MORF), which is a hexapod robot platform, demonstrates typical insectlike gaits (slow wave and fast tripod gaits). In a nutshell, we believe that this bioinspired robot zoo demonstrates how the diverse and fascinating abilities of living creatures can serve as inspiration and principles for developing robotics technology capable of achieving multiple robotic functions and solving complex motor control problems in systems with many degrees of freedom.
Abstract: AngGo is a hands-free shared indoor smart mobility device for public use. AngGo is a personal mobility device that is suitable for the movement of passengers in huge indoor spaces such as convention centers or airports. The user can use both hands freely while riding the AngGo. Unlike existing mobility devices, the mobility device can be maneuvered using the feet and was designed to be as intuitive as possible. The word “AngGo” is pronounced like a Korean word meaning “sit down and move.” There are 6 ToF distance sensors around AngGo. Half of them are in the front part and the other half are in the rear part. In the autonomous mode, AngGo avoids obstacles based on the distance from each sensor. IR distance sensors are mounted under the footrest to measure the extent to which the footrest is moved forward or backward, and these data are used to control the rotational speed of motors. The user can control the speed and the direction of AngGo simultaneously. The spring in the footrest generates force feedback, so the user can recognize the amount of variation.
Abstract: Since the Renaissance, artists have created artworks using novel techniques and machines, deviating from conventional methods. The robotic drawing system is one of such creative attempts that involves not only the artistic nature but also scientific problems that need to be solved. Robotic drawing problems can be viewed as planning the robot’s drawing path that eventually leads to the art form. The robotic pen-art system imposes new challenges, unlike robotic painting, requiring the robot to maintain stable contact with the target drawing surface. This video showcases an autonomous robotic system that creates pen art on an arbitrary canvas surface without restricting its size or shape. Our system converts raster or vector images into piecewise-continuous paths depending on stylistic choices, such as TSP art or stroke-based drawing. Our system consists of multiple manipulators with mobility and performs stylistic drawing tasks. In order to create a more extensive pen art, the mobile manipulator setup finds a minimal number of discrete configurations for the mobile platform to cover the ample canvas space. The dual manipulator setup can generate multicolor pen art using adaptive three-finger grippers with a pen-tool-change mechanism. We demonstrate that our system can create visually pleasing and complicated pen art on various surfaces.
Abstract: We developed a novel “Smart Bartender” system, which can understand the intention of users from eye gaze alone and take corresponding actions. In particular, we believe that a cyber-barman that cannot read our faces is not an intelligent one. We thus aim at building a novel cyber-barman by capturing and analyzing the intention of customers on the fly. Technically, the system enables the user to select a drink simply by staring at it. Then the robotic arm, mounted with a camera, will automatically grasp the target bottle and pour the liquid into the cup. To achieve this goal, we first adopt YOLO to detect candidate drinks. Then, GazeNet is utilized to generate a potential gaze center for grounding the target bottle with the minimum center-to-center distance. Finally, we use object pose estimation and path-planning algorithms to guide the robotic arm to grasp the target bottle and execute pouring. Our system, integrated with category-level object pose estimation, generalizes to various unseen bottles and cups that are not used for training. We believe our system would not only reduce intensive human labor in different service scenarios but also provide users with interactivity and enjoyment.
Abstract: The current state of robotics technology lacks a platform that can combine manipulation, aerial locomotion, and bipedal terrestrial locomotion. Therefore, we define aerial humanoid robotics as the outcome of platforms with these three capabilities. To implement aerial humanoid robotics on the humanoid robot iCub, we conduct research in different directions. This includes experimental research on jet turbines and codesign, which is necessary to implement aerial humanoid robotics on the real iCub. These activities aim to model and identify the jet turbines. We also investigate flight control of flying humanoid robots using Lyapunov-quadratic-programming-based control algorithms to regulate both the attitude and position of the robot. These algorithms work independently of the number of jet turbines installed on the robot and ensure satisfaction of physical constraints associated with the jet engines. In addition, we research computational fluid dynamics for aerodynamics modeling. Since the aerodynamics of a multibody system like a flying humanoid robot is complex, we use CFD simulations with Ansys to extract a simplified model for control design, as there is little space for closed-form expressions of aerodynamic effects.
Abstract: The goal of this research project (Consortium: Altatek GmbH, Eastern Switzerland University of Applied Sciences OST, Faculty of Law University of Zurich) was the development of a multifunctional, autonomous single-axle robot with an electric drive. The robot is customized for agricultural applications in mountainous areas with steepest slopes. The intention is to relieve farmers from arduous and safety-critical work. Furthermore, the robot is developed as a modular platform that can be used for work in forestry, municipal, sports fields, and winter/snow applications. Robot features: Core feature is the patented center of gravity control. With a sliding wheel axle of 800 millimeters, hills up to a steepness of 35 degrees (70 percent) can be easily driven and a safe operation without tipping can be ensured. To make the robot more sustainable, electric drives and a 48-volt battery were equipped. To navigate in mountainous areas, several sensors are used. In difference to applications on flat areas, the position and gradient of the robot on the slope needs to be measured and considered in the path planning. A sensor system that detects possible obstacles and especially humans or animals which could be in the path of the robot is currently under development.
Abstract: Surf zones are challenging for walking robots if they cannot anchor to the substrate, especially at the transition between dry sand and waves. Crablike dactyl designs enable robots to achieve this anchoring behavior while still being lightweight enough to walk on dry sand. Our group has been developing a series of crablike robots to achieve the transition from walking on underwater surfaces to walking on dry land. Compared with the default forward-moving gait, we find that inward-pulling gaits and sideways walking increase efficiency in granular media. By using soft dactyls, robots can probe the ground to classify substrates, which can help modify gaits to better suit the environment and recognize hazardous conditions. Dactyls can also be used to securely grasp the object and dig in the substrate for installing cables, searching for buried objects, and collecting sediment samples. To simplify control and actuation, we developed a four-degrees-of-freedom Klann mechanism robot, which can climb onto an object and then grasp it. In addition, human interfaces will improve our ability to precisely control the robot for these types of tasks. In particular, the U.S. government has identified munitions retrieval as an environmental priority through their Strategic Environmental Research Development Program. Our goal is to support these efforts with new robots.
Abstract: Tasks involving locally unstable or discontinuous dynamics (such as bifurcations and collisions) remain challenging in robotics, because small variations in the environment can have a significant impact on task outcomes. For such tasks, learning a robust deterministic policy is difficult. We focus on structuring exploration with multiple stochastic policies based on a mixture of experts (MoE) policy representation that can be efficiently adapted. The MoE policy is composed of stochastic subpolicies that allow exploration of multiple distinct regions of the action space (or strategies) and a high- level selection policy to guide exploration toward the most promising regions. We develop a robot system to evaluate our approach in a real-world physical problem-solving domain. After training the MoE policy in simulation, online learning in the real world demonstrates efficient adaptation within just a few dozen attempts, with a minimal sim2real gap. Our results confirm that representing multiple strategies promotes efficient adaptation in new environments and strategies learned under different dynamics can still provide useful information about where to look for good strategies.
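As a minimal illustration of the high-level idea, here is a hypothetical bandit-style sketch, far simpler than the paper's MoE policy: a softmax selection policy concentrates sampling on whichever stochastic subpolicy is earning reward. The strategies, reward function, and constants are all invented.

```python
import math
import random

random.seed(0)

# Three hypothetical "strategies": Gaussian subpolicies over a 1-D action.
MEANS = [-2.0, 0.0, 2.0]
SIGMA = 0.3
TARGET = 2.0              # unknown to the agent; reward = -|action - TARGET|

values = [0.0, 0.0, 0.0]  # running reward estimate per strategy
counts = [0, 0, 0]

def select(tau=0.5):
    """Softmax selection over strategy value estimates."""
    weights = [math.exp(v / tau) for v in values]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

for _ in range(300):
    i = select()
    action = random.gauss(MEANS[i], SIGMA)          # explore within subpolicy i
    reward = -abs(action - TARGET)
    counts[i] += 1
    values[i] += (reward - values[i]) / counts[i]   # incremental mean
```

After a handful of attempts, nearly all samples come from the strategy whose region of the action space pays off, which echoes the paper's point that keeping several distinct strategies alive makes adaptation in a new environment fast.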
Abstract: This paper tackles the task of singulating and grasping paperlike deformable objects. We refer to such tasks as paper-flipping. In contrast to manipulating deformable objects that lack compression strength (such as shirts and ropes), minor variations in the physical properties of the paperlike deformable objects significantly impact the results, making manipulation highly challenging. Here, we present Flipbot, a novel solution for flipping paperlike deformable objects. Flipbot allows the robot to capture object physical properties by integrating exteroceptive and proprioceptive perceptions that are indispensable for manipulating deformable objects. Furthermore, by incorporating a proposed coarse-to-fine exploration process, the system is capable of learning the optimal control parameters for effective paper-flipping through proprioceptive and exteroceptive inputs. We deploy our method on a real-world robot with a soft gripper and learn in a self-supervised manner. The resulting policy demonstrates the effectiveness of Flipbot on paper-flipping tasks with various settings beyond the reach of prior studies, including but not limited to flipping pages throughout a book and emptying paper sheets in a box. The code is available here: https://robotll.github.io/Flipbot/
Abstract: Crochet is a textile craft that has resisted mechanization and industrialization, except for a select number of one-off crochet machines. These machines are only capable of producing a limited subset of common crochet stitches. Crochet machines are not used in the textile industry, yet mass-produced crochet objects and clothes sold in stores like Target and Zara are almost certainly the products of crochet sweatshops. The popularity of crochet and the existence of crochet products in major chain stores show that there is both a clear demand for this craft and a need for it to be produced in a more ethical way. In this paper, we present Croche-Matic, a radial crochet machine for generating three-dimensional cylindrical geometry. The Croche-Matic is designed around the Magic Ring technique, a method for hand-crocheting 3D cylindrical objects. The machine consists of nine mechanical axes that work in sequence to complete different types of crochet stitches, and includes a sensor component for measuring and regulating yarn tension within the mechanical system. Croche-Matic can complete the four main stitches used in the Magic Ring technique. It has a success rate of 50.7 percent with single crochet stitches, and it has demonstrated an ability to create three-dimensional objects.
Abstract: This letter presents the first design of a soft and lightweight UAV, entirely 3D-printed in flexible filament. The drone’s flexible arms are equipped with a tendon-actuated bending system, which is used for applications that require physical interaction with the environment. The flexibility of the UAV can be controlled during the additive manufacturing process by adjusting the infill-rate (ρ_TPU) distribution. However, the increase in flexibility implies difficulties in controlling the UAV, as well as structural, aerodynamic, and aeroelastic effects. This article provides insight into the dynamics of the system and validates the flyability of the vehicle for densities as low as 6 percent. Within this range, quasi-static arm deformations can be considered; thus the autopilot is fed back through a static arm-deflection model. At lower densities, strong nonlinear elastic dynamics appear, which translates to complex modeling, and it is suggested to switch to data-based approaches. Moreover, this work demonstrates the ability of the soft UAV to perform full-body perching, specifically landing and stabilizing on pipelines and irregular surfaces without the need for an auxiliary system.
Abstract: Cargo drones are designed to carry payloads with predefined shape, size, and/or mass. This lack of flexibility requires a fleet of diverse drones tailored to specific cargo dimensions. Here we propose a new reconfigurable drone based on a modular design that adapts to different cargo shapes, sizes, and masses. We also propose a method for the automatic generation of drone configurations and suitable parameters for the flight controller. The parcel becomes the drone’s body, to which several individual propulsion modules are attached. We demonstrate the use of the reconfigurable hardware and the accompanying software by transporting parcels of different masses and sizes, requiring various numbers and positions of propulsion modules. The experiments are conducted indoors (with a motion-capture system) and outdoors (with an RTK-GNSS sensor). The proposed design represents a cheaper and more versatile alternative to solutions involving several drones for parcel transportation.
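The abstract does not spell out the configuration generator, but the flavor of the allocation problem can be shown with a heavily simplified, hypothetical 1-D version: given module positions along one axis of a parcel, solve for per-module thrusts that support the weight with zero net pitch torque, taking the minimum-norm solution of the two balance constraints. The layout and mass are made up.

```python
# 1-D toy allocation (assumed, not the paper's method): modules at
# positions x_i must satisfy sum(f) = m*g and sum(f*x) = 0.
# We take the minimum-norm solution f = A^T (A A^T)^-1 b with
# A = [[1, ..., 1], [x_1, ..., x_n]] and b = [m*g, 0].

def allocate(positions, mass, g=9.81):
    n = len(positions)
    sx = sum(positions)
    sxx = sum(x * x for x in positions)
    det = n * sxx - sx * sx       # determinant of the 2x2 matrix A A^T
    b1 = mass * g
    lam1 = (sxx * b1) / det       # solve (A A^T) lam = [b1, 0]
    lam2 = (-sx * b1) / det
    return [lam1 + lam2 * x for x in positions]

# Three modules on a 2-kg parcel, placed asymmetrically (made-up layout).
thrusts = allocate([-0.4, 0.1, 0.5], mass=2.0)
```

Modules far from the center of mass automatically take less load; a real system must also balance yaw and retune the flight controller for each configuration, which is what the authors' automatic generation addresses.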
This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
Does your robot know where it is right now? Does it? Are you sure? And what about all of its robot friends—do they know where they are too? This is important. So important, in fact, that some would say that multirobot simultaneous localization and mapping (SLAM) is a crucial capability to obtain timely situational awareness over large areas. Those some would be a group of MIT roboticists who just won the IEEE Transactions on Robotics Best Paper Award for 2022, presented at this year’s IEEE International Conference on Robotics and Automation (ICRA 2023), in London. Congratulations!
Out of more than 200 papers published in Transactions on Robotics last year, reviewers and editors voted to present the 2022 IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award to Yulun Tian, Yun Chang, Fernando Herrera Arias, Carlos Nieto-Granda, Jonathan P. How, and Luca Carlone from MIT for their paper Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems.
“The editorial board, and the reviewers, were deeply impressed by the theoretical elegance and practical relevance of this paper and the open-source code that accompanies it. Kimera-Multi is now the gold standard for distributed multirobot SLAM.”
—Kevin Lynch, editor in chief, IEEE Transactions on Robotics
Robots rely on simultaneous localization and mapping to understand where they are in unknown environments. But unknown environments are a big place, and it takes more than one robot to explore all of them. If you send a whole team of robots, each of them can explore their own little bit, and then share what they’ve learned with one another to make a much bigger map that they can all take advantage of. Like most things robot, this is much easier said than done, which is why Kimera-Multi is so useful and important. The award-winning researchers say that Kimera-Multi is a distributed system that runs locally on a bunch of robots all at once. If one robot finds itself in communications range with another robot, they can share map data, and use those data to build and improve a globally consistent map that includes semantic annotations.
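Kimera-Multi's actual machinery, robust distributed pose-graph optimization, is much richer, but the rendezvous-and-merge pattern can be caricatured in a few lines. The landmarks, noise levels, and count-weighted fusion below are all invented for illustration:

```python
import random

random.seed(1)

# Hypothetical 1-D landmarks with ground-truth positions.
TRUE = {"door": 4.0, "stairs": 9.0, "pillar": 15.0}

def local_map(names, noise):
    """Each robot's map: noisy position estimate plus observation count."""
    return {n: [TRUE[n] + random.gauss(0, noise), 5] for n in names}

robot_a = local_map(["door", "stairs"], noise=0.4)
robot_b = local_map(["stairs", "pillar"], noise=0.4)

def merge(m1, m2):
    """On rendezvous, fuse overlapping estimates (count-weighted mean)."""
    merged = {n: list(v) for n, v in m1.items()}
    for name, (x2, c2) in m2.items():
        if name in merged:
            x1, c1 = merged[name]
            merged[name] = [(x1 * c1 + x2 * c2) / (c1 + c2), c1 + c2]
        else:
            merged[name] = [x2, c2]
    return merged

# Neither robot saw everything; the merged map covers all three landmarks,
# and the shared landmark's estimate combines both robots' evidence.
global_map = merge(robot_a, robot_b)
```

The real system exchanges pose-graph factors rather than point estimates and must reject spurious inter-robot loop closures, which is where the "robust" in the paper's title does its work.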
Since filming the above video, the researchers have done real-world tests with Kimera-Multi. Below is an example of the map generated by three robots as they travel a total of more than 2 kilometers. You can easily see how the accuracy of the map improves significantly as the robots talk to each other:
More details and code are available on GitHub.
Transactions on Robotics also selected some excellent Honorable Mentions for 2022:
Stabilization of Complementarity Systems via Contact-Aware Controllers, by Alp Aydinoglu, Philip Sieg, Victor M. Preciado, and Michael Posa
Autonomous Cave Surveying With an Aerial Robot, by Wennie Tabib, Kshitij Goel, John Yao, Curtis Boirum, and Nathan Michael
Prehensile Manipulation Planning: Modeling, Algorithms and Implementation, by Florent Lamiraux and Joseph Mirabel
Rock-and-Walk Manipulation: Object Locomotion by Passive Rolling Dynamics and Periodic Active Control, by Abdullah Nazir, Pu Xu, and Jungwon Seo
Origami-Inspired Soft Actuators for Stimulus Perception and Crawling Robot Applications, by Tao Jin, Long Li, Tianhong Wang, Guopeng Wang, Jianguo Cai, Yingzhong Tian, and Quan Zhang
Political messaging expert Anat Shenker-Osorio breaks down the art of reframing the debate for progressives to win.
The post A Dmitri Rebuttal by Messaging Expert Anat Shenker-Osorio appeared first on The Intercept.
At IEEE, we know that the advancement of science and technology is the engine that drives the improvement of the quality of life for every person on this planet. Unfortunately, as we are all aware, today’s world faces significant challenges, including escalating conflicts, a climate crisis, food insecurity, gender inequality, and the approximately 2.7 billion people who cannot access the Internet.
The COVID-19 pandemic exposed the digital divide like never before. The world saw the need for universal broadband connectivity for remote work, online education, telemedicine, entertainment, and social networking. Those who had access thrived while those without it struggled. As millions of classrooms moved online, the lack of connectivity made it difficult for some students to participate in remote learning. Adults who could not perform their job virtually faced layoffs or reduced work hours.
The pandemic also exposed weaknesses in the global infrastructure that supports the citizens of the world. It became even more apparent that vital communications, computing, energy, and distribution infrastructure was not always equitably distributed, particularly in less developed regions.
I had the pleasure of presenting the 2023 IEEE President’s Award to Doreen Bogdan-Martin, secretary-general of the International Telecommunication Union, on 28 March, at ITU’s headquarters in Geneva. The award recognizes her distinguished leadership at the agency and her notable contributions to the global public.
It is my honor to recognize such a transformational leader and IEEE member for her demonstrated commitment to bridging the digital divide and to ensuring connectivity that is safe, inclusive, and affordable to all.
Nearly 45 percent of global households do not have access to the Internet, according to UNESCO. A report from UNICEF estimates that nearly two-thirds of the world’s schoolchildren lack Internet access at home.
This digital divide is particularly impactful on women, who are 23 percent less likely than men to use the Internet. According to UNESCO, in 10 countries across Africa, Asia, and South America, women are between 30 percent and 50 percent less likely than men to make use of the Internet.
Even in developed countries, Internet access is often lower than one might imagine. More than six percent of the U.S. population does not have a high-speed connection. In Australia, the figure is 13 percent. Globally, just over half of households have an Internet connection, according to UNESCO. In the developed world, 87 percent are connected, compared with 47 percent in developing nations and just 19 percent in the least developed countries.
As IEEE looks to lead the development of technology to tackle climate change and empower universal prosperity, it is essential that we recognize the role that meaningful connectivity and digital technology play in the organization’s goals to support global sustainability, drive economic growth, and transform health care, education, employment, gender equality, and youth empowerment.
IEEE members around the globe are continuously developing and applying technology to help solve these problems. It is that universal passion—to improve global conditions—that is at the heart of our mission, as well as our expanding partnerships and significant activities supporting the achievement of the U.N. Sustainable Development Goals.
One growing partnership is with the International Telecommunication Union, a U.N. specialized agency that helps set policy related to information and communication technologies. IEEE Member Doreen Bogdan-Martin was elected as ITU secretary-general and took office on 1 January, becoming the first woman to lead the 155-year-old organization. Bogdan-Martin is the recipient of this year’s IEEE President’s Award [see sidebar].
IEEE and ITU share the goal of bringing the benefits of technology to all of humanity. I look forward to working closely with the U.N. agency to promote meaningful connectivity, intensify cooperation to connect the unconnected, and strengthen the alignment of digital technologies with inclusive sustainable development.
I truly believe that one of the most important applications of technology is to improve people’s lives. For those in underserved regions of the world, technology can improve educational opportunities, provide better health care, alleviate suffering, and maintain human dignity.
Technology and technologists, particularly IEEE members, have a significant role to play in shaping life on this planet. They can use their skills to develop and advance technology—from green energy to reducing waste and emissions, and from transportation electrification to digital education, health, and agriculture. As a person who believes in the power of technology to benefit humanity, I find this to be a very compelling vision for our shared future.
Please share your thoughts with me: firstname.lastname@example.org.
IEEE president and CEO
This article appears in the June 2023 print issue as “Connecting the Unconnected.”
Earlier this week, I signed on to a short group statement, coordinated by the Center for AI Safety:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The press coverage has been extensive, and surprising to me. The New York Times headline is “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” BBC: “Artificial intelligence could lead to extinction, experts warn.” Other headlines are similar.
I actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war—which is to say, a risk worth taking seriously, but not something to panic over. Which is what I thought the statement said...
ESA invites European companies to submit proposals for commercial cargo transportation services to and from the International Space Station and future commercial low Earth orbit outposts.
For more than a century, utility companies have used electromechanical relays to protect power systems against damage that might occur during severe weather, accidents, and other abnormal conditions. But the relays could neither locate the faults nor accurately record what happened.
Then, in 1977, Edmund O. Schweitzer III invented the digital microprocessor-based relay as part of his doctoral thesis. Schweitzer’s relay, which could locate a fault to within 1 kilometer, set new standards for utility reliability, safety, and efficiency.
Schweitzer Engineering Laboratories
President and CTO
Purdue University, West Lafayette, Ind.; Washington State University, Pullman
To develop and manufacture his relay, he launched Schweitzer Engineering Laboratories in 1982 from his basement in Pullman, Wash. Today SEL manufactures hundreds of products that protect, monitor, control, and automate electric power systems in more than 165 countries.
Schweitzer, an IEEE Life Fellow, is his company’s president and chief technology officer. He started SEL with seven workers; it now has more than 6,000.
The 40-year-old employee-owned company continues to grow. It has four manufacturing facilities in the United States. Its newest one, which opened in March in Moscow, Idaho, fabricates printed circuit boards.
Schweitzer has received many accolades for his work, including the 2012 IEEE Medal in Power Engineering. In 2019 he was inducted into the U.S. National Inventors Hall of Fame.
Power system faults can happen when a tree or vehicle hits a power line, a grid operator makes a mistake, or equipment fails. The fault shunts extra current to some parts of the circuit, shorting it out.
Without a proper scheme or device in place to protect the equipment and ensure continuity of the power supply, an outage or blackout could propagate throughout the grid.
Overcurrent is not the only damage that can occur, though. Faults also can change voltages, frequencies, and the direction of current.
A protection scheme should quickly isolate the fault from the rest of the grid, thus limiting damage on the spot and preventing the fault from spreading to the rest of the system. To do that, protection devices must be installed.
That’s where Schweitzer’s digital microprocessor-based relay comes in. He perfected it in 1982. It later was commercialized and sold as the SEL-21 digital distance relay/fault locator.
Schweitzer says his relay was, in part, inspired by an event that took place during his first year of college.
“Back in 1965, when I was a freshman at Purdue University, a major blackout left millions without power for hours in the U.S. Northeast and Ontario, Canada,” he recalls. “It was quite an event, and I remember it well. I learned many lessons from it. One was how difficult it was to restore power.”
He says he also was inspired by the book Protective Relays: Their Theory and Practice. He read it while an engineering graduate student at Washington State University, in Pullman.
“I bought the book on the Thursday before classes began and read it over the weekend,” he says. “I couldn’t put it down. I was hooked.
“I realized that these solid-state devices were special-purpose signal processors. They read the voltage and current from the power systems and decided whether the power systems’ apparatuses were operating correctly. I started thinking about how I could take what I knew about digital signal processing and put it to work inside a microprocessor to protect an electric power system.”
The 4-bit and 8-bit microprocessors were new at the time.
“I think this is how most inventions start: taking one technology and putting it together with another to make new things,” he says. “The inventors of the microprocessor had no idea about all the kinds of things people would use it for. It is amazing.”
He says he was introduced to signal processing, signal analysis, and how to use digital techniques in 1968 while at his first job, working for the U.S. Department of Defense at Fort Meade, in Maryland.
Faster ways to clear faults and improve cybersecurity
Schweitzer continues to invent ways of protecting and controlling electric power systems. In 2016 his company released the SEL-T400L, which samples a power system every microsecond to detect the time between traveling waves moving at close to the speed of light. The idea is to quickly detect and locate transmission line faults.
The relay decides whether to trip a circuit or take other actions in 1 to 2 milliseconds. Previously, it would take a protective relay on the order of 16 ms. A typical circuit breaker takes 30 to 40 ms in high-voltage AC circuits to trip.
“The inventors of the microprocessor had no idea about all the kinds of things people would use it for. It is amazing.”
“I like to talk about the need for speed,” Schweitzer says. “In this day and age, there’s no reason to wait to clear a fault. Faster tripping is a tremendous opportunity from a point of view of voltage and angle stability, safety, reducing fire risk, and damage to electrical equipment.
“We are also going to be able to get a lot more out of the existing infrastructure by tripping faster. For every millisecond in clearing time saved, the transmission system stability limits go up by 15 megawatts. That’s about one feeder per millisecond. So, if we save 12 ms, all of the sudden we are able to serve 12 more distribution feeders from one part of one transmission system.”
The time-domain technology also will find applications in transformer and distribution protection schemes, he says, as well as have a significant impact on DC transmission.
What excites Schweitzer today, he says, is the concept of energy packets, which he and SEL have been working on. The packets measure energy exchange for all signals including distorted AC systems or DC networks.
“Energy packets precisely measure energy transfer, independent of frequency or phase angle, and update at a fixed rate with a common time reference such as every millisecond,” he says. “Time-domain energy packets provide an opportunity to speed up control systems and accurately measure energy on distorted systems—which challenges traditional frequency-domain calculation methods.”
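The core of the idea as described, integrating instantaneous power over fixed, time-referenced windows, can be sketched in a few lines. This is an illustrative sketch only, not SEL's implementation; the sample rate, window length, and signal parameters below are invented for the example.

```python
import math

FS = 100_000            # samples per second (assumed for illustration)
PACKET = FS // 1000     # samples per 1-millisecond packet

def energy_packets(v, i, dt=1.0 / FS):
    """Sum p = v*i over fixed 1 ms windows to get joules per packet.

    Because it works directly in the time domain, the result is the
    same for a clean sinusoid, a distorted AC waveform, or DC --
    no frequency or phase-angle assumptions are needed.
    """
    packets = []
    for start in range(0, len(v) - PACKET + 1, PACKET):
        e = sum(v[k] * i[k] for k in range(start, start + PACKET)) * dt
        packets.append(e)
    return packets

# 20 ms of a 60 Hz voltage and a 5th-harmonic-distorted current
t = [k / FS for k in range(2 * FS // 100)]
v = [170 * math.sin(2 * math.pi * 60 * tk) for tk in t]
i = [10 * math.sin(2 * math.pi * 60 * tk)
     + 2 * math.sin(2 * math.pi * 300 * tk) for tk in t]

packets = energy_packets(v, i)
# 20 packets of 1 ms each; their sum is the total energy transferred
assert len(packets) == 20
```

The fixed common time reference is what distinguishes this from a frequency-domain power calculation: every packet covers exactly the same wall-clock interval regardless of what the waveform is doing.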
He also is focusing on improving the reliability of critical infrastructure networks by improving cybersecurity, situational awareness, and performance. Plug-and-play and best-effort networking aren’t safe enough for critical infrastructure, he says.
“SEL OT SDN technology solves some significant cybersecurity problems,” he says, “and frankly, it makes me feel comfortable for the first time with using Ethernet in a substation.”
Schweitzer didn’t start off planning to launch his own company. He began a successful career in academia in 1977 after joining the electrical engineering faculty at Ohio University, in Athens. Two years later, he moved to Pullman, Wash., where he taught at Washington State’s Voiland College of Engineering and Architecture for the next six years. It was only after sales of the SEL-21 took off that he decided to devote himself to his startup full time.
It’s little surprise that Schweitzer became an inventor and started his own company, as his father and grandfather were inventors and entrepreneurs.
His grandfather, Edmund O. Schweitzer, who held 87 patents, invented the first reliable high-voltage fuse in collaboration with Nicholas J. Conrad in 1911, the year the two founded Schweitzer and Conrad—today known as S&C Electric Co.—in Chicago.
Schweitzer’s father, Edmund O. Schweitzer Jr., had 208 patents. He invented several line-powered fault-indicating devices, and he founded the E.O. Schweitzer Manufacturing Co. in 1949. It is now part of SEL.
Schweitzer says a friend gave him the best financial advice he ever got about starting a business: Save your money.
“I am so proud that our 6,000-plus-person company is 100 percent employee-owned,” Schweitzer says. “We want to invest in the future, so we reinvest our savings into growth.”
He advises those who are planning to start a business to focus on their customers and create value for them.
“Unleash your creativity,” he says, “and get engaged with customers. Also, figure out how to contribute to society and make the world a better place.”
As I read the newest papers about DNA-based computing, I had to confront a rather unpleasant truth. Despite being a geneticist who also majored in computer science, I was struggling to bridge two concepts—the universal Turing machine, the very essence of computing, and the von Neumann architecture, the basis of most modern CPUs. I had written C++ code to emulate the machine described in Turing’s 1936 paper, and could use it to decide, say, if a word was a palindrome. But I couldn’t see how such a machine—with its one-dimensional tape memory and ability to look at only one symbol on that tape at a time—could behave like a billion-transistor processor with hardware features such as an arithmetic logic unit (ALU), program counter, and instruction register.
I scoured old textbooks and watched online lectures about theoretical computer science, but my knowledge didn’t advance. I decided I would build a physical Turing machine that could execute code written for a real processor.
Rather than a billion-transistor behemoth, I thought I’d target the humble 8-bit 6502 microprocessor. This legendary chip powered the computers I used in my youth. And as a final proof, my simulated processor would have to run Pac-Man, specifically the version of the game written for the Apple II computer.
In Turing’s paper, his eponymous machine is an abstract concept with infinite memory. Infinite memory isn’t possible in reality, but physical Turing machines can be built with enough memory for the task at hand. The hardware implementation of a Turing machine can be organized around a rule book and a notepad. Indeed, when we do basic arithmetic, we use a rule book in our head (such as knowing when to carry a 1). We manipulate numbers and other symbols using these rules, stepping through the process for, say, long division. There are key differences, though. We can move all over a two-dimensional notepad, doing a scratch calculation in the margin before returning to the main problem. With a Turing machine we can only move left or right on a one-dimensional notepad, reading or writing one symbol at a time.
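The rule-book-and-notepad model above can be made concrete with a small simulator. This is an illustrative sketch, not the author's C++ code: a rule book maps (state, symbol) to (symbol to write, head move, next state), and this particular rule book decides whether a word over {a, b} is a palindrome by repeatedly erasing matching end symbols.

```python
# Rule book: (state, symbol) -> (write, head_move, next_state)
RULES = {
    # Erase the leftmost symbol and remember what it was.
    ("start", "a"): ("_", +1, "seek_end_a"),
    ("start", "b"): ("_", +1, "seek_end_b"),
    ("start", "_"): ("_", 0, "accept"),       # everything matched
    # Run right until the blank past the last symbol.
    ("seek_end_a", "a"): ("a", +1, "seek_end_a"),
    ("seek_end_a", "b"): ("b", +1, "seek_end_a"),
    ("seek_end_a", "_"): ("_", -1, "check_a"),
    ("seek_end_b", "a"): ("a", +1, "seek_end_b"),
    ("seek_end_b", "b"): ("b", +1, "seek_end_b"),
    ("seek_end_b", "_"): ("_", -1, "check_b"),
    # The last symbol must match the remembered first symbol.
    ("check_a", "a"): ("_", -1, "rewind"),
    ("check_a", "_"): ("_", 0, "accept"),      # odd-length middle symbol
    ("check_a", "b"): ("b", 0, "reject"),
    ("check_b", "b"): ("_", -1, "rewind"),
    ("check_b", "_"): ("_", 0, "accept"),
    ("check_b", "a"): ("a", 0, "reject"),
    # Walk back to the (new) leftmost symbol and start over.
    ("rewind", "a"): ("a", -1, "rewind"),
    ("rewind", "b"): ("b", -1, "rewind"),
    ("rewind", "_"): ("_", +1, "start"),
}

def is_palindrome(word):
    tape = {pos: c for pos, c in enumerate(word)}  # blank elsewhere
    head, state = 0, "start"
    while state not in ("accept", "reject"):
        write, move, state = RULES[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    return state == "accept"
```

Note how every operation is one symbol read, one symbol written, and a move of one cell, which is exactly why emulating a register machine this way is so slow.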
A key revelation for me was that the internal registers of the 6502 could be duplicated sequentially on the one-dimensional notepad using four symbols—0, 1, _ (or space), and $. The symbols 0 and 1 are used to store the actual binary data that would sit in a 6502’s register. The $ symbol is used to delineate different registers, and the _ symbol acts as a marker, making it easy to return to a spot in memory we’re working with. The main memory of the Apple II is emulated in a similar fashion.
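A sketch of this encoding: registers are laid out sequentially as binary strings separated by $ landmarks, and reading one back means walking the tape one symbol at a time. The register names are the 6502's real A, X, and Y, but the exact PureTuring layout (including the interleaving discussed below) differs; this is purely illustrative.

```python
def encode_registers(regs):
    """Lay registers out sequentially: $ 10110101 $ 00000010 $ ..."""
    tape = ["$"]
    for value in regs.values():
        tape += list(format(value, "08b")) + ["$"]
    return tape

def read_register(tape, index):
    """Walk the tape one symbol at a time, counting "$" landmarks,
    to recover the index-th register -- no random access allowed."""
    seen, bits = -1, ""
    for symbol in tape:
        if symbol == "$":
            seen += 1
            if seen > index:
                break
        elif seen == index:
            bits += symbol
    return int(bits, 2)

tape = encode_registers({"A": 0xB5, "X": 0x02, "Y": 0x7F})
assert read_register(tape, 0) == 0xB5
assert read_register(tape, 2) == 0x7F
```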
Apart from some flip-flops, a couple of NOT gates, and an up-down counter, the PureTuring machine uses only RAM and ROM chips—there are no logic chips. An Arduino board [bottom] monitors the RAM to extract display data. James Provost
Programming a CPU is all about manipulating the registers and transferring their contents to and from main memory using an instruction set. I could emulate the 6502’s instructions as chains of rules that acted on the registers, symbol by symbol. The rules are stored in a programmable ROM, with the output of one rule dictating the next rule to be used, what should be written on the notepad (implemented as a RAM chip), and whether we should read the next symbol or the previous one.
I dubbed my machine PureTuring. The ROM’s data outputs are connected to a set of flip-flops. Some of the flip-flops are connected to the RAM, to allow the next or previous symbol to be fetched. Others are connected to the ROM’s own address lines in a feedback loop that selects the next rule.
It turned out to be more efficient to interleave the bits of some registers rather than leaving them as separate 8-bit chunks. Creating the rule book to implement the 6502’s instruction set required 9,000 rules. Of these, 2,500 were created using an old-school method of writing them on index cards, and the rest were generated by a script. Putting this together took about six months.
Only some of the 6502 registers are exposed to programmers [green]; its internal, hidden registers [purple] are used to execute instructions. Below each register is shown how the registers are arranged, and sometimes interleaved, on the PureTuring’s “tape.” James Provost
To fetch a software instruction, PureTuring steps through the notepad using $ symbols as landmarks until it gets to the memory location pointed to by the program counter. The 6502 opcodes are one byte long, so by the time the eighth bit is read, PureTuring is in one of 256 states. Then PureTuring returns to the instruction register and writes the opcode there, before moving on to perform the instruction. A single instruction can take up to 3 million PureTuring clock cycles to fetch, versus one to six cycles for the actual 6502!
The 6502 uses a memory-mapped input/output system. This means that devices such as displays are represented as locations somewhere within main memory. By using an Arduino to monitor the part of the notepad that corresponds to the Apple II’s graphics memory, I could extract pixels and show them on an attached terminal or screen. This required writing a “dewozzing” function for the Arduino as the Apple II’s pixel data is laid out in a complex scheme. (Steve Wozniak created this scheme to enable the Apple II to fake an analog color TV signal with digital chips and keep the dynamic RAM refreshed.)
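The heart of any dewozzing routine is mapping a screen row to its scattered base address in memory; the interleave formula below is the widely documented Apple II hi-res layout, shown here as a Python sketch rather than the article's actual Arduino code, whose details may differ.

```python
def hires_row_base(y, page=0x2000):
    """Base address of hi-res screen row y (0-191) on page 1.

    Rows are not stored sequentially: the address interleaves three
    fields of the row number across memory (Wozniak's scheme).
    """
    return (page
            + (y % 8) * 0x400         # which of 8 interleaved banks
            + ((y // 8) % 8) * 0x80   # which row group within the bank
            + (y // 64) * 0x28)       # which third of the screen

assert hires_row_base(0) == 0x2000   # top row
assert hires_row_base(1) == 0x2400   # row 1 lands 0x400 bytes away
assert hires_row_base(64) == 0x2028  # middle third starts 0x28 in
```

Each row then holds 40 bytes of 7-pixel groups, which is a further unpacking step a full dewozzer has to handle.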
I could have inserted input from a keyboard into the notepad in a similar fashion, but I didn’t bother because actually playing Pac-Man on the PureTuring would require extraordinary patience: It took about 60 hours just to draw one frame’s worth of movement for the Pac-Man character and the pursuing enemy ghosts. A modification that moved the machine along the continuum toward a von Neumann architecture added circuitry to permit random access to a notepad symbol, making it unnecessary to step through all prior symbols. This adjustment cut the time to draw the game characters to a mere 20 seconds per frame!
PureTuring Part 1. Turing Machine Microprocessor. www.youtube.com
Looking forward, features can be added one by one, moving piecemeal from a Turing machine to a von Neumann architecture: Widen the bus to read eight symbols at a time instead of one, replace the registers in the notepad with hardware registers, add an ALU, and so on.
Now when I read papers and articles on DNA-based computing, I can trace each element back to something in a Turing machine or forward to a conventional architecture, running my own little mental machine along a conceptual tape!
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
We’ve just relaunched the IEEE Robots Guide over at RobotsGuide.com, featuring new robots, new interactives, and a complete redesign from the ground up. Tell your friends, tell your family, and explore nearly 250 robots in pictures and videos and detailed facts and specs, with lots more on the way!
The qualities that make a knitted sweater comfortable and easy to wear are the same things that might allow robots to better interact with humans. RobotSweater, developed by a research team from Carnegie Mellon University’s Robotics Institute, is a machine-knitted textile “skin” that can sense contact and pressure.
RobotSweater’s knitted fabric consists of two layers of conductive yarn made with metallic fibers to conduct electricity. Sandwiched between the two is a net-like, lace-patterned layer. When pressure is applied to the fabric—say, from someone touching it—the conductive yarn closes a circuit and is read by the sensors. In their research, the team demonstrated that pushing on a companion robot outfitted in RobotSweater told it which way to move or what direction to turn its head. When used on a robot arm, RobotSweater allowed a push from a person’s hand to guide the arm’s movement, while grabbing the arm told it to open or close its gripper. In future research, the team wants to explore how to program reactions from the swipe or pinching motions used on a touchscreen.
DEEP Robotics Co. yesterday announced that it has launched the latest version of its Lite3 robotic dog in Europe. The system combines advanced mobility and an open modular structure to serve the education, research, and entertainment markets, said the Hangzhou, China–based company.
Lite3’s announced price is US $2,900. It ships in September.
Estimating terrain traversability in off-road environments requires reasoning about complex interaction dynamics between the robot and these terrains. We propose a method that learns to predict traversability costmaps by combining exteroceptive environmental information with proprioceptive terrain interaction feedback in a self-supervised manner. We validate our method in multiple short- and large-scale navigation tasks on a large, autonomous all-terrain vehicle (ATV) on challenging off-road terrains, and demonstrate ease of integration on a separate large ground robot.
This work will be presented at the IEEE International Conference on Robotics and Automation (ICRA 2023) in London next week.
Sheet Metal Workers’ Local Union 104 has introduced a training course on automating and innovating field layout with the Dusty Robotics FieldPrinter system.
Apptronik has half of its general-purpose robot ready to go!
The other half is still a work in progress, but here’s progress:
A spotted-lanternfly-murdering robot is my kind of murdering robot.
ANYmal is rated IP67 for water resistance, but this still terrifies me.
Check out the impressive ankle action on this humanoid walking over squishy terrain.
Wing’s progress can be charted along the increasingly dense environments in which we’ve been able to operate: from rural farms to lightly populated suburbs to more dense suburbs to large metropolitan areas like Brisbane, Australia; Helsinki, Finland; and the Dallas Fort Worth metro area in Texas. Earlier this month, we did a demonstration delivery at Coors Field–home of the Colorado Rockies–delivering beer (Coors of course) and peanuts to the field. Admittedly, it wasn’t on a game day, but there were 1,000 people in the stands enjoying the kickoff party for AUVSI’s annual autonomous systems conference.
[ Wing ]
Pollen Robotics’ team will be going to ICRA 2023 in London! Come and meet us there to try teleoperating Reachy by yourself and give us your feedback!
[ Pollen Robotics ]
The most efficient drone engine is no engine at all.
[ MAVLab ]
Is your robot spineless? Should it be? Let’s find out.
[ UPenn ]
Looks like we’re getting closer to that robot butler.
[ Prisma Lab ]
This episode of the Robot Brains podcast features Raff D’Andrea, from Kiva, Verity, and ETH Zurich.
[ Robot Brains ]
Calling all robot fanatics! We are the creators of the Robots Guide, IEEE’s interactive site about robotics, and we need your help.
Today, we’re expanding our massive catalog to nearly 250 robots, and we want your opinion to decide which are the coolest, most wanted, and also creepiest robots out there.
To submit your votes, find robots on the site that are interesting to you and rate them based on their design and capabilities. Every Friday, we’ll crunch the votes to update our Robot Rankings.
Rate this robot: For each robot on the site, you can submit your overall rating, answer if you’d want to have this robot, and rate its appearance. IEEE Spectrum
May the coolest (or creepiest) robot win!
Our collection currently features 242 robots, including humanoids, drones, social robots, underwater vehicles, exoskeletons, self-driving cars, and more.
The Robots Guide features three rankings: Top Rated, Most Wanted, and Creepiest. IEEE Spectrum
You can explore the collection by filtering robots by category, capability, and country, or by sorting them by name, year, or size. You can also search robots by keyword.
In particular, check out some of the new additions, which could use more votes. These include some really cool robots like LOVOT, Ingenuity, GITAI G1, Tertill, Salto, Proteus, and SlothBot.
Each robot profile includes detailed tech specs, photos, videos, history, and some also have interactives that let you move and spin robots 360 degrees on the screen.
And note that these are all real-world robots. If you’re looking for sci-fi robots, check out our new Face-Off: Sci-Fi Robots game.
Today, we’re also relaunching the Robots Guide site with a fast and sleek new design, more sections and games, and thousands of photos and videos.
The new site was designed by Pentagram, the prestigious design consultancy, in collaboration with Standard, a design and technology studio.
The site is built as a modern, fully responsive Web app. It’s powered by Remix.run, a React-based Web framework, with structured content by Sanity.io and site search by Algolia.
The Robots Guide was designed for anyone interested in learning more about robotics, including robot enthusiasts, both experts and beginners, researchers, entrepreneurs, STEM educators, teachers, and students.
The foundation for the Robots Guide is IEEE’s Robots App, which was downloaded 1.3 million times and is used in classrooms and STEM programs all over the world.
The Robots Guide is an editorial product of IEEE Spectrum, the world’s leading technology and engineering magazine and the flagship publication of the IEEE. Thank you to the IEEE Foundation and our sponsors for their support, which enables all of the Robots Guide content to be open and free to everyone.
The most advanced manufacturers of computer processors are in the middle of the first big change in device architecture in a decade—the shift from finFETs to nanosheets. Another 10 years should bring about another fundamental change, where nanosheet devices are stacked atop each other to form complementary FETs (CFETs), capable of cutting the size of some circuits in half. But the latter move is likely to be a heavy lift, say experts. An in-between transistor called the forksheet might keep circuits shrinking without quite as much work.
The idea for the forksheet came from exploring the limits of the nanosheet architecture, says Julien Ryckaert, the vice president for logic technologies at Imec. The nanosheet’s main feature is its horizontal stacks of silicon ribbons surrounded by its current-controlling gate. Although nanosheets only recently entered production, experts were already looking for their limits years ago. Imec was tasked with figuring out “at what point nanosheet will start tanking,” he says.
Ryckaert’s team found that one of the main limitations to shrinking nanosheet-based logic is keeping the separation between the two types of transistor that make up CMOS logic. The two types—NMOS and PMOS—must maintain a certain distance to limit capacitance that saps the devices’ performance and power consumption. “The forksheet is a way to break that limitation,” Ryckaert says.
Instead of individual nanosheet devices, the forksheet scheme builds them as pairs on either side of a dielectric wall. (No, it doesn’t really resemble a fork much.) The wall allows the devices to be placed closer together without causing a capacitance problem, says Naoto Horiguchi, the director of CMOS technology at Imec. Designers could use the extra space to shrink logic cells, or they could use the extra room to build transistors with wider sheets leading to better performance, he says.
Leading-edge transistors are already transitioning from the fin field-effect transistor (FinFET) architecture to nanosheets. The ultimate goal is to stack two devices atop each other in a CFET configuration. The forksheet may be an intermediary step on the way. Imec
“CFET is probably the ultimate CMOS architecture,” says Horiguchi of the device that Imec expects to reach production readiness around 2032. But he adds that CFET “integration is very complex.” Forksheet reuses most of the nanosheet production steps, potentially making it an easier job, he says. Imec predicts it could be ready around 2028.
There are still many hurdles to leap over, however. “It’s more complex than initially thought,” Horiguchi says. From a manufacturing perspective, the dielectric wall is a bit of a headache. There are several types of dielectric used in advanced CMOS and several steps that involve etching it away. Making forksheets means etching those others without accidentally attacking the wall. And it’s still an open question which types of transistor should go on either side of the wall, Horiguchi says. The initial idea was to put PMOS on one side and NMOS on the other, but there may be advantages to putting the same type on both sides instead.
There was a time, decades really, when all it took to make a better computer chip were smaller transistors and narrower interconnects. That time’s long gone now, and although transistors will continue to get a bit smaller, simply making them so is no longer the point. The only way to keep up the exponential pace of computing now is a scheme called system technology co-optimization, or STCO, argued researchers at ITF World 2023 last week in Antwerp, Belgium. It’s the ability to break chips up into their functional components, use the optimal transistor and interconnect technology for each function, and stitch them back together to create a lower-power, better-functioning whole.
“This leads us to a new paradigm for CMOS,” says Imec R&D manager Marie Garcia Bardon. CMOS 2.0, as the Belgium-based nanotech research organization is calling it, is a complicated vision. But it may be the most practical way forward, and parts of it are already evident in today’s most advanced chips.
In a sense, the semiconductor industry was spoiled by the decades prior to about 2005, says Julien Ryckaert, R&D vice president at Imec. During that time, chemists and device physicists were able to regularly produce a smaller, lower-power, faster transistor that could be used for every function on a chip and that would lead to a steady increase in computing capability. But the wheels began to come off that scheme not long thereafter. Device specialists could come up with excellent new transistors, but those transistors weren’t making better, smaller circuits, such as the SRAM memory and standard logic cells that make up the bulk of CPUs. In response, chipmakers began to break down the barriers between standard cell design and transistor development. Called design technology co-optimization, or DTCO, the new scheme led to devices designed specifically to make better standard cells and memory.
But DTCO isn’t enough to keep computing going. The limits of physics and economic realities conspired to put barriers in the path to progressing with a one-size-fits-all transistor. For example, physical limits have prevented CMOS operating voltages from decreasing below about 0.7 volts, slowing down progress in power consumption, explains Anabela Veloso, principal engineer at Imec. Moving to multicore processors helped ameliorate that issue for a time. Meanwhile, input-output limits meant it became more and more necessary to integrate the functions of multiple chips onto the processor. So in addition to a system-on-chip (SoC) having multiple instances of processor cores, they also integrate network, memory, and often specialized signal-processing cores. Not only do these cores and functions have different power and other needs, they also can’t be made smaller at the same rate. Even the CPU’s cache memory, SRAM, isn’t scaling down as quickly as the processor’s logic.
Getting things unstuck is as much a philosophical shift as a collection of technologies. According to Ryckaert, STCO means looking at a system-on-chip as a collection of functions, such as power supply, I/O, and cache memory. “When you start reasoning about functions, you realize that an SoC is not this homogeneous system, just transistors and interconnect,” he says. “It is functions, which are optimized for different purposes.”
Ideally, you could build each function using the process technology best suited to it. In practice, that mostly means building each on its own sliver of silicon, or chiplet. Then you would bind those together using technology, such as advanced 3D stacking, so that all the functions act as if they were on the same piece of silicon.
Examples of this thinking are already present in advanced processors and AI accelerators. Intel’s high-performance computing accelerator Ponte Vecchio (now called Intel Data Center GPU Max) is made up of 47 chiplets built using processes from both Intel and Taiwan Semiconductor Manufacturing Co. AMD already uses different technologies for the I/O chiplet and compute chiplets in its CPUs, and it recently began separating out SRAM for the compute chiplet’s high-level cache memory.
Imec’s road map to CMOS 2.0 goes even further. The plan requires continuing to shrink transistors, moving power and possibly clock signals beneath a CPU’s silicon, and ever-tighter 3D-chip integration. “We can use those technologies to recognize the different functions, to disintegrate the SoC, and reintegrate it to be very efficient,” says Ryckaert.
Transistors will change form over the coming decade, but so will the metal that connects them. Ultimately, transistors could be stacked-up devices made of 2D semiconductors instead of silicon. Power delivery and other infrastructure could be layered beneath the transistors. Imec
Major chipmakers are already transitioning from the FinFET transistors that powered the last decade of computers and smartphones to a new architecture, nanosheet transistors [see “The Nanosheet Transistor Is the Next (and Maybe Last) Step in Moore’s Law”]. Ultimately, two nanosheet transistors will be built atop each other to form the complementary FET, or CFET, which Veloso says “represents the ultimate in CMOS scaling” [see “3D-Stacked CMOS Takes Moore’s Law to New Heights”].
As these devices scale down and change shape, one of the main goals is to drive down the size of standard logic cells. That is typically measured in “track height”—basically, the number of metal interconnect lines that can fit within the cell. Advanced FinFETs and early nanosheet devices are six-track cells. Moving to five tracks may require an interstitial design called a forksheet, which squeezes devices together more closely without necessarily making them smaller. CFETs will then reduce cells to four tracks or possibly fewer.
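To put rough numbers on track height: a standard cell’s height is simply its track count times the metal pitch. A minimal sketch in Python, using a hypothetical 24-nanometer metal pitch (an illustrative round number, not an industry figure):

```python
# Illustrative only: a standard cell's height is its track count times the
# metal pitch. The 24-nm pitch is a hypothetical round number.
METAL_PITCH_NM = 24

def cell_height_nm(tracks: int, pitch_nm: float = METAL_PITCH_NM) -> float:
    """Height of a logic cell spanning `tracks` metal interconnect lines."""
    return tracks * pitch_nm

for tracks, device in [(6, "FinFET / early nanosheet"),
                       (5, "forksheet"),
                       (4, "CFET")]:
    print(f"{device}: {tracks}-track cell is {cell_height_nm(tracks)} nm tall")
```

At a fixed pitch, going from six tracks to four shrinks cell height by a third, which is the whole point of the forksheet and CFET progression.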
Leading-edge transistors are already transitioning from the fin field-effect transistor (FinFET) architecture to nanosheets. The ultimate goal is to stack two devices atop each other in a CFET configuration. The forksheet may be an intermediary step on the way. Imec
According to Imec, chipmakers will be able to produce the finer features needed for this progression using ASML’s next generation of extreme-ultraviolet lithography. That tech, called high-numerical-aperture EUV, is under construction at ASML now, and Imec is next in line for delivery. Increasing numerical aperture, an optics term related to the range of angles over which a system can gather light, leads to more precise images.
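The benefit of a larger numerical aperture can be sketched with the classic Rayleigh resolution estimate, CD = k1 × λ / NA. The sketch below uses the 13.5-nanometer EUV wavelength and an illustrative process factor k1 of 0.4 (not an ASML or imec spec); 0.33 and 0.55 are the NA values of current and high-NA EUV tools:

```python
# Rayleigh resolution estimate: smallest printable feature CD = k1 * λ / NA.
# k1 = 0.4 is an illustrative process factor, not a quoted tool spec.
EUV_WAVELENGTH_NM = 13.5

def min_feature_nm(na: float, k1: float = 0.4) -> float:
    """Estimated critical dimension for a lithography tool of aperture `na`."""
    return k1 * EUV_WAVELENGTH_NM / na

print(f"0.33-NA EUV:         ~{min_feature_nm(0.33):.1f} nm features")
print(f"0.55-NA high-NA EUV: ~{min_feature_nm(0.55):.1f} nm features")
```

Raising NA from 0.33 to 0.55 cuts the minimum feature size by roughly 40 percent, other factors held equal.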
The basic idea in backside power-delivery networks is to remove all the interconnects that send power—as opposed to data signals—from above the silicon surface and place them below it. This should allow for less power loss, because the power-delivering interconnects can be larger and less resistive. It also frees up room above the transistor layer for signal-carrying interconnects, possibly leading to more compact designs [see “Next-Gen Chips Will Be Powered From Below”].
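A back-of-the-envelope way to see why larger power lines help: a wire’s resistance scales as ρL/(W·t), and the voltage lost along a rail is I × R. The dimensions below are hypothetical round numbers chosen only to illustrate the scaling, not real process figures:

```python
# Toy comparison of IR drop in a power rail routed above vs. below the
# transistors. All dimensions are hypothetical, for illustration only.
RHO_COPPER = 1.7e-8  # ohm·m, bulk copper resistivity

def wire_resistance(length_m, width_m, thickness_m, rho=RHO_COPPER):
    """Resistance of a rectangular wire: rho * L / (W * t)."""
    return rho * length_m / (width_m * thickness_m)

length = 100e-6  # a 100-µm power rail
frontside = wire_resistance(length, 20e-9, 40e-9)    # thin, narrow topside metal
backside = wire_resistance(length, 200e-9, 400e-9)   # 10x wider and thicker below

current = 1e-3  # 1 mA load
print(f"frontside IR drop: {current * frontside * 1e3:.2f} mV")
print(f"backside  IR drop: {current * backside * 1e3:.2f} mV")
```

Making the rail 10 times wider and 10 times thicker cuts its resistance, and therefore the IR drop, by a factor of 100.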
In the future, even more could be moved to the backside of the silicon. For example, so-called global interconnects—those that span (relatively) great distances to carry clock and other signals—could go beneath the silicon. Or engineers could add active power-delivery devices, such as electrostatic discharge safety diodes.
There are several ways to do 3D integration, but the most advanced today are wafer-to-wafer and die-to-wafer hybrid bonding [see “3 Ways 3D Chip Tech Is Upending Computing”]. These two provide the highest density of interconnections between two silicon dies. But these methods require that the two dies be designed together, so their functions and interconnect points align, allowing them to act as a single chip, says Anne Jourdain, principal member of the technical staff at Imec. Imec R&D is on track to be able to produce millions of 3D connections per square millimeter in the near future.
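For a sense of scale, a square grid of bond pads at a given pitch yields (1 mm / pitch)² connections per square millimeter, so a 1-micrometer pitch already gives a million. The pitches in this sketch are illustrative, not imec roadmap values:

```python
# Rough density estimate for hybrid bonding: a square grid of bond pads
# at a given pitch. The example pitches are illustrative only.
def connections_per_mm2(pitch_um: float) -> float:
    """Pad density for a square grid with the given pad pitch in micrometers."""
    pads_per_mm = 1000.0 / pitch_um  # pads along one 1-mm edge
    return pads_per_mm ** 2

for pitch in [10.0, 1.0, 0.5]:
    print(f"{pitch} µm pitch -> {connections_per_mm2(pitch):,.0f} connections/mm²")
```

Halving the pitch quadruples the connection density, which is why ever-finer hybrid bonding is central to the CMOS 2.0 reintegration story.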
CMOS 2.0 would take disaggregation and heterogeneous integration to the extreme. Depending on which technologies make sense for the particular applications, it could result in a 3D system that incorporates layers of embedded memory, I/O and power infrastructure, high-density logic, high drive-current logic, and huge amounts of cache memory.
Getting to that point will take not just technology development but also the tools and training to discern which technologies would actually improve a system. As Bardon points out, smartphones, servers, machine-learning accelerators, and augmented- and virtual-reality systems all have very different requirements and constraints. What makes sense for one might be a dead end for the other.
Stephen Cass: Welcome to Fixing the Future, an IEEE Spectrum podcast. This episode is brought to you by IEEE Xplore, the digital library with over 6 million technical documents and free search. I’m senior editor Stephen Cass, and today I’m talking with a former Spectrum editor, Sally Adee, about her new book, We Are Electric: The New Science of Our Body’s Electrome. Sally, welcome to the show.
Sally Adee: Hi, Stephen. Thank you so much for having me.
Cass: It’s great to see you again, but before we get into exactly what you mean by the body’s electrome and so on, I see that in researching this book, you actually got yourself zapped quite a bit in a number of different ways. So I guess my first question is: are you okay?
Adee: I mean, as okay as I can imagine being. Unfortunately, there’s no experimental sort of condition and control condition. I can’t see the self I would have been in the multiverse version of myself that didn’t zap themselves. So I think I’m saying yes.
Cass: The first question I have then is what is an electrome?
Adee: So the electrome is this word, I think, that’s been burbling around the bioelectricity community for a number of years. The first time it was committed to print was in a 2016 paper by this guy called Arnold De Loof, a researcher out in Europe. But before that, a number of the researchers I spoke to for this book told me that they had started to see it in papers that they were reviewing. And I think it wasn’t sort of defined consistently always, because there’s this idea that seems to be sort of bubbling to the top, bubbling to the surface, that there are these electrical properties that the body has, and they’re not just epiphenomena, and they’re not just in the nervous system. They’re not just action potentials, but there are electrical properties in every one of our cells, but also at the organ level, potentially at the sort of entire system level, that people are trying to figure out what they actually do.
And just as action potentials aren’t just epiphenomena, but actually our control mechanisms, they’re looking at how these electrical properties work in the rest of the body, like in the cells, membrane voltages and skin cells, for example, are involved in wound healing. And there’s this idea that maybe these are an epigenetic variable that we haven’t been able to conscript yet. And there’s such promise in it, but a lot of the research, the problem is that a lot of the research is being done across really far-flung scientific communities, some in developmental biology, some of it in oncology, a lot of it in neuroscience, obviously. But what this whole idea of the electrome is— I was trying to pull this all together because the idea behind the book is I really want people to just develop this umbrella of bioelectricity, call it the electrome, call it bioelectricity, but I kind of want the word electrome to do for bioelectricity research what the word genome did for molecular biology. So that’s basically the spiel.
Cass: So I want to surf back to a couple points you raised there, but first off, just for people who might not know, what is an action potential?
Adee: So the action potential is the electrical mechanism by which the nervous signal travels, either to actuate motion at the behest of your intent or to gain sensation and sort of perceive the world around you. And that’s the electrical part of the electrochemical nervous impulse. So everybody knows about neurotransmitters at the synapse and— well, not everybody, but probably Spectrum listeners. They know about the serotonin that’s released and all these other little guys. But the thing is you wouldn’t be able to have that release without the movement of charged particles called ions in and out of the nerve cell that actually send this impulse down and allow it to travel at a rate of speed that’s fast enough to let you yank your hand away from a hot stove when you’ve touched it, before you even sort of perceive that you did so.
Cass: So that actually brings me to my next question. So you may remember in some of Spectrum‘s editorial meetings, when we were deciding if a tech story was for us or not, that literally, we would often ask, “Where is the moving electron? Where is the moving electron?” But bioelectricity is not really based on moving electrons. It’s based on these ions.
Adee: Yeah. So let’s take the neuron as an example. So what you’ve got is— let me do like a— imagine a spherical cow for a neuron, okay? So you’ve got a blob and it’s a membrane, and that separates the inside of your cell from the outside of your cell. And this membrane is studded with, I think, tens of thousands of little pores called ion channels. And the pores are not just sieve pores. They’re not inert. They’re really smart. And they decide which ions they like. Now, let’s go to the ions. Ions are suffusing your extracellular fluid, all the stuff that bathes you. It’s basically the reason they say you’re 66 percent water or whatever. This is like sea water. It’s got sodium, potassium, calcium, etc., and these ions are charged particles.
So when you’ve got a cell, it likes potassium, the neuron, it likes potassium, it lets it in. It doesn’t really like sodium so much. It’s got very strong preferences. So in its resting state, which is its happy place, those channels allow potassium ions to enter. And those are probably where the electrons are, actually, because an ion, it’s got a plus-one charge or a minus-one charge based on— but let’s not go too far into it. But basically, the cell allows the potassium to come inside, and its resting state, which is its happy place, the separation of the potassium from the sodium causes, for all sorts of complicated reasons, a charge inside the cell that is minus 70 degree— sorry, minus 70 millivolts with respect to the extracellular fluid.
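For readers who want the arithmetic behind that minus 70 millivolts: the Goldman-Hodgkin-Katz equation estimates a membrane’s resting voltage from ion concentrations and relative permeabilities. This sketch uses textbook-style illustrative values for a mammalian neuron, not numbers from the conversation:

```python
import math

# Goldman-Hodgkin-Katz sketch of a neuron's resting membrane potential.
# Concentrations (mM) and permeability ratios are illustrative textbook
# values, not figures quoted in the interview.
R, T, F = 8.314, 310.0, 96485.0  # gas constant, body temperature (K), Faraday

def ghk_mv(K_out, K_in, Na_out, Na_in, Cl_out, Cl_in,
           pK=1.0, pNa=0.05, pCl=0.45):
    """Resting potential in millivolts; chloride flips sides (negative ion)."""
    num = pK * K_out + pNa * Na_out + pCl * Cl_in
    den = pK * K_in + pNa * Na_in + pCl * Cl_out
    return 1000.0 * (R * T / F) * math.log(num / den)

# Typical out/in values: K 5/140, Na 145/10, Cl 110/10 (all mM)
v = ghk_mv(5, 140, 145, 10, 110, 10)
print(f"estimated resting potential: {v:.0f} mV")
```

With these numbers the estimate lands in the minus-60-to-minus-70-millivolt range, dominated by the potassium gradient Adee describes.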
Cass: Before I read your book, I kind of had the idea that how neurons use electricity was, essentially, settled science, very well understood, all kind of squared away, and this was how the body used electricity. But even when it came to neurons, there’s a lot of fundamentals, kind of basic things about how neurons use electricity that we really only established relatively recently. Some of the research you’re talking about is definitely not a century-old kind of basic science about how these things work.
Adee: No, not at all. In fact, there was a paper released in 2018 that I didn’t include, which I’m really annoyed by. I just found it recently. Obviously, you can’t find all the papers. But it’s super interesting because it blends that whole sort of ionic basis of the action potential with another thing in my book that’s about how cell development is a little bit like a battery getting charged. Do you know how cells assume an electrical identity that may actually be in charge of the cell fate that they meet? And so we know abou— sorry, the book goes into more detail, but it’s like when a cell is stem or a fertilized egg, it’s depolarized. It’s at zero. And then when it becomes a nerve cell, it goes to that minus 70 that I was talking about before. If it becomes a fat cell, it’s at minus 50. If it’s musculoskeletal tissue, it goes to minus 90. Liver cells are like around minus 40. And so you’ve got real identitarian diversity, electrical diversity in your tissues, which has something to do with what they end up doing in the society of cells. So this paper that I was talking about, the 2018 paper, they actually looked at neurons. This was work from Denis Jabaudon at the University of Geneva, and they were looking at how neurons actually differentiate. Because when baby neurons are born— your brain is made of all kinds of cells. It’s not just cortical cells. There’s a staggering variety of classes of neurons. And as cells actually differentiate, you can watch their voltage change, just like you can do in the rest of the body with these electrosensitive dyes. So that’s an aspect of the brain that we hadn’t even realized until 2018.
Cass: And that all leads me to my next point, which is if we think bioelectricity, we think, okay, nerves zapping around. But neurons are not the only bioelectric network in the body. So talk about some of the other sorts of electrical networks we have that are completely or largely separate from our neural networks?
Adee: Well, so Michael Levin is a professor at Tufts University. He does all kinds of other stuff, but mainly, I guess, he’s like the Paul Erdos of bioelectricity, I like to call him, because he’s sort of the central node. He’s networked into everybody, and I think he’s really trying to, again, also assemble this umbrella of bioelectricity to study this all in the aggregate. So his idea is that we are really committed to this idea of bioelectricity being in charge of our sort of central communications network, the way that we understand the environment around us and the way that we understand our ability to move and feel within it. But he thinks that bioelectricity is also how— that the nervous system kind of hijacked this mechanism, which is way older than any nervous system. And he thinks that we have another underlying network that is about our shape, and that this is bioelectrically mediated in really important ways, which impacts development, of course, but also wound healing. Because if you think about the idea that your body understands its own shape, what happens when you get a cut? How does it heal it? It has to go back to some sort of memory of what its shape is in order to heal it over. In animals that regenerate, they have a completely different electrical profile after they’ve been—so after they’ve had an arm chopped off.
So it’s a very different electrical— yeah, it’s a different electrical process that allows a starfish to regrow a limb than the one that allows us to scar over. So you’ve got this thing called a wound current. Your skin cells are arranged in this real tight wall, like little soldiers, basically. And what’s important is that they’re polarized in such a way that if you cut your skin, all the sort of ions flow out in a certain way, which creates this wound current, which then generates an electric field, and the electric field acts like a beacon. It’s like a bat signal, right? And it guides in these little helper cells, the macrophages that come and gobble up the mess and the keratinocytes and the guys who build it back up again and scar you over. And it starts out strong, and as you scar over, as the wound heals, it very slowly goes away. By the time the wound is healed, there’s no more field. And what was super interesting is this guy, Richard Nuccitelli, invented this thing called the Dermacorder that’s able to sense and evaluate the electric field. And he found that in people over the age of 65, the wound field is less than half of what it is in people under 25. And that actually goes in line with another weird thing about us, which is that our bioelectricity— or sorry, our regeneration capabilities are time-dependent and tissue-dependent.
So you probably know that the intestinal tissue regenerates all the time. You’re going to digest next week’s food with totally different cells than this morning’s food. But also, we’re time-dependent because when we’re just two cells, if you cleave that in half, you get identical twins. Later on during fetal development, it’s totally scarless, which is something we found out, because when we started being able to do fetal surgery in the womb, it was determined that we heal, basically, scarlessly. Then we’re born, and then between the ages of 7 and 11— until we are between the ages of 7 and 11, you chop off a fingertip, it regenerates perfectly, including the nail, but we lose that ability. And so it seems like the older we get, the less we regenerate. And so they’re trying to figure out now how— various programs are trying to figure out how to try to take control of various aspects of our sort of bioelectrical systems to do things like radically accelerate healing, for example, or how to possibly re-engage the body’s developmental processes in order to regenerate preposterous things like a limb. I mean, it sounds preposterous now. Maybe in 20 years, it’ll just be.
Cass: I want to get into some of the technologies that people are thinking of building on this sort of new science. Part of it is that the history of this field, both scientifically and technologically, has really been plagued by the shadow of quackery. And can you talk a little bit about this and how, on the one hand, there’s been some things we’re very glad that we stopped doing some very bad ideas, but it’s also had this shadow on sort of current research and trying to get real therapies to patients?
Adee: Yeah, absolutely. That was actually one of my favorite chapters to write, was the spectacular pseudoscience one, because, I mean, that is so much fun. So it can be boiled down to the fact that we were trigger happy because we see this electricity, we’re super excited about it. We start developing early tools to start manipulating it in the 1700s. And straight away, it’s like, this is an amazing new tool, and there’s all these sort of folk cures out there that we then decide that we’re going to take— not into the clinic. I don’t know what you’d call it, but people just start dispensing this stuff. This is separate from the discovery of endogenous electrical activity, which is what Luigi Galvani famously discovered in the late 1700s. He starts doing this. He’s an anatomist. He’s not an electrician. Electrician, by the way, is what they used to call the sort of literati who were in charge of discovery around electricity. And it had a really different connotation at the time, that they were kind of like the rocket scientists of their day.
But Galvani’s just an anatomist, and he starts doing all of these experiments using these new tools to zap frogs in various ways and permutations. And he decides that he has answered a whole different old question, which is how does man’s will animate his hands and let him feel the world around him? And he says, “This is electrical in nature.” This is a long-standing mystery. People have been bashing their heads against it for the past 100, 200 years. But he says that this is electrical, and there’s a big, long fight. I won’t get into too much between Volta, the guy who invented the battery, and Galvani. Volta says, “No, this is not electrical.” Galvani says, “Yes, it is.” But owing to events, when Volta invents the battery, he basically wins the argument, not because Galvani was wrong, but because Volta had created something useful. He had created a tool that people could use to advance the study of all kinds of things. Galvani’s idea that we have an endogenous electrical sort of impulse, it didn’t lead to anything that anybody could use because we didn’t have tools sensitive enough to really measure it. We only sort of had indirect measurements of it.
And his nephew, after he dies in ignominy, his nephew decides to take it on himself to rescue, single-handedly, his uncle’s reputation. The problem is, the way he does it is with a series of grotesque, spectacular experiments. He very famously reanimated— well, zapped until they shivered, the corpses of all these dead guys, dead criminals, and he was doing really intense things like sticking electrodes connected to huge voltaic piles (proto-batteries) into the rectums of dead prisoners, which would make them sit up halfway and point at the people who were assembled, this very titillating stuff. Many celebrities of the time would crowd around these demonstrations.
Anyway, so Galvani basically—or sorry, Aldini, the nephew, basically just opens the door to everyone to be like, “Look what we can do with electricity.” Then in short order, there’s a guy who creates something called the Celestial Bed, which is a thing— they’ve got rings, they’ve got electric belts for stimulating the nethers. The Celestial Bed is supposed to help infertile couples. This is how sort of just wild electricity is in those days. It’s kind of like— you know how everybody went crazy for crypto scams last year? Electricity was like the crypto of 1828 or whatever, 1830s. And the Celestial Bed, so people would come and they would pay £9,000 to spend a night in it, right? Well, not at the time. That’s in today’s money. And it didn’t even use electricity. It used the idea of electricity. It was homeopathy, but electricity. You don’t even know where to start. So this is the sort of caliber of pseudoscience, and this is really echoed down through the years. That was in the 1800s. But when people submit papers or grant applications, I heard more than one researchers say to me— people would look at this electric stuff, and they’d be like, “Does anyone still believe this shit?” And it’s like, this is rigorous science, but it’s been just tarnished by the association with this.
Cass: So you mentioned wound care, and the book talks about some of the ways [inaudible] wound care. But we’re also looking at other really ambitious ideas like regenerating limbs as part of this extension of wound care. And also, you make the point of certainly doing diagnostics and then possibly treatments for things like cancer, thinking about cancer in a very different way than the really very, very tightly-focused genetic view we have of cancer now, and thinking about it kind of literally in a wider context. So can you talk about that a little bit?
Adee: Sure. And I want to start by saying that I went to a lot of trouble to be really careful in the book. I think cancer is one of those things that— I’ve had cancer in my family, and it’s tough to talk about it because you don’t want to give people the idea that there’s a cure for cancer around the corner when this is basic research and intriguing findings, because it’s not fair. And I sort of struggled. I thought for a while, like, “Do I even bring this up?” But the ideas behind it are so intriguing, and if there were more research dollars thrown at it, or pounds or whatever, Swiss francs, you might be able to really start moving the needle on some of this stuff. The idea is, there are two electrical— oh God, I don’t want to say avenues, but it is unfortunately what I have to do. There are two electrical avenues to pursue in cancer. The first one is something that a researcher called Mustafa Djamgoz at Imperial College here in the UK, he has been studying this since the ‘90s. Because he used to be a neurobiologist. He was looking at vision. And he was talking to some of his oncologist friends, and they gave him some cancer cell lines, and he started looking at the behavior of cancer cells, the electrical behavior of cancer cells, and he started finding some really weird behaviors.
Cancer cells that should not have had anything to do with action potentials, like from prostate cancer lines, when he looked at them, they were oscillating like crazy, as if they were nerves. And then he started looking at other kinds of cancer cells, and they were all oscillating, and they were doing this oscillating behavior. So he spent like seven years sort of bashing his head against the wall. Nobody wanted to listen to him. But now, way more people are now investigating this. There’s going to be an ion-channels-in-cancer symposium, I think, later this month, actually, in Italy. And he found, and a lot of other researchers like this woman, Annarosa Arcangeli, they have found that the reason that cancer cells may have these oscillating properties is that this is how they communicate with each other that it’s time to leave the nest of the tumor and start invading and metastasizing. Separately, there have been very intriguing— this is really early days. It’s only a couple of years that they’ve started noticing this, but there have been a couple of papers now. People who are on certain kinds of ion channel blockers for neurological conditions like epilepsy, for example, they have cancer profiles that are slightly different from normal, which is that if they do get cancer, they are slightly less likely to die of it. In the aggregate. Nobody should be starting to eat ion channel blockers.
But they’re starting to zero in on which particular ion channels might be responsible, and it’s not just one that you and I have. These cancer kinds, they are like an expression of something that normally only exists when we’re developing in the womb. It’s part of the reason that we can grow ourselves so quickly, which of course, makes sense because that’s what cancer does when it metastasizes, it grows really quickly. So there’s a lot of work right now trying to identify how exactly to target these. And it wouldn’t be a cure for cancer. It would be a way to keep a tumor in check. And this is part of a strategy that has been proposed in the UK a little bit for some kinds of cancer, like the triple-negative kind that just keep coming back. Instead of subjecting someone to radiation and chemo, especially when they’re older, sort of just really screwing up their quality of life while possibly not even giving them that much more time, what if instead you sort of tried to treat cancer more like a chronic disease, keep it managed, and maybe that gives a person like 10 or 20 years? That’s a huge amount of time, while not messing up their quality of life.
This is a whole conversation that’s being had, but that’s one avenue. And there’s a lot of research going on in this right now that may yield fruit sort of soon. The much more sci-fi version of this, the studies have mainly been done in tadpoles, but they’re so interesting. So Michael Levin, again, and his postdoc at the time, I think, Brook Chernet, they were looking at what happens— so it’s uncontroversial that as a cancer cell— so let’s go back to that society of cells thing that I was talking about. You get a fertilized egg, it’s depolarized, zero, but then its membrane voltage charges, and it becomes a nerve cell or skin cell or a fat cell. What’s super interesting is that when those responsible members of your body’s society decide to abscond and say, “Screw this. I’m not participating in society anymore. I’m just going to eat and grow and become cancer,” their membrane voltage also changes. It goes much closer to zero again, almost like it’s having a midlife crisis or whatever.
So what they found, what Levin and Chernet found is that you can manipulate those cellular electrics to make the cell stop behaving cancerously. And so they did this in tadpoles. They had genetically engineered the tadpoles to express tumors, but when they made sure that the cells could not depolarize, most of those tadpoles did not express the tumors. And when they later took tadpoles that already had the tumors and they repolarized the voltage, those tumors, that tissue started acting like normal tissue, not like cancer tissue. But again, this is the sci-fi stuff, but the fact that it was done at all is so fascinating, again, from that epigenetic sort of body pattern perspective, right?
Cass: So sort of staying with that sci-fi stuff, except this one is even closer to reality. And this goes back to some of these experiments in which you zapped yourself. Can you talk a little bit about some of these sorts of devices that you can wear which appear to really enhance certain mental abilities? And some of these you [inaudible].
Adee: So the kit that I wore, I actually found out about it while I was at Spectrum, when I was at DARPATech. And this program manager told me about it, and I was really stunned to find out that just by running two milliamps of current through your brain, you would be able to improve your— well, it’s not that your ability is improved. It was that you could go from novice to expert in half the time that it would take you normally, according to the papers. And so I really wanted to try it. I was trying to actually get an expert feature written for IEEE Spectrum, but they kept ghosting me, and then by the time I got to New Scientist, I was like, fine, I’m just going to do it myself. So they let me come over, and they put this kit on me, and it was these very sort of custom electrodes, these things, they look like big daisies. And this guy had brewed his own electrolyte solution and sort of smashed it onto my head, and it was all very slimy.
So I was doing this video game called DARWARS Ambush!, which is just like a training— it’s a shooter simulation to help you with shooting. So it was a Gonzo stunt. It was not an experiment. But he was trying to replicate the conditions of me not knowing whether the electricity was on as much as he could. So he had it sort of behind my back, and he came in a couple of times and would either pretend to turn it on or whatever. And I was practicing and I was really bad at it. That is not my game. Let’s just put it that way. I prefer driving games. But it was really frustrating as well because I never knew when the electricity was on. So I was just like, “There’s no difference. This sucks. I’m terrible.” And that sort of inner sort of buzz kept getting stronger and stronger because I’d also made bad choices. I’d taken a red-eye flight the night before. And I was like, “Why would I do that? Why wouldn’t I just give myself one extra day to recover before I go in and do this really complicated feature where I have to learn about flow state and electrical stimulation?” And I was just getting really tense and just angrier and angrier. And then at one point, he came in after my, I don’t know, 5th or 6th, I don’t know, 400th horrible attempt where I just got blown up every time. And then he turned on the electricity, and I could totally feel that something had happened because I have a little retainer in my mouth just at the bottom. And I was like, “Whoa.” But then I was just like, “Okay. Well, now this is going to suck extra much because I know the electricity is on, so it’s not even a freaking sham condition.” So I was mad.
But then the thing started again, and all of a sudden, all the sort of buzzing little angry voices just stopped, and it was so profound. And I’ve talked about it quite a bit, but every time I remember it, I get a little chill because it was the first time I’d ever realized, number one, how pissy my inner voices are and just how distracting they are and how abusive they are. And I was like, “You guys suck, all of you.” But somebody had just put a bell jar between me and them, and that feeling of being free from them was profound. At first, I didn’t even notice because I was just busy doing stuff. And all of a sudden, I was amazing at this game and I dispatched all of the enemies and whatnot, and then afterwards, when they came in, I was actually pissed because I was just like, “Oh, now I get it right and you come in after three minutes. But the last times when I was screwing it up, you left me in there to cook for 20 minutes.” And they were like, “No, 20 minutes has gone by,” which I could not believe. But yeah, it was just a really fairly profound experience, which is what led me down this giant rabbit hole in the first place. Because when I wrote the feature afterwards, all of a sudden I started paying attention to the whole TDCS thing, which I hadn’t yet. I had just sort of been focusing [crosstalk].
Cass: And that’s transcranial—?
Adee: Oh sorry, transcranial direct current stimulation.
Cass: There you go. Thank you. Sorry.
Adee: No. Yeah, it’s a mouthful. But then that’s when I started to notice that quackery we were talking about before. All that history was really informing the discussion around it because people were just like, “Oh, sure. Why don’t you zap your brain with some electricity and you become super smart.” And I was like, “Oh, did I like fall for the placebo effect? What happened here?” And there was this big study from Australia where the guy was just like, “When we average out all of the effects of TDCS, we find that it does absolutely nothing.” Other guys stimulated a cadaver to see if it would even reach the brain tissue and concluded it wouldn’t. But that’s basically what started me researching the book, and I was able to find answers to all those questions. But of course, TDCS, I mean, it’s finicky just like the electrome. It’s like your living bone is conductive. So when you’re trying to put an electric field on your head, basically, you have to account for things like how thick is that person’s skull in the place that you want to stimulate. They’re still working out the parameters.
There have been some really good studies that show sort of under which particular conditions they’ve been able to make it work. It does not work for all conditions for which it is claimed to work. There is some snake oil. There’s a lot left to be done, but a better understanding of how this affects the different layers of the sort of, I guess, call it, electrome, would probably make it something that you could use replicably. Is that a word? But also, that applies to things like deep brain stimulation, which, also, for Parkinson’s, it’s fantastic. But they’re trying to use it for depression, and in some cases, it works so—I want to use a bad word—amazingly. Just Helen Mayberg, who runs these trials, she said that for some people, this is an option of last resort, and then they get the stimulation, and they just get back on the bus. That’s her quote. And it’s like a switch that you flip. And for other people, it doesn’t work at all.
Cass: Well, the book is packed with even more fantastic stuff, and I’m sorry we don’t have time to go through it, because literally, I could sit here and talk to you all day about this.
Adee: I didn’t even get into the frog battery, but okay, that’s fine. Fine, fine, skip the frog. Sorry, I’m just kidding. I’m kidding, I’m kidding.
Cass: And thank you so much, Sally, for chatting with us today.
Adee: Oh, thank you so much. I really love talking about it, especially with you.
Cass: Today on Fixing the Future, we’re talking with Sally Adee about her new book on the body’s electrome. For IEEE Spectrum I’m Stephen Cass.
First-year college students are understandably frustrated when they can’t get into popular upper-level electives. But they usually just gripe. Paras Jha was an exception. Enraged that upper-class students were given priority to enroll in a computer-science elective at Rutgers, the State University of New Jersey, Paras decided to crash the registration website so that no one could enroll.
On Wednesday night, 19 November 2014, at 10:00 p.m. EST—as the registration period for first-year students in spring courses had just opened—Paras launched his first distributed denial-of-service (DDoS) attack. He had assembled an army of some 40,000 bots, primarily in Eastern Europe and China, and unleashed them on the Rutgers central authentication server. The botnet sent thousands of fraudulent requests to authenticate, overloading the server. Paras’s classmates could not get through to register.
The next semester Paras tried again. On 4 March 2015, he sent an email to the campus newspaper, The Daily Targum: “A while back you had an article that talked about the DDoS attacks on Rutgers. I’m the one who attacked the network.… I will be attacking the network once again at 8:15 pm EST.” Paras followed through on his threat, knocking the Rutgers network offline at precisely 8:15 p.m.
On 27 March, Paras unleashed another assault on Rutgers. This attack lasted four days and brought campus life to a standstill. Fifty thousand students, faculty, and staff had no computer access from campus.
On 29 April, Paras posted a message on Pastebin, a website popular with hackers for sending anonymous messages. “The Rutgers IT department is a joke,” he taunted. “This is the third time I have launched DDoS attacks against Rutgers, and every single time, the Rutgers infrastructure crumpled like a tin can under the heel of my boot.”
Paras was furious that Rutgers chose Incapsula, a small cybersecurity firm based in Massachusetts, as its DDoS-mitigation provider. He claimed that Rutgers chose the cheapest company. “Just to show you the poor quality of Incapsula’s network, I have gone ahead and decimated the Rutgers network (and parts of Incapsula), in the hopes that you will pick another provider that knows what they are doing.”
Paras’s fourth attack on the Rutgers network, taking place during finals, caused chaos and panic on campus. Paras reveled in his ability to shut down a major state university, but his ultimate objective was to force it to abandon Incapsula. Paras had started his own DDoS-mitigation service, ProTraf Solutions, and wanted Rutgers to pick ProTraf over Incapsula. And he wasn’t going to stop attacking his school until it switched.
Paras Jha was born and raised in Fanwood, a leafy suburb in central New Jersey. When Paras was in the third grade, a teacher recommended that he be evaluated for attention deficit hyperactivity disorder, but his parents didn’t follow through.
As Paras progressed through elementary school, his struggles increased. Because he was so obviously intelligent, his teachers and parents attributed his lackluster performance to laziness and apathy. His perplexed parents pushed him even harder.
Paras sought refuge in computers. He taught himself how to code when he was 12 and was hooked. His parents happily indulged this passion, buying him a computer and providing him with unrestricted Internet access. But their indulgence led Paras to isolate himself further, as he spent all his time coding, gaming, and hanging out with his online friends.
Paras was particularly drawn to the online game Minecraft. In ninth grade, he graduated from playing Minecraft to hosting servers. It was in hosting game servers that he first encountered DDoS attacks.
Minecraft server administrators often hire DDoS services to knock rivals offline. As Paras learned more sophisticated DDoS attacks, he also studied DDoS defense. As he became proficient in mitigating attacks on Minecraft servers, he decided to create ProTraf Solutions.
Paras’s obsession with Minecraft attacks and defense, compounded by his untreated ADHD, led to an even greater retreat from family and school. His poor academic performance in high school frustrated and depressed him. His only solace was Japanese anime and the admiration he gained from the online community of Minecraft DDoS experts.
Paras’s struggles deteriorated into paralysis when he enrolled in Rutgers, studying for a B.S. in computer science. Without his mother’s help, he was unable to regulate the normal demands of living on his own. He could not manage his sleep, schedule, or study. Paras was also acutely lonely. So he immersed himself in hacking.
Paras and two hacker friends, Josiah White and Dalton Norman, decided to go after the kings of DDoS—a gang known as VDoS. The gang had been providing these services to the world for four years, which is an eternity in cybercrime. The decision to fight experienced cybercriminals may seem brave, but the trio were actually older than their rivals. The VDoS gang members had been only 14 years old when they started to offer DDoS services from Israel in 2012. These 19-year-old American teenagers would be going to battle against two 18-year-old Israeli teenagers. The war between the two teenage gangs would not only change the nature of malware. Their struggle for dominance in cyberspace would create a doomsday machine.
The Mirai botnet, with all its devastating potential, was not the product of an organized-crime or nation-state hacking group—it was put together by three teenage boys. They rented out their botnet to paying customers to do mischief with and used it to attack chosen targets of their own. But the full extent of the danger became apparent only later, after this team made the source code for their malware public. Then others used it to do greater harm: crashing Germany’s largest Internet service provider; attacking Dyn’s Domain Name System servers, making the Internet unusable for millions; and taking down all of Liberia’s Internet—to name a few examples.
The Mirai botnet exploited vulnerable Internet of Things devices, such as Web-connected video cameras, ones that supported Telnet, an outdated system for logging in remotely. Owners of these devices rarely updated their passwords, so they could be easily guessed using a strategy called a dictionary attack.
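To illustrate why unchanged default passwords make a dictionary attack so cheap, here is a toy sketch. The credential list is a small hypothetical sample of the kind of factory defaults such malware carried, and the "login" is a local function rather than any network operation:

```python
# A hypothetical sample of factory-default credentials; real IoT malware
# shipped with a similarly short, fixed list.
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("root", "root"),
    ("root", "12345"),
    ("admin", "password"),
    ("user", "user"),
]

def try_login(device_user, device_pass):
    """Try each default pair in order; return the match (or None) and
    the number of attempts used. Purely local, for illustration only."""
    for attempts, (user, password) in enumerate(DEFAULT_CREDENTIALS, start=1):
        if (user, password) == (device_user, device_pass):
            return (user, password), attempts
    return None, len(DEFAULT_CREDENTIALS)

# A camera still on its factory login falls on the third guess.
match, attempts = try_login("root", "12345")
print(match, attempts)
```

The point of the sketch is that the search space is tiny: a handful of guesses suffices for any device whose owner never changed the password, which is why such scans could sweep the Internet so quickly.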
The first step in assembling a botnet was to scan random IP addresses looking for vulnerable IoT devices, ones whose passwords could be guessed. Once identified, the addresses of these devices were passed to a “loader,” which would put the malware on the vulnerable device. Infected devices located all over the world could then be used for distributed denial-of-service attacks, orchestrated by a command-and-control (C2) server. When not attacking a target, these bots would be enlisted to scan for more vulnerable devices to infect.
Botnet malware is useful for financially motivated crime because botmasters can tell the bots in their thrall to implant malware on vulnerable machines, send phishing emails, or engage in click fraud, in which botmasters profit by directing bots to click on pay-per-click ads. Botnets are also great DDoS weapons because they can be trained on a target and barrage it from all directions. One day in February 2000, for example, the hacker MafiaBoy knocked out Fifa.com, Amazon.com, Dell, E-Trade, eBay, and CNN, as well as Yahoo, at the time the largest search engine on the Internet.
After taking so many major websites offline, MafiaBoy was deemed a national-security threat. President Clinton ordered a national manhunt to find him. In April 2000, MafiaBoy was arrested and charged, and in January 2001 he pled guilty to 58 charges of denial-of-service attacks. Law enforcement did not reveal MafiaBoy’s real name, as this national-security threat was 15 years old.
Both MafiaBoy and the VDoS crew were adolescent boys who crashed servers. But whereas MafiaBoy did it for the sport, VDoS did it for the money. Indeed, these teenage Israeli kids were pioneering tech entrepreneurs. They helped launch a new form of cybercrime: DDoS as a service. With it, anyone could now hack with the click of a button, no technical knowledge needed.
It might be surprising that DDoS providers could advertise openly on the Web. After all, DDoSing another website is illegal everywhere. To get around this, these “booter services” have long argued they perform a legitimate function: providing those who set up Web pages a means to stress test websites.
In theory, such services do play an important function. But only in theory. As a booter-service provider admitted to University of Cambridge researchers, “We do try to market these services towards a more legitimate user base, but we know where the money comes from.”
Paras dropped out of Rutgers in his sophomore year and, with his father’s encouragement, spent the next year focused on building ProTraf Solutions, his DDoS-mitigation business. And just like a mafia don running a protection racket, he had to create the need for that protection. After launching four DDoS attacks his freshman year, he attacked Rutgers yet again in September 2015, still hoping that his former school would give up on Incapsula. Rutgers refused to budge.
ProTraf Solutions was failing, and Paras needed cash. In May 2016, Paras reached out to Josiah White. Like Paras, Josiah frequented Hack Forums. When he was 15, he developed major portions of Qbot, a botnet worm that at its height in 2014 had enslaved half a million computers. Now 18, Josiah switched sides and worked with his friend Paras at ProTraf doing DDoS mitigation.
The hacker’s command-and-control (C2) server orchestrates the actions of many geographically distributed bots (computers under its control). Those computers, which could be IoT devices like IP cameras, can be directed to overwhelm the victim’s servers with unwanted traffic, making them unable to respond to legitimate requests. IEEE Spectrum
But Josiah soon returned to hacking and started working with Paras to take the Qbot malware, improve it, and build a bigger, more powerful DDoS botnet. Paras and Josiah then partnered with 19-year-old Dalton Norman. The trio turned into a well-oiled team: Dalton found the vulnerabilities; Josiah updated the botnet malware to exploit these vulnerabilities; and Paras wrote the C2—software for the command-and-control server—for controlling the botnet.
But the trio had competition. Two other DDoS gangs—Lizard Squad and VDoS—decided to band together to build a giant botnet. The collaboration, known as PoodleCorp, was successful. The amount of traffic that could be unleashed on a target from PoodleCorp’s botnet hit a record value of 400 gigabits per second, almost four times the rate that any previous botnet had achieved. They used their new weapon to attack banks in Brazil, U.S. government sites, and Minecraft servers. They achieved this firepower by hijacking 1,300 Web-connected cameras. Web cameras tend to have powerful processors and good connectivity, and they are rarely patched. So a botnet that harnesses video cameras has enormous cannons at its disposal.
While PoodleCorp was on the rise, Paras, Josiah, and Dalton worked on a new weapon. By the beginning of August 2016, the trio had completed the first version of their botnet malware. Paras called the new code Mirai, after the anime series Mirai Nikki.
When Mirai was released, it spread like wildfire. In its first 20 hours, it infected 65,000 devices, doubling in size every 76 minutes. And Mirai had an unwitting ally in the botnet war then raging.
Up in Anchorage, Alaska, the FBI cyber unit was building a case against VDoS. The FBI was unaware of Mirai or its war with VDoS. The agents did not regularly read online boards such as Hack Forums. They did not know that the target of their investigation was being decimated. The FBI also did not realize that Mirai was ready to step into the void.
The head investigator in Anchorage was Special Agent Elliott Peterson. A former U.S. Marine, Peterson is a calm and self-assured agent with a buzz cut of red hair. At the age of 33, Peterson had returned to his native state of Alaska to prosecute cybercrime.
On 8 September 2016, the FBI’s Anchorage and New Haven cyber units teamed up and served a search warrant in Connecticut on the member of PoodleCorp who ran the C2 that controlled all its botnets. On the same day, the Israeli police arrested the VDoS founders in Israel. Suddenly, PoodleCorp was no more.
The Mirai group waited a couple of days to assess the battlefield. As far as they could tell, they were the only botnet left standing. And they were ready to use their new power. Mirai won the war because Israeli and American law enforcement arrested the masterminds behind PoodleCorp. But Mirai would have triumphed anyway, as it was ruthlessly efficient in taking control of Internet of Things devices and excluding competing malware.
A few weeks after the arrests of those behind VDoS, Special Agent Peterson found his next target: the Mirai botnet. In the Mirai case, we do not know the exact steps that Peterson’s team took in their investigation: Court orders in this case are currently “under seal,” meaning that the court deems them secret. But from public reporting, we know that Peterson’s team got its break in the usual way—from a Mirai victim: Brian Krebs, a cybersecurity reporter whose blog was DDoSed by the Mirai botnet on 25 September.
The FBI uncovered the IP address of the C2 and loading servers but did not know who had opened the accounts. Peterson’s team likely subpoenaed the hosting companies to learn the names, emails, cellphones, and payment methods of the account holders. With this information, it would seek court orders and then search warrants to acquire the content of the conspirators’ conversations.
Still, the hunt for the authors of the Mirai malware must have been a difficult one, given how clever these hackers were. For example, to evade detection Josiah didn’t just use a VPN. He hacked the home computer of a teenage boy in France and used his computer as the “exit node.” The orders for the botnet, therefore, came from this computer. Unfortunately for the owner, he was a big fan of Japanese anime and thus fit the profile of the hacker. The FBI and the French police discovered their mistake after they raided the boy’s house.
After wielding its power for two months, Paras dumped nearly the complete source code for Mirai on Hack Forums. “I made my money, there’s lots of eyes looking at IOT now, so it’s time to GTFO [Get The F*** Out],” Paras wrote. With that code dump, Paras had enabled anyone to build their own Mirai. And they did.
Dumping code is reckless, but not unusual. If the police find source code on a hacker’s devices, they can claim that they “downloaded it from the Internet.” Paras’s irresponsible disclosure was part of a false-flag operation meant to throw off the FBI, which had been gathering evidence indicating Paras’s involvement in Mirai and had contacted him to ask questions. Though he gave the agent a fabricated story, getting a text from the FBI probably terrified him.
Mirai had captured the attention of the cybersecurity community and of law enforcement. But not until after Mirai’s source code dropped would it capture the attention of the entire United States. The first attack after the dump was on 21 October, on Dyn, a company based in Manchester, N.H., that provides Domain Name System (DNS) resolution services for much of the East Coast of the United States.
It began at 7:07 a.m. EST with a series of 25-second attacks, thought to be tests of the botnet and Dyn’s infrastructure. Then came the sustained assaults: of one hour, and then five hours. Interestingly, Dyn was not the only target. Sony’s PlayStation video infrastructure was also hit. Because the torrents were so immense, many other websites were affected. Domains such as cnn.com, facebook.com, and nytimes.com wouldn’t work. For the vast majority of these users, the Internet became unusable. At 7:00 p.m., another 10-hour salvo hit Dyn and PlayStation.
Further investigations confirmed the point of the attack. Along with Dyn and PlayStation traffic, the botnet targeted Xbox Live and Nuclear Fallout game-hosting servers. Nation-states were not aiming to hack the upcoming U.S. elections. Someone was trying to boot players off their game servers. Once again—just like MafiaBoy, VDoS, Paras, Dalton, and Josiah—the attacker was a teenage boy, this time a 15-year-old in Northern Ireland named Aaron Sterritt.
Meanwhile, the Mirai trio left the DDoS business, just as Paras said. But Paras and Dalton did not give up on cybercrime. They just took up click fraud.
Click fraud was more lucrative than running a booter service. While Mirai was no longer as big as it had been, the botnet could nevertheless generate significant advertising revenue. Paras and Dalton earned as much money in one month from click fraud as they ever made with DDoS. By January 2017, they had earned over US $180,000, as opposed to a mere $14,000 from DDoSing.
Had Paras and his friends simply shut down their booter service and moved on to click fraud, the world would likely have forgotten about them. But by releasing the Mirai code, Paras created imitators. Dyn was the first major copycat attack, but many others followed. And due to the enormous damage these imitators wrought, law enforcement was intensely interested in the Mirai authors.
After collecting information tying Paras, Josiah, and Dalton to Mirai, the FBI quietly brought each up to Alaska. Peterson’s team showed the suspects its evidence and gave them the chance to cooperate. Given that the evidence was irrefutable, each folded.
Paras Jha was indicted twice, once in New Jersey for his attack on Rutgers, and once in Alaska for Mirai. Both indictments carried the same charge—one violation of the Computer Fraud and Abuse Act. Paras faced up to 10 years in federal prison for his actions. Josiah and Dalton were indicted only in Alaska and so faced up to 5 years in prison.
The trio pled guilty. At the sentencing hearing held on 18 September 2018, in Anchorage, each of the defendants expressed remorse for his actions. Josiah White’s lawyer conveyed his client’s realization that Mirai was “a tremendous lapse in judgment.”
Unlike Josiah, Paras spoke directly to Judge Timothy Burgess in the courtroom. Paras began by accepting full responsibility for his actions and expressed his deep regret for the trouble he’d caused his family. He also apologized for the harm he’d caused businesses and, in particular, Rutgers, the faculty, and his fellow students.
The Department of Justice made the unusual decision not to ask for jail time. In its sentencing memo, the government noted “the divide between [the defendants’] online personas, where they were significant, well-known, and malicious actors in the DDoS criminal milieu and their comparatively mundane ‘real lives’ where they present as socially immature young men living with their parents in relative obscurity.” It recommended five years of probation and 2,500 hours of community service.
The government had one more request—for that community service “to include continued work with the FBI on cybercrime and cybersecurity matters.” Even before sentencing, Paras, Josiah, and Dalton had logged close to 1,000 hours helping the FBI hunt and shut down Mirai copycats. They contributed to more than a dozen law enforcement and research efforts. In one instance, the trio assisted in stopping a nation-state hacking group. They also helped the FBI prevent DDoS attacks aimed at disrupting Christmas-holiday shopping. Judge Burgess accepted the government’s recommendation, and the trio escaped jail time.
The most poignant moments in the hearing were Paras’s and Dalton’s singling out for praise the very person who caught them. “Two years ago, when I first met Special Agent Elliott Peterson,” Paras told the court, “I was an arrogant fool believing that somehow I was untouchable. When I met him in person for the second time, he told me something I will never forget: ‘You’re in a hole right now. It’s time you stop digging.’ ” Paras finished his remarks by thanking “my family, my friends, and Agent Peterson for helping me through this.”
This article appears in the June 2023 print issue as “Patch Me if You Can.”
This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
Ever since the Global Positioning System (GPS) first took off in the 1970s, it has been an essential part of many things—whether for personal navigation, agriculture, aviation, or military operations. But relying so heavily on this single technology carries huge risks should it fail or be hacked, which has prompted researchers to explore alternatives.
In a recent advancement, one research team succeeded in using a collection of low Earth orbit (LEO) satellites to calculate their ground location within several meters—outperforming today’s GPS-inertial navigation system (INS) in certain scenarios.
Zak Kassas, the director of the U.S. Department of Transportation’s Center for Automated Vehicles Research with Multimodal Assured Navigation (CARMEN) at Ohio State University, first started exploring alternatives to GPS in 2014 after becoming concerned at the thought of GPS failure. He says that GPS signals can be unreliable in certain situations, such as indoors or under dense tree canopy, and that the jamming of and interference in GPS signals have skyrocketed over the past few years.
“This has disrupted airport and port operations in the U.S. and around the world, caused chaos on rideshares, put vessels into disarray, caused drones to come crashing down, and is becoming commonplace in military conflicts,” he says. “All of this keeps me up at night thinking that we ought to find a backup alternative to [GPS] before it is too late.”
GPS systems currently rely on a constellation of 31 satellites circling in Earth’s medium orbit (about 20,000 kilometers high). But Kassas has his sights set on the tens of thousands of LEO satellites poised for launch over the next decade, which will be about 20 times as close to Earth. That means they’ll have stronger signals, making them more difficult to jam or spoof and more reliable in environments where GPS signals are not. Kassas’s idea is to use a ground receiver to detect the beacon signals of LEO satellites to determine their relative position and then use that information to calculate the position of the ground receiver—in other words, the location of the person or object of interest.
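The core geometry of positioning from ranges to known beacons can be illustrated with a toy 2-D trilateration. This is not the authors' actual algorithm (which must also estimate the satellites' own positions), just the underlying idea: subtracting one range equation from the others cancels the quadratic terms and leaves a small linear system. All coordinates below are invented for illustration.

```python
import math

def trilaterate(sats, ranges):
    """Locate a 2-D receiver from three beacon positions and ranges.

    Each beacon i gives a circle (x - xi)^2 + (y - yi)^2 = ri^2.
    Subtracting beacon 0's equation from beacons 1 and 2 yields the
    linear system  2*(xi-x0)*x + 2*(yi-y0)*y = r0^2 - ri^2 + xi^2 - x0^2 + yi^2 - y0^2,
    which we solve with Cramer's rule.
    """
    (x0, y0), (x1, y1), (x2, y2) = sats
    r0, r1, r2 = ranges
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = r0**2 - r1**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = r0**2 - r2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Receiver at (3, 4); three "satellites" with exact ranges to it.
sats = [(0, 0), (10, 0), (0, 10)]
truth = (3.0, 4.0)
ranges = [math.dist(s, truth) for s in sats]
print(trilaterate(sats, ranges))  # a point very close to (3.0, 4.0)
```

In three dimensions the same subtraction trick applies with four satellites, and in practice the ranges are noisy, so the system is solved by least squares rather than exactly.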
But whereas data from GPS satellites are available to the public, LEO satellites are privately owned, and their beacon signals are not publicly available. Kassas’s team therefore developed an algorithm that can decipher a satellite’s unknown beacon signal.
Researchers used their new positioning technique, called STAN [see green line at right], to determine how accurately it could track their position [blue line at right] in a moving vehicle. Sharbel Kozhaya
Next, they developed another algorithm called simultaneous tracking and navigation (STAN). “[STAN] can essentially detect the presence of a LEO satellite, decipher its beacon, and track the signal, while simultaneously estimating the satellite’s position, allowing us to produce an estimate of our distance to the satellite,” explains Kassas.
In 2021, Kassas’s team successfully decoded the signal of Starlink satellites and used six of them to track their own position on Earth to within 10 meters. Later attempts reduced that error to just 6.5 meters. In their most recent advancement, they developed a receiver that could work with virtually any satellite constellation and tested it on satellites from four different companies—Iridium NEXT, OneWeb, Orbcomm, and SpaceX’s Starlink—to home in on Earth locations with even greater precision. The advancement is described in their 2023 IEEE/ION Position, Location and Navigation Symposium (PLANS) paper.
In their paper, the team describes a stationary ground experiment where they were able to pinpoint their true location on the rooftop of their lab in Ohio with an error of about 5 meters, using the four different types of LEO satellites. Kassas notes that this is the first time anyone has been able to crack OneWeb signals for positioning and navigation purposes.
In a second experiment, the researchers mounted their off-the-shelf receiver onto the top of a car and drove while listening to signals from four LEO satellite constellations (one OneWeb, four Starlink, one Iridium NEXT, and two Orbcomm satellites).
When someone is driving and GPS signals are lost, the GPS-INS system estimates the person’s location from their last known trajectory, an approach that accumulates errors rather quickly.
In their driving experiment, the research team drove in a car for 100 meters before cutting their GPS off, and continued driving for nearly a kilometer. Whereas the conventional GPS-INS system predicted their location to be about 500 meters away from their true location, the STAN system was just 4.4 meters off—proving that STAN algorithms can be incredibly useful when GPS fails.
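That error growth can be seen in a toy dead-reckoning simulation. The speed, duration, and gyro bias below are invented for illustration and are not taken from the paper; the point is only that a small, constant heading bias produces a position error that grows roughly quadratically with time.

```python
import math

# Toy dead reckoning: a vehicle drives straight along x at 10 m/s for
# 100 s (1 km) while the INS integrates heading from a gyro with a
# constant 0.01 rad/s bias (a deliberately exaggerated, made-up value).
dt, speed, bias = 1.0, 10.0, 0.01

true_x = est_x = est_y = est_heading = 0.0
for _ in range(100):
    true_x += speed * dt            # ground truth: straight line
    est_heading += bias * dt        # the bias is integrated every step...
    est_x += speed * dt * math.cos(est_heading)
    est_y += speed * dt * math.sin(est_heading)

# ...so the position error compounds: roughly 500 m after only 1 km.
error = math.hypot(est_x - true_x, est_y)
print(f"dead-reckoning error after 1 km: {error:.0f} m")
```

Real INS units have far smaller biases, but the same compounding is why an unaided INS drifts and why an absolute position fix, from GPS or from LEO signals, is needed to reset it.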
“It is like shooting in the dark, yet hitting your target with an amazing accuracy,” says Kassas, noting that this approach uses numerous unknown variables together to successfully find a target. “We are dealing with satellites whose position is uncertain—we only know where the satellite is to within a few kilometers—and that are transmitting signals that we do not know much about. Yet we are able to use the satellites’ signals to locate ourselves with impressive accuracy.”
Despite these exciting first milestones, Kassas notes, this technology is still in its infancy. For instance, STAN still can’t achieve real-time navigation like GPS, and he says some of their algorithms need refining. “We have still not tested our approach in all environments, so it is difficult to assess whether these signals will allow you to navigate anytime, anywhere, which is the holy grail,” says Kassas.
Nevertheless, the researchers have filed for patents and say they have received widespread attention from startups and tech giants from across diverse industries who are interested in discussing commercialization. If all goes according to plan, STAN algorithms may lead us down a path where an alternative or backup to GPS exists.
Inside today’s computers, phones, and other mobile devices, more and more sensors, processors, and other electronics are fighting for space. Taking up a big part of this valuable real estate are the cameras—just about every gadget needs a camera, or two, three, or more. And the most space-consuming part of the camera is the lens.
The lenses in our mobile devices typically collect and direct incoming light by refraction, using a curve in a transparent material, usually plastic, to bend the rays. So these lenses can’t shrink much more than they already have: To make a camera small, the lens must have a short focal length; but the shorter the focal length, the greater the curvature and therefore the thickness at the center. These highly curved lenses also suffer from all sorts of aberrations, so camera-module manufacturers use multiple lenses to compensate, adding to the camera’s bulk.
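The coupling between focal length and curvature follows from the standard thin-lens lensmaker's equation (textbook optics, not something specific to this article):

```latex
\frac{1}{f} = (n - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)
```

For a fixed refractive index \(n\), shrinking the focal length \(f\) forces the surface radii \(R_1\) and \(R_2\) to shrink as well, and a more sharply curved surface of a given diameter is necessarily thicker at its center.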
With today’s lenses, the size of the camera and image quality are pulling in different directions. The only way to make lenses smaller and better is to replace refractive lenses with a different technology.
That technology exists. It’s the metalens, a device developed at Harvard and commercialized at Metalenz, where I am an applications engineer. We create these devices using traditional semiconductor-processing techniques to build nanostructures onto a flat surface. These nanostructures use a phenomenon called metasurface optics to direct and focus light. These lenses can be extremely thin—a few hundred micrometers thick, about twice the thickness of a human hair. And we can combine the functionality of multiple curved lenses into just one of our devices, further addressing the space crunch and opening up the possibility of new uses for cameras in mobile devices.
Before I tell you how the metalens evolved and how it works, consider a few previous efforts to replace the traditional curved lens.
Conceptually, any device that manipulates light does so by altering its three fundamental properties: phase, polarization, and intensity. The idea that any wave or wave field can be broken down into these properties was proposed by Christiaan Huygens in 1678 and is a guiding principle in all of optics.
In this single metalens [between tweezers], the pillars are less than 500 nanometers in diameter. The black box at the bottom left of the enlargement represents 2.5 micrometers. Metalenz
In the early 19th century, the world’s most powerful economies placed great importance on building lighthouses with larger and more powerful projection lenses to protect their shipping interests. But as these lenses grew larger, they also grew heavier, and the weight of a lens that could be raised to the top of a lighthouse and structurally supported limited the power of the beam a lighthouse could produce.
French physicist Augustin-Jean Fresnel realized that if he cut a lens into facets, much of the central thickness could be removed while the lens retained the same optical power. The Fresnel lens represented a major improvement in optical technology and is now used in a host of applications, including automotive headlights and brake lights, overhead projectors, and—still—lighthouse projection lenses. However, the Fresnel lens has limitations. For one, the flat edges of its facets become sources of stray light. For another, faceted surfaces are more difficult to manufacture and polish precisely than continuously curved ones. These drawbacks make it a no-go for camera lenses, which demand far higher surface accuracy to produce good images.
Another approach, now widely used in 3D sensing and machine vision, traces its roots to one of the most famous experiments in modern physics: Thomas Young’s 1802 demonstration of diffraction. This experiment showed that light behaves like a wave, and when the waves meet, they can amplify or cancel one another depending on how far the waves have traveled. The so-called diffractive optical element (DOE) based on this phenomenon uses the wavelike properties of light to create an interference pattern—that is, alternating regions of dark and light, in the form of an array of dots, a grid, or any number of shapes. Today, many mobile devices use DOEs to convert a laser beam into “structured light.” This light pattern is projected, captured by an image sensor, then used by algorithms to create a 3D map of the scene. These tiny DOEs fit nicely into small gadgets, yet they can’t be used to create detailed images. So, again, applications are limited.
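The directions of the projected dots follow the textbook grating equation (standard diffraction theory, not any particular product's design):

```latex
d \sin\theta_m = m\lambda, \qquad m = 0, \pm 1, \pm 2, \ldots
```

Here \(d\) is the period of the element's pattern and \(\lambda\) the laser wavelength; each integer order \(m\) for which the equation has a solution emerges at angle \(\theta_m\) as one spot of constructive interference in the projected pattern.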
Enter the metalens. Developed at Harvard by a team led by professor Federico Capasso, then-graduate student Rob Devlin, research associates Reza Khorasaninejad, Wei Ting Chen, and others, metalenses work in a way that’s fundamentally different from any of these other approaches.
A metalens is a flat glass surface with a semiconductor layer on top. Etched in the semiconductor is an array of pillars several hundred nanometers high. These nanopillars can manipulate light waves with a degree of control not possible with traditional refractive lenses.
Imagine a shallow marsh filled with seagrass standing in water. An incoming wave causes the seagrass to sway back and forth, sending pollen flying off into the air. If you think of that incoming wave as light energy, and the nanopillars as the stalks of seagrass, you can picture how the properties of a nanopillar, including its height, thickness, and position next to other nanopillars, might change the distribution of light emerging from the lens.
A 12-inch wafer can hold up to 10,000 metalenses, made using a single semiconductor layer. Metalenz
We can use the ability of a metalens to redirect and change light in a number of ways. We can scatter and project light as a field of infrared dots. Invisible to the eye, these dots are used in many smart devices to measure distance, mapping a room or a face. We can sort light by its polarization (more on that in a moment). But probably the best way to explain how we are using these metasurfaces as a lens is by looking at the most familiar lens application—capturing an image.
The process starts by illuminating a scene with a monochromatic light source—a laser. (While using a metalens to capture a full-color image is conceptually possible, that is still a lab experiment and far from commercialization.) The objects in the scene bounce the light all over the place. Some of this light comes back toward the metalens, which is pointed, pillars out, toward the scene. These returning photons hit the tops of the pillars and transfer their energy into vibrations. The vibrations—called plasmons—travel down the pillars. When that energy reaches the bottom of a pillar, it exits as photons, which can then be captured by an image sensor. Those photons don’t need to have the same properties as those that entered the pillars; we can change these properties by the way we design and distribute the pillars.
Researchers around the world have been exploring the concept of metalenses for decades.
In a paper published in 1968 in Soviet Physics Uspekhi, Russian physicist Victor Veselago put the idea of metamaterials on the map, hypothesizing that nothing precluded the existence of a material that exhibits a negative index of refraction. Such a material would interact with light very differently than a normal material would. Where light ordinarily bounces off a material in the form of reflection, it would pass around this type of metamaterial like water going around a boulder in a stream.
It took until 2000 before the theory of metamaterials was implemented in the lab. That year, Richard A. Shelby and colleagues at the University of California, San Diego, demonstrated a negative refractive index metamaterial in the microwave region. They published the discovery in 2001 in Science, causing a stir as people imagined invisibility cloaks. (While intriguing to ponder, creating such a device would require precisely manufacturing and assembling thousands of metasurfaces.)
The first metalens to create high-quality images with visible light came out of Federico Capasso’s lab at Harvard. Demonstrated in 2016, with a description of the research published in Science, the technology immediately drew interest from smartphone manufacturers. Harvard then licensed the foundational intellectual property exclusively to Metalenz, where it has now been commercialized.
A single metalens [right] can replace a stack of traditional lenses [left], simplifying manufacturing and dramatically reducing the size of a lens package. Metalenz
Since then, researchers at Columbia University, Caltech, and the University of Washington, working with Tsinghua University, in Beijing, have also demonstrated the technology.
Much of the development work Metalenz does involves fine-tuning the way the devices are designed. In order to translate image features like resolution into nanoscale patterns, we developed tools to help calculate the way light waves interact with materials. We then convert those calculations into design files that can be used with standard semiconductor processing equipment.
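As a rough illustration of that kind of calculation: the textbook target phase for a flat focusing lens is hyperbolic, and each position on the surface gets assigned the pillar geometry whose phase delay comes closest. The sketch below assumes an eight-level pillar library purely for illustration; it is not Metalenz’s proprietary design flow:

```python
import math

def target_phase(r_um, f_um, wavelength_um):
    """Ideal phase (radians) a focusing metasurface must impart at radial
    position r so all rays arrive at the focal point f in phase:
    phi(r) = (2*pi/lambda) * (f - sqrt(r^2 + f^2))."""
    return (2 * math.pi / wavelength_um) * (f_um - math.hypot(r_um, f_um))

def pillar_index(r_um, f_um, wavelength_um, levels=8):
    """Quantize the target phase (mod 2*pi) to one of `levels` pillar
    geometries — an illustrative discretization, not a real library."""
    phi = target_phase(r_um, f_um, wavelength_um) % (2 * math.pi)
    return int(phi / (2 * math.pi) * levels)

# Phase is zero at the lens center and grows (negatively) toward the edge:
print(target_phase(0.0, 500.0, 0.94), pillar_index(100.0, 500.0, 0.94))
```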
The first wave of optical metasurfaces to make their way into mobile imaging systems have on the order of 10 million silicon pillars on a single flat surface only a few millimeters square, with each pillar precisely tuned to accept the correct phase of light, a painstaking process even with the help of advanced software. Future generations of the metalens won’t necessarily have more pillars, but they’ll likely have more sophisticated geometries, like sloped edges or asymmetric shapes.
Metalenz came out of stealth mode in 2021, announcing that it was getting ready to scale up production of devices. Manufacturing was not as big a challenge as design, because the company makes metasurfaces using the same materials, lithography, and etching processes that are used to make integrated circuits.
In fact, metalenses are less demanding to manufacture than even a very simple microchip because they require only a single lithography mask as opposed to the dozens required by a microprocessor. That makes them less prone to defects and less expensive. Moreover, the features on an optical metasurface measure in the hundreds of nanometers, whereas foundries are accustomed to making chips with features that are smaller than 10 nanometers.
And, unlike plastic lenses, metalenses can be made in the same foundries that produce the other chips destined for smartphones. This means they could be directly integrated with the CMOS camera chips on site rather than having to be shipped to another location, which reduces their costs still further.
A single meta-optic, in combination with an array of laser emitters, can be used to create the type of high-contrast, near-infrared dot or line pattern used in 3D sensing. Metalenz
In 2022, ST Microelectronics announced the integration of Metalenz’s metasurface technology into its FlightSense modules. Previous generations of FlightSense have been used in more than 150 models of smartphones, drones, robots, and vehicles to detect distance. Such products with Metalenz technology inside are already in consumer hands, though ST Microelectronics isn’t releasing specifics.
Indeed, distance sensing is a sweet spot for the current generation of metalens technology, which operates at near-infrared wavelengths. For this application, many consumer electronics companies use a time-of-flight system, which has two optical components: one that transmits light and one that receives it. The transmitting optics are more complicated. These involve multiple lenses that collect light from a laser and transform it to parallel light waves—or, as optical engineers call it, a collimated beam. These also require a diffraction grating that turns the collimated beam into a field of dots. A single metalens can replace all of those transmitting and receiving optics, saving real estate within the device as well as reducing cost.
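The distance arithmetic behind time of flight is simple: the light travels out and back, so the range is the round-trip time multiplied by the speed of light, divided by two. A minimal sketch:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_ns):
    """Range from a time-of-flight measurement: halve the round trip,
    because the light covers the distance twice."""
    return C * (round_trip_ns * 1e-9) / 2

# A round trip of about 6.67 nanoseconds corresponds to roughly 1 meter:
print(tof_distance_m(6.67))
```

The nanosecond scale of these round trips is why time-of-flight modules need fast, well-collimated illumination in the first place.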
And a metalens does the field-of-dots job better in difficult lighting conditions because it can illuminate a broader area using less power than a traditional lens, directing more of the light to where you want it.
Conventional imaging systems, at best, gather information only about the spatial position of objects and their color and brightness. But the light carries another type of information: the orientation of the light waves as they travel through space—that is, the polarization. Future metalens applications will take advantage of the technology’s ability to detect polarized light.
The polarization of light reflecting off an object conveys all sorts of information about that object, including surface texture, type of surface material, and how deeply light penetrates the material before bouncing back to the sensor. Prior to the development of the metalens, a machine vision system would require complex optomechanical subsystems to gather polarization information. These typically rotate a polarizer—structured like a fence to allow only waves oriented at a certain angle to pass through—in front of a sensor. They then monitor how the angle of rotation impacts the amount of light hitting the sensor.
Metasurface optics are capable of capturing polarization information from light, revealing a material’s characteristics and providing depth information. Metalenz
A metalens, by contrast, doesn’t need a fence; all the incoming light comes through. Then it can be redirected to specific regions of the image sensor based on its polarization state, using a single optical element. If, for example, light is polarized along the X axis, the nanostructures of the metasurface will direct the light to one section of the image sensor. However, if it is polarized at 45 degrees to the X axis, the light will be directed to a different section. Then software can reconstruct the image with information about all its polarization states.
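The reconstruction step can be illustrated with the standard four-channel polarimetry recipe: intensities measured at 0, 45, 90, and 135 degrees yield the linear Stokes parameters, and from those the degree and angle of linear polarization. This is generic textbook math, not Metalenz’s specific pipeline:

```python
import math

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarization channels, e.g. the
    four sensor regions a polarization-sorting metasurface illuminates."""
    s0 = i0 + i90        # total intensity
    s1 = i0 - i90        # horizontal vs. vertical component
    s2 = i45 - i135      # diagonal components
    dolp = math.hypot(s1, s2) / s0    # degree of linear polarization, 0..1
    aolp = 0.5 * math.atan2(s2, s1)   # angle of linear polarization, radians
    return dolp, aolp

# Fully horizontally polarized light: all the energy sits in the 0° channel
print(linear_stokes(1.0, 0.5, 0.0, 0.5))
```

Surface texture and material differences show up as spatial variations in these two maps, which is what the applications below exploit.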
Using this technology, we can replace previously large and expensive laboratory equipment with tiny polarization-analysis devices incorporated into smartphones, cars, and even augmented-reality glasses. A smartphone-based polarimeter could let you determine whether a stone in a ring is diamond or glass, whether concrete is cured or needs more time, or whether an expensive hockey stick is worth buying or contains micro cracks. Miniaturized polarimeters could be used to determine whether a bridge’s support beam is at risk of failure, whether a patch on the road is black ice or just wet, or if a patch of green is really a bush or a painted surface being used to hide a tank. These devices could also help enable spoof-proof facial identification, since light reflects off a 2D photo of a person at different angles than a 3D face and from a silicone mask differently than it does from skin. Handheld polarizers could improve remote medical diagnostics—for example, polarization is used in oncology to examine tissue changes.
But like the smartphone itself, it’s hard to predict where metalenses will take us. When Apple introduced the iPhone in 2007, no one could have predicted that it would spawn companies like Uber. In the same way, perhaps the most exciting applications of metalenses are ones we can’t even imagine yet.
A rocket carrying CubeSats launched into Earth orbit two years ago, on 22 March 2021. Two of those CubeSats represented competing approaches to bringing the Internet of Things (IoT) to space. One, operated by Lacuna Space, uses a protocol called LoRaWAN, a long-range, low-power protocol owned by Semtech. The other, owned by Sateliot, uses the narrowband IoT protocol, following in the footsteps of OQ Technology, which launched a similar IoT satellite demonstration in 2019. And separately, in late 2022, the cellular industry standard-setter 3GPP incorporated satellite-based 5G into standard cellular service with its release 17.
In other words, there is now an IoT space race.
Like Lacuna and Sateliot, OQ Technology is nipping at the heels of satellite telecom incumbents such as Iridium, Orbcomm, and Inmarsat for a share of the growing satellite-IoT subscriber market. OQ Technology has three satellites in low Earth orbit and plans to launch seven more this year, says the company’s chief innovation officer, Prasanna Nagarajan. OQ has paying customers in the oil and gas, agriculture, and transport logistics industries.
Sateliot, based in Barcelona, has the satellite it launched in 2021 in orbit and plans to launch four more this year, says Sateliot’s business development manager, Paula Caudet. The company is inviting early adopters to sample its service for free this year while it builds more coverage. “Certain use cases are fine with flybys every few hours, such as agricultural sensors,” Caudet says.
OQ Technology claims it will launch enough satellites to offer at least hourly coverage by 2024 and near-real-time coverage later that year. Sateliot is also aiming for better-than-hourly coverage sometime in 2024 and near-real-time coverage in 2025.
Incumbent satellite operators are already offering IoT coverage, but so far they require specific IoT hardware tuned to their spectrum bands and protocols. Insurgent companies that make use of the 3GPP release 17 standard will be able to offer satellite connectivity to devices originally designed to connect only to cellular towers.
New companies also see an opportunity to offer lower, more attractive pricing. “Legacy satellite providers were charging maybe [US] $100 for a few kilobits of data, and customers are not willing to pay so much for IoT,” says Nagarajan. “There seemed to be a huge market gap.” Another company, Swarm, which is a subsidiary of SpaceX, offers low-bandwidth connectivity via proprietary devices to its tiny satellites for $5 per month.
Thanks to shared launch infrastructure and cheaper IoT-compatible modules and satellites, new firms can compete with companies that have had satellites in orbit for decades. More and more hardware and services are available on an off-the-shelf basis. “An IoT-standard module is maybe 8 or 10 euros, versus 300 euros for satellite-specific modules,” says Caudet.
In fact, Sateliot contracted the construction of its first satellite to Open Cosmos. Open Cosmos mission manager Jordi Castellví says that CubeSat subsystems and certain specialized services are now available online from suppliers including AlénSpace, CubeSatShop, EnduroSat, and Isispace, among others.
By building constellations of hundreds of satellites with IoT modules in low Earth orbit, IoT-satellite companies will be able to save money on hardware and still detect the faint signals from IoT gateways or even individual IoT sensors, such as those aboard shipping containers packed onto cargo ships at sea. They won’t move as much data as voice and broadband offerings in the works from AST SpaceMobile and Lynk Global’s larger and more complex satellites, for example, but they may be able to meet growing demand for narrowband applications.
OQ Technology has its own licensed spectrum and can operate as an independent network operator for IoT users with the latest 3GPP release—although at first most users might not have direct contact with such providers; both Sateliot and OQ Technology have partnered with existing mobile-network operators to offer a sort of global IoT roaming package. For example, while a cargo ship is in port, a customer’s onboard IoT device will transmit via the local cellular network. Farther out at sea, the device will switch to transmitting to satellites overhead. “The next step is being able to integrate cellular and satellite services,” Caudet says.
This post was updated on 28 March to clarify the planned launch schedules and coverage schedules for OQ Technology and Sateliot.
This article appears in the June 2023 print issue .
The 19-seater Dornier 228 propeller plane that took off into the cold blue January sky looked ordinary at first glance. Spinning its left propeller, however, was a 2-megawatt electric motor powered by two hydrogen fuel cells—the right side ran on a standard kerosene engine—making it the largest aircraft flown on hydrogen to date. Val Miftakhov, founder and CEO of ZeroAvia, the California startup behind the 10-minute test flight in Gloucestershire, England, called it a “historical day for sustainable aviation.”
Los Angeles–based Universal Hydrogen plans to test a 50-seat hydrogen-powered aircraft by the end of February. Both companies promise commercial flights of retrofitted turboprop aircraft by 2025. French aviation giant Airbus is going bigger with a planned 2026 demonstration flight of its iconic A380 passenger airplane, which will fly using hydrogen fuel cells and by burning hydrogen directly in an engine. And Rolls Royce is making headway on aircraft engines that burn pure hydrogen.
The aviation industry, responsible for some 2.5 percent of global carbon emissions, has committed to net-zero emissions by 2050. Getting there will require several routes, including sustainable fuels, hybrid-electric engines, and battery-electric aircraft.
Hydrogen is another potential route. Whether used to make electricity in fuel cells or burned in an engine, it combines with oxygen to emit water vapor. If green hydrogen scales up for trucks and ships, it could be a low-cost fuel without the environmental issues of batteries.
Flying on hydrogen brings storage and aircraft-certification challenges, but aviation companies are doing the groundwork now for hydrogen flight by 2035. “Hydrogen is headed off to the sky, and we’re going to take it there,” says Amanda Simpson, vice president for research and technology at Airbus Americas.
The most plentiful element, hydrogen is also the lightest—key for an industry fighting gravity—packing three times the energy of jet fuel by weight. The problem with hydrogen is its volume. For transport, it has to be stored in heavy tanks either as a compressed high-pressure gas or a cryogenic liquid.
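The "three times" figure follows from typical lower heating values: roughly 120 megajoules per kilogram for hydrogen versus about 43 for kerosene-based jet fuel. These are approximate textbook numbers, not figures from any particular fuel supplier:

```python
# Approximate lower heating values, MJ/kg (typical textbook figures):
H2_LHV = 120.0     # hydrogen
JET_A_LHV = 43.0   # kerosene-based jet fuel

ratio = H2_LHV / JET_A_LHV
print(round(ratio, 1))  # ≈ 2.8 — the rough "three times" advantage by weight
```

By volume the comparison reverses, which is exactly the storage problem the paragraph above describes.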
ZeroAvia is using compressed hydrogen gas, since it is already approved for road transport. Its test airplane had two hydrogen fuel cells and tanks sitting inside the cabin, but the team is now thinking creatively about a compact system with minimal changes to aircraft design to speed up certification in the United States and Europe. The fuel cells’ added weight could reduce flying range, but “that’s not a problem, because aircraft are designed to fly much further than they’re used,” says vice president of strategy James McMicking.
The company has backing from investors that include Bill Gates and Jeff Bezos; partnerships with British Airways and United Airlines; and 1,500 preorders for its hydrogen-electric power-train system, half of which are for smaller, 400-kilometer-range 9- to 19-seaters.
By 2027, ZeroAvia plans to convert larger, 70-seater turboprop aircraft with twice the range, used widely in Europe. The company is developing 5-MW electric motors for those, and it plans to switch to more energy-dense liquid hydrogen to save space and weight. The fuel is novel for the aviation industry and could require a longer regulatory approval process, McMicking says.
Next will come a 10-MW power train for aircraft with 100 to 150 seats, “the workhorses of the industry,” he says. Those planes—think Boeing 737—are responsible for 60 percent of aviation emissions. Making a dent in those with hydrogen will require much more efficient fuel cells. ZeroAvia is working on proprietary high-temperature fuel cells for that, McMicking says, with the ability to reuse the large amounts of waste heat generated. “We have designs and a technology road map that takes us into jet-engine territory for power,” he says.
Universal Hydrogen, which counts Airbus, GE Aviation, and American Airlines among its strategic investors, is placing bets on liquid hydrogen. The startup, “a hydrogen supply and logistics company at our core,” wants to ensure a seamless delivery network for hydrogen aviation as it catches speed, says founder and CEO Paul Eremenko. The company sources green hydrogen, turns it into liquid, and puts it in relatively low-tech insulated aluminum tanks that it will deliver via road, rail, or ship. “We want them certified by the Federal Aviation Administration for 2025, which means they can’t be a science project,” he says.
The cost of green hydrogen is expected to be on par with kerosene by 2025, Eremenko says. But “there’s nobody out there with an incredible hydrogen-airplane solution. It’s a chicken-and-egg problem.”
To crack it, Universal Hydrogen partnered with leading fuel-cell-maker Plug Power to develop a few thousand conversion kits for regional turboprop airplanes. The kits swap the engine in its streamlined housing (also known as nacelle) for a fuel-cell stack, power electronics, and a 2-MW electric motor. While the company’s competitors use batteries as buffers during takeoff, Eremenko says Universal uses smart algorithms to manage fuel cells, so they can ramp up and respond quickly. “We are the Nespresso of hydrogen,” he says. “We buy other people’s coffee, put it into capsules, and deliver to customers. But we have to build the first coffee machine. We’re the only company incubating the chicken and egg at the same time.”
This rendering of an Airbus A380 demonstrator flight (presently slated for 2026) reveals current designs on an aircraft that’s expected to fly using fuel cells and by burning hydrogen directly in the engine. Airbus
Fuel cells have a few advantages over a large central engine. They allow manufacturers to spread out smaller propulsion motors over an aircraft, giving them more design freedom. And because there are no high-temperature moving parts, maintenance costs can be lower. For long-haul aircraft, however, the weight and complexity of high-power fuel cells makes hydrogen-combustion engines appealing.
Airbus is considering both fuel-cell and combustion propulsion for its ZEROe hydrogen aircraft system. It has partnered with German automotive fuel-cell-maker Elring Klinger and, for direct combustion engines, with CFM International, a joint venture between GE Aviation and Safran. Burning liquid hydrogen in today’s engines is still expected to require slight modifications, such as a shorter combustion chamber and better seals.
Airbus is also evaluating hybrid propulsion concepts with a hydrogen-engine-powered turbine and a hydrogen-fuel-cell-powered motor on the same shaft, says Simpson, of Airbus Americas. “Then you can optimize it so you use both propulsion systems for takeoff and climb, and then turn one off for cruising.”
The company isn’t limiting itself to simple aircraft redesign. Hydrogen tanks could be stored in a cupola on top of the plane, pods under the wings, or a large tank at the back, Simpson says. Without liquid fuel in the wings, as in traditional airplanes, she says, “you can optimize wings for aerodynamics, make them thinner or longer. Or maybe a blended-wing body, which could be very different. This opens up the opportunity to optimize aircraft for efficiency.” Certification for such new aircraft could take years, and Airbus isn’t expecting commercial flights until 2035.
Conventional aircraft made today will be around in 2050 given their 25- to 30-year life-span, says Robin Riedel, an analyst at McKinsey & Co. Sustainable fuels are the only green option for those. He says hydrogen could play a role there, through “power-to-liquid technology, where you can mix hydrogen and captured carbon dioxide to make aviation fuel.”
Even then, Riedel thinks hydrogen will likely be a small part of aviation’s sustainability solution until 2050. “By 2070, hydrogen is going to play a much bigger role,” he says. “But we have to get started on hydrogen now.” The money that Airbus and Boeing are putting into hydrogen is a small fraction of aerospace, he says, but big airlines investing in hydrogen companies or placing power-train orders “shows there is desire.”
The aviation industry has to clean up if it is to grow, Simpson says. Biofuels are a stepping-stone, because they reduce only carbon emissions, not other harmful ones. “If we’re going to move towards clean aviation, we have to rethink everything from scratch and that’s what ZEROe is doing,” she says. “This is an opportunity to make not an evolutionary change but a truly revolutionary one.”
This article appears in the April 2023 print issue as “Hydrogen-Powered Flight Cleared for Takeoff.”
Terrae Novae is ESA’s exploration programme.
Terrae Novae is not only literally about exploring new worlds; by evoking the limitless opportunities for discovery, economic growth, and inspiration, the name also expresses our ambitions for Europe’s future innovators, scientists, and explorers.
This video shows the many exploration activities ESA is conducting or has planned in our Solar System, from the International Space Station to the Moon with the European Service Module and lunar Gateway modules for Artemis, and on to Mars with the Mars Sample Return campaign.
For more on ESA’s human and robotic exploration strategy see: https://www.esa.int/terraenovae
The typical positional accuracy provided by standalone GNSS is 3 to 10 meters, which is suitable for automotive applications such as navigation and emergency calls. With the emergence of high-precision GNSS, which can mitigate the influence of error sources, sub-decimeter positional accuracy becomes possible, making GNSS usable for more advanced applications such as C-V2X and autonomous driving. Naturally, to be applicable for these safety-critical applications, other preconditions such as availability and data integrity must also be met.
Register now to download this free poster!
This poster shows the different types of error sources and explains how high precision GNSS calculates and transmits error correction data to vehicles, enabling new automotive applications for GNSS.
For about as long as engineers have talked about beaming solar power to Earth from space, they’ve had to caution that it was an idea unlikely to become real anytime soon. Elaborate designs for orbiting solar farms have circulated for decades—but since photovoltaic cells were inefficient, any arrays would need to be the size of cities. The plans got no closer to space than the upper shelves of libraries.
That’s beginning to change. Right now, in a sun-synchronous orbit about 525 kilometers overhead, there is a small experimental satellite called the Space Solar Power Demonstrator One (SSPD-1 for short). It was designed and built by a team at the California Institute of Technology, funded by donations from the California real estate developer Donald Bren, and launched on 3 January—among 113 other small payloads—on a SpaceX Falcon 9 rocket.
“To the best of our knowledge, this would be the first demonstration of actual power transfer in space, of wireless power transfer,” says Ali Hajimiri, a professor of electrical engineering at Caltech and a codirector of the program behind SSPD-1, the Space Solar Power Project.
The Caltech team is waiting for a go-ahead from the operators of a small space tug to which it is attached, providing guidance and attitude control. If all goes well, SSPD-1 will spend at least five to six months testing prototype components of possible future solar stations in space. In the next few weeks, the project managers hope to unfold a lightweight frame, called DOLCE (short for Deployable on-Orbit ultraLight Composite Experiment), on which parts of future solar arrays could be mounted. Another small assembly on the spacecraft contains samples of 32 different types of photovoltaic cells, intended to see which would be most efficient and robust. A third part of the vehicle contains a microwave transmitter, set up to prove that energy from the solar cells can be sent to a receiver. For this first experiment, the receivers are right there on board the spacecraft, but if it works, an obvious future step would be to send electricity via microwave to receivers on the ground.
Caltech’s Space Solar Power Demonstrator, shown orbiting Earth in this artist’s conception, was launched on 3 January. Caltech
One can dismiss the 50-kilogram SSPD-1 as yet another nonstarter, but a growing army of engineers and policymakers take solar energy from space seriously. Airbus, the European aerospace company, has been testing its own technology on the ground, and government agencies in China, Japan, South Korea, and the United States have all mounted small projects. “Recent technology and conceptual advances have made the concept both viable and economically competitive,” said Frazer-Nash, a British engineering consultancy, in a 2021 report to the U.K. government. Engineers working on the technology say microwave power transmissions would be safe, unlike ionizing radiation, which is harmful to people or other things in its path.
No single thing has happened to start this renaissance. Instead, say engineers, several advances are coming together.
For one thing, the cost of launching hardware into orbit keeps dropping, led by SpaceX and other, smaller companies such as Rocket Lab. SpaceX has a simplified calculator on its website, showing that if you want to launch a 50-kg satellite into sun-synchronous orbit, they’ll do it for US $275,000.
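That quote works out to a simple per-kilogram price:

```python
price_usd = 275_000  # SpaceX's quoted rideshare price for the example payload
mass_kg = 50         # a small satellite, roughly SSPD-1's class

print(price_usd / mass_kg)  # 5500.0 dollars per kilogram
```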
Meanwhile, photovoltaic technology has improved, step by step. Lightweight electronic components keep getting better and cheaper. And there is political pressure as well: Governments and major companies have made commitments to decarbonize in the battle against global climate change, committing to renewable energy sources to replace fossil fuels.
Most solar power, at least for the foreseeable future, will be Earth-based, which will be cheaper and easier to maintain than anything anyone can launch into space. Proponents of space-based solar power say that for now, they see it as best used for specialty needs, such as remote outposts, places recovering from disasters, or even other space vehicles.
But Hajimiri says don’t underestimate the advantages of space, such as unfiltered sunlight that is far stronger than what reaches the ground and is uninterrupted by darkness or bad weather—if you can build an orbiting array light enough to be practical.
Most past designs, dictated by the technology of their times, included impossibly large truss structures to hold solar panels and wiring to route power to a central transmitter. The Caltech team would dispense with all that. An array would consist of thousands of independent tiles as small as 100 square centimeters, each with its own solar cells, transmitter, and avionics. They might be loosely connected, or they might even fly in formation.
Time-lapse images show the experimental DOLCE frame for an orbiting solar array being unfolded in a clean room. Caltech
“The analogy I like to use is that it’s like an army of ants instead of an elephant,” says Hajimiri. Transmission to receivers on the ground could be by phased array—microwave signals from the tiles synchronized so that they can be aimed with no moving parts. And the parts—the photovoltaic cells with their electronics—could perhaps be so lightweight that they’re flexible. New algorithms could keep their signals focused.
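The steering math of such a phased array is classical: each element in a linear array applies a phase offset proportional to its position and the sine of the target angle, so the beam can be pointed electronically. A sketch with illustrative units, not the Caltech team's actual algorithm:

```python
import math

def steering_phases(n_elements, spacing, wavelength, theta_deg):
    """Phase offset (radians, mod 2*pi) for each element of a linear
    phased array to steer the beam to angle theta:
    phi_n = -2*pi * n * d * sin(theta) / lambda.
    `spacing` and `wavelength` just need to share the same unit."""
    k = 2 * math.pi / wavelength
    s = math.sin(math.radians(theta_deg))
    return [(-k * n * spacing * s) % (2 * math.pi)
            for n in range(n_elements)]

# Steering broadside (0 degrees) requires no phase gradient at all:
print(steering_phases(4, 0.5, 1.0, 0.0))
```

Because the "aiming" is entirely in these phase offsets, the array needs no gimbals or other moving parts, which is the point Hajimiri's team is making.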
“That’s the kind of thing we’re talking about,” said Harry Atwater, a coleader of the Caltech project, as SSPD-1 was being planned. “Really gossamer-like, ultralight, the limits of mass-density deployable systems.”
If it works out, in 30 years maybe there could be orbiting solar power fleets, adding to the world’s energy mix. In other words, as a recent report from Frazer-Nash concluded, this is “a potential game changer.”
This article appears in the April 2023 print issue as “Trial Run for Orbiting Solar Array.”
In this article, we explore the top 10 AI tools that are driving innovation and efficiency in various industries. These tools are designed to automate repetitive tasks, improve workflow, and increase productivity. The tools included in our list are some of the most advanced and widely used in the market, and are suitable for a variety of applications. Some of the tools focus on natural language processing, such as ChatGPT and Grammarly, while others focus on image and video generation, such as DALL-E and Lumen5. Other tools, such as OpenAI Codex, Tabnine, Canva, Jasper AI, and Surfer SEO, are designed to help with specific tasks such as code understanding, content writing, and website optimization. This list is a great starting point for anyone looking to explore the possibilities of AI and how it can be applied to their business or project.
So let’s dive in.
ChatGPT is a large language model that generates human-like responses to a variety of prompts. It can be used for tasks such as language translation, question answering, and text completion. It can handle a wide range of topics and styles of writing, and generates coherent and fluent text, but should be used with care as it may generate text that is biased, offensive, or factually incorrect.
Overall, ChatGPT is a powerful tool for natural language processing, but it should be used with care and with an understanding of its limitations.
DALL-E is a generative model developed by OpenAI that is capable of generating images from text prompts. It is based on the GPT-3 architecture, which is a transformer-based neural network language model that has been trained on a massive dataset of text. DALL-E can generate images that are similar to a training dataset and it can generate high-resolution images that are suitable for commercial use.
Overall, DALL-E is a powerful AI-based tool for generating images. It can be used for a variety of applications such as creating images for commercial use, gaming, and other creative projects. It is important to note that the generated images should be reviewed and used with care, as they may not be entirely original and could be influenced by the training data.
Lumen5 is a content creation platform that uses AI to help users create videos, social media posts, and other types of content. It has several features that make it useful for content creation and marketing, including:
Overall, Lumen5 is a useful tool for creating content quickly and easily. It can help automate the process of creating videos, social media posts, and other types of content. However, the quality of the generated content may vary depending on the source material, and it is important to review and edit the content before publishing it.
Grammarly is a writing-enhancement platform that uses AI to check for grammar, punctuation, and spelling errors in the text. It also provides suggestions for improving the clarity, concision, and readability of the text. It has several features that make it useful for improving writing, including:
OpenAI Codex is a system developed by OpenAI that can create code from natural language descriptions of software tasks. The system is based on the GPT-3 model and can generate code in multiple programming languages.
Overall, OpenAI Codex is a powerful tool that can help automate the process of writing code and make it more accessible to non-technical people. However, the quality of the generated code may vary depending on the task description, and it is important to review and test the code before using it in a production environment. It is important to use the tool as an aid, not a replacement for the developer's knowledge.
Tabnine is a code completion tool that uses AI to predict and suggest code snippets. It is compatible with multiple programming languages and can be integrated with various code editors.
Overall, TabNine is a useful tool for developers that can help improve coding efficiency and reduce the time spent on writing code. However, it is important to review the suggestions provided by the tool and use them with caution, as they may not always be accurate or appropriate. It is important to use the tool as an aid, not a replacement for the developer's knowledge.
Jasper is a content writing and content generation tool that uses artificial intelligence to identify the best words and sentences for your writing style and medium in the most efficient, quick, and accessible way.
Surfer SEO is a software tool designed to help website owners and digital marketers improve their search engine optimization (SEO) efforts. The tool provides a variety of features that can be used to analyze a website's on-page SEO, including:
Overall, Surfer SEO can be a useful tool for website owners and digital marketers looking to improve their SEO efforts. However, it is important to remember that it is just a tool and should be used in conjunction with other SEO best practices. Additionally, the tool is not a guarantee of better ranking.
Zapier is a web automation tool that allows users to automate repetitive tasks by connecting different web applications together. It does this by creating "Zaps" that automatically move data between apps, and can also be used to trigger certain actions in one app based on events in another app.
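The trigger/action pattern a "Zap" embodies can be sketched in a few lines: an event in one app fires whatever actions have been registered against it. The registry, decorator, and sample event names below are illustrative, not Zapier's actual API.

```python
# A hypothetical trigger/action dispatcher in the spirit of a "Zap":
# events fired in one place run actions registered elsewhere.
actions = {}

def on(trigger):
    """Register a function to run whenever `trigger` fires."""
    def register(fn):
        actions.setdefault(trigger, []).append(fn)
        return fn
    return register

def fire(trigger, payload):
    """Fire an event and collect the results of every bound action."""
    return [fn(payload) for fn in actions.get(trigger, [])]

@on("new_form_response")
def add_spreadsheet_row(payload):
    # Stand-in for "append a row in a spreadsheet app".
    return f"row added: {payload['email']}"

results = fire("new_form_response", {"email": "ada@example.com"})
```

A real automation service adds authentication, retries, and connectors for each app, but the data flow is this same event-to-action relay.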
Overall, Zapier is a useful tool that can help users automate repetitive tasks and improve workflow. It can save time and increase productivity by connecting different web applications together. However, it may require some technical skills, and some features may require a paid subscription. It is important to use the tool with caution and to keep learning how the underlying apps work rather than relying on it entirely.
Compose AI is a company that specializes in developing natural language generation (NLG) software. Their software uses AI to automatically generate written or spoken text from structured data, such as spreadsheets, databases, or APIs.
Overall, Compose AI's NLG software can be a useful tool for automating the process of creating written or spoken content from structured data. However, the quality of the generated content may vary depending on the data source, and it is essential to review the generated content before using it in a production environment. It is important to use the tool as an aid, not a replacement for the understanding of the data.
AI tools are becoming increasingly important in today's business and technology landscape. They are designed to automate repetitive tasks, improve workflow, and increase productivity. The top 10 AI tools included in this article are some of the most advanced and widely used in the market, and are suitable for various applications. Whether you're looking to improve your natural language processing, create high-resolution images, or optimize your website, there is an AI tool that can help. It's important to research and evaluate the different tools available to determine which one is the best fit for your specific needs. As AI technology continues to evolve, these tools will become even more powerful and versatile and will play an even greater role in shaping the future of business and technology.
A powerful trillion-watt laser shot at the sky can generate lightning rods in the air that can guide lightning strikes to keep them from causing havoc, a new study finds.
To date, the most common and effective form of protection against lightning is the lightning rod invented by Benjamin Franklin in 1752. These pointed electrically conductive metal rods intercept lightning strikes and guide their electric current safely to the ground.
However, a key drawback of a conventional lightning rod is that the radius of its area of protection is roughly equal to its height. Since there are practical limits to how tall one can build a lightning rod, this means they may not prove useful at protecting large areas, including sensitive infrastructure such as airports, rocket launchpads and nuclear power plants, says study senior author Jean-Pierre Wolf, a physicist at the University of Geneva.
“This is the first demonstration that lightning can be controlled by a laser.”
—Jean-Pierre Wolf, University of Geneva
Scientists first suggested using lasers to generate lightning rods in the air nearly 50 years ago. “The idea is to create a very long lightning rod with the laser,” Wolf says.
In the new study, researchers conducted experiments during the summer of 2021 at the top of Mount Säntis, which, at 2,502 meters above sea level, is the highest mountain in the Alpstein massif of northeastern Switzerland. The laser was activated every time storms were forecast between June and September, with air traffic closed over the area during these tests.
Wolf and his colleagues sought to protect a 124-meter transmitter tower equipped with a traditional lightning rod at the summit belonging to telecommunications provider Swisscom. This tower is struck by lightning about 100 times a year, and scientists had previously equipped it with multiple sensors to analyze these strikes.
Near the tower, the researchers installed a near-infrared laser the size of a large car. It fired pulses each packing about a half-joule of energy and a picosecond (trillionth of a second) long roughly a thousand times a second, with a peak power of a terawatt (trillion watts). (It also shot a visible green beam to help show the laser’s path.)
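Those quoted figures are mutually consistent, as a quick back-of-the-envelope check shows: a half-joule pulse compressed into one picosecond gives a peak power on the order of a terawatt, while the average power drawn stays modest.

```python
# Arithmetic behind the quoted laser figures.
pulse_energy_j = 0.5        # about half a joule per pulse
pulse_duration_s = 1e-12    # one picosecond
rep_rate_hz = 1000          # roughly a thousand pulses per second

peak_power_w = pulse_energy_j / pulse_duration_s   # ~5e11 W, i.e. ~0.5 TW
average_power_w = pulse_energy_j * rep_rate_hz     # only 500 W on average
```

The huge gap between peak and average power is exactly why ultrashort pulses can ionize air without an impractical energy budget.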
“Imagine transporting a 10-ton laser to 2,500-meter altitude on a mountain with helicopters, making it run in very harsh conditions, tracking lightning in extreme weather like winds up to 200 kilometers per hour, heavy rain, hail, temperatures varying from -10 degrees to 20 degrees Celsius in the same day, and then, when it works, you get a massive lightning bolt some tens of meters next to you—and you’re so happy,” Wolf says.
The laser pulses can alter the refractive index of the air—the quality of a material that controls how quickly light travels within it. This can make the air behave like a series of lenses.
After crossing this lensing air, the intense, short laser pulses can rapidly ionize and heat air molecules, expelling them from the path of the beam at supersonic speeds. This leaves behind a channel of low-density air for roughly a millisecond. These “filaments” possess high electric conductivity and can thus serve as lightning rods; they can range up to 100 meters in length. The researchers could adjust the laser to create filaments that appear up to a kilometer from the machine.
In experiments, the scientists created filaments above, but near, the tip of the tower’s lightning rod. This essentially boosted the rod’s height by at least 30 meters, extending its area of protection so that lightning would not strike parts of the tower otherwise outside the rod’s shelter, says study lead author Aurélien Houard, a research scientist at the Superior National School of Advanced Techniques in Paris.
The laser operated for more than six hours during thunderstorms happening within three kilometers of the tower. The tower was hit by at least 16 lightning flashes, all of which streaked upward.
Four of these flashes occurred while the laser was operating. High-speed camera footage and radio and X-ray detectors showed the laser helped guide the course of these discharges. One of these guided strikes was recorded on camera, revealing that it followed the laser path for nearly 60 meters.
During tests carried out on the summit of Mt. Säntis by Jean-Pierre Wolf and Aurélien Houard’s team, the scientists noted that lightning discharges followed laser beams for several dozen meters before reaching the Swisscom telecommunications tower (in red and white).Xavier Ravinet/UNIGE
“This is the first demonstration that lightning can be controlled by a laser,” Wolf says.
Although lab experiments had suggested that lasers could help guide lightning strikes, previous attempts over the past 20 or so years had failed to do so in the field. Wolf, Houard, and their colleagues suggest their new work may have succeeded because the pulse rate of their laser was hundreds of times greater than in prior attempts. The more pulses are used, the greater the chance one might successfully intercept all of the activity leading up to a lightning flash. In addition, higher pulse rates are likely better at keeping filaments electrically conductive, they added.
Wolf noted their work is not geoengineering research. “We are not modifying the climate,” he says. “We deflect lightning to protect areas.”
In the long term, the scientists would like to use lasers to extend lightning rods by 500 meters. In addition, they would like to run experiments at sites such as airports and rocket launchpads, Wolf notes.
The researchers detailed their findings 16 January in the journal Nature Photonics.
This sponsored article is brought to you by COMSOL.
History teaches that the Industrial Revolution began in England in the mid-18th century. While that era of sooty foundries and mills is long past, manufacturing remains essential — and challenging. One promising way to meet modern industrial challenges is by using additive manufacturing (AM) processes, such as powder bed fusion and other emerging techniques. To fulfill its promise of rapid, precise, and customizable production, AM demands more than just a retooling of factory equipment; it also calls for new approaches to factory operation and management.
That is why Britain’s Manufacturing Technology Centre (MTC) has enhanced its in-house metal powder bed fusion AM facility with a simulation model and app to help factory staff make informed decisions about its operation. The app, built using the Application Builder in the COMSOL Multiphysics software, shows the potential for pairing a full-scale AM factory with a so-called “digital twin” of itself.
“The model helps predict how heat and humidity inside a powder bed fusion factory may affect product quality and worker safety,” says Adam Holloway, a technology manager within the MTC’s modeling team. “When combined with data feeds from our facility, the app helps us integrate predictive modeling into day-to-day decision-making.” The MTC project demonstrates the benefits of placing simulation directly into the hands of today’s industrial workforce and shows how simulation could help shape the future of manufacturing.
“We’re trying to present the findings of some very complex calculations in a simple-to-understand way. By creating an app from our model, we can empower staff to run predictive simulations on laptops during their daily shifts.”
—Adam Holloway, MTC Technology Manager
To help modern British factories keep pace with the world, the MTC promotes high-value manufacturing throughout the United Kingdom. The MTC is based in the historic English industrial city of Coventry (Figure 2), but its focus is solely on the future. That is why the team has committed significant human and technical resources to its National Centre for Additive Manufacturing (NCAM).
“Adopting AM is not just about installing new equipment. Our clients are also seeking help with implementing the digital infrastructure that supports AM factory operations,” says Holloway. “Along with enterprise software and data connectivity, we’re exploring how to embed simulation within their systems as well.”
The NCAM’s Digital Reconfigurable Additive Manufacturing for Aerospace (DRAMA) project provides a valuable venue for this exploration. Developed in concert with numerous manufacturers, the DRAMA initiative includes the new powder bed fusion AM facility mentioned previously. With that mini factory as DRAMA’s stage, Holloway and his fellow simulation specialists play important roles in making its production of AM aerospace components a success.
What makes a manufacturing process “additive”, and why are so many industries exploring AM methods? In the broadest sense, an additive process is one where objects are created by adding material layer by layer, rather than removing it or molding it. A reductive or subtractive process for producing a part may, for example, begin with a solid block of metal that is then cut, drilled, and ground into shape. An additive method for making the same part, by contrast, begins with empty space! Loose or soft material is then added to that space (under carefully controlled conditions) until it forms the desired shape. That pliable material must then be solidified into a durable finished part.
Different materials demand different methods for generating and solidifying additive forms. For example, common 3D printers sold to consumers produce objects by unspooling warm plastic filament, which bonds to itself and becomes harder as it cools. By contrast, the metal powder bed fusion process (Ref. 1) begins with, as its name suggests, a powdered metal which is then melted by applied heat and re-solidified when it cools. A part produced via the metal powder bed fusion process can be seen in Figure 3.
“The market opportunities for AM methods have been understood for a long time, but there have been many obstacles to large-scale adoption,” Holloway says. “Some of these obstacles can be overcome during the design phase of products and AM facilities. Other issues, such as the impact of environmental conditions on AM production, must be addressed while the facility is operating.”
For instance, maintaining careful control of heat and humidity is an essential task for the DRAMA team. “The metal powder used for the powder bed fusion process (Figure 4) is highly sensitive to external conditions,” says Holloway. “This means it can begin to oxidize and pick up ambient moisture even while it sits in storage, and those processes will continue as it moves through the facility. Exposure to heat and moisture will change how it flows, how it melts, how it picks up an electric charge, and how it solidifies,” he says. “All of these factors can affect the resulting quality of the parts you’re producing.”
Careless handling of powdered metal is not just a threat to product quality. It can threaten the health and safety of workers as well. “The metal powder used for AM processes is flammable and toxic, and as it dries out, it becomes even more flammable,” Holloway says. “We need to continuously measure and manage humidity levels, as well as how loose powder propagates throughout the facility.”
To maintain proper atmospheric conditions, a manufacturer could augment its factory’s ventilation with a full climate control system, but that could be prohibitively expensive. The NCAM estimated that it would cost nearly half a million English pounds to add climate control to its relatively modest facility. But what if they could adequately manage heat and humidity without adding such a complicated system?
Perhaps using multiphysics simulation for careful process management could provide a cost-effective alternative. “As part of the DRAMA program, we created a model of our facility using the computational fluid dynamics (CFD) capabilities of the COMSOL software. Our model (Figure 5) uses the finite element method to solve partial differential equations describing heat transfer and fluid flow across the air domain in our facility,” says Holloway. “This enabled us to study how environmental conditions would be affected by multiple variables, from the weather outside, to the number of machines operating, to the way machines were positioned inside the shop. A model that accounts for those variables helps factory staff adjust ventilation and production schedules to optimize conditions,” he explains.
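The underlying heat-transfer equation can be illustrated in miniature. The MTC model is a full 3D finite-element CFD simulation in COMSOL; the sketch below is only a toy one-dimensional finite-difference analogue of the same diffusion physics, with made-up grid, temperatures, and coefficient.

```python
# Toy explicit finite-difference solve of dT/dt = alpha * d2T/dx2,
# with the domain ends held at a fixed ambient temperature.
def diffuse(temps, alpha, dx, dt, steps):
    """Advance a 1D temperature profile `steps` time steps."""
    t = list(temps)
    for _ in range(steps):
        new = t[:]
        for i in range(1, len(t) - 1):
            new[i] = t[i] + alpha * dt / dx**2 * (t[i+1] - 2*t[i] + t[i-1])
        t = new
    return t

# A hot machine (40 C) in the middle of a 20 C room, on a coarse
# one-metre grid; coefficient chosen for numerical stability.
room = [20.0] * 11
room[5] = 40.0
result = diffuse(room, alpha=1e-3, dx=1.0, dt=1.0, steps=1000)
```

Even this toy version shows the behavior the factory app exposes: heat from an operating machine spreads toward ambient over time, and changing the boundary conditions changes the whole profile.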
The DRAMA team made their model more accessible by building a simulation app of it with the Application Builder in COMSOL Multiphysics (Figure 6). “We’re trying to present the findings of some very complex calculations in a simple-to-understand way,” Holloway explains. “By creating an app from our model, we can empower staff to run predictive simulations on laptops during their daily shifts.”
The app user can define relevant boundary conditions for the beginning of a factory shift and then make ongoing adjustments. Over the course of a shift, heat and humidity levels will inevitably fluctuate. Perhaps factory staff should alter the production schedule to maintain part quality, or maybe they just need to open doors and windows to improve ventilation. Users can change settings in the app to test the possible effects of actions like these. For example, Figure 8 presents isothermal surface plots that show the effect that opening the AM machines’ build chambers has on air temperature, while Figure 9 shows how airflow is affected by opening the facility doors.
While the current app is an important step forward, it does still require workers to manually input relevant data. Looking ahead, the DRAMA team envisions something more integral, and therefore, more powerful: a “digital twin” for its AM facility. A digital twin, as described by Ed Fontes in a 2019 post on the COMSOL Blog (Ref. 2), is “a dynamic, continuously updated representation of a real physical product, device, or process.” It is important to note that even the most detailed model of a system is not necessarily its digital twin.
“To make our factory environment model a digital twin, we’d first provide it with ongoing live data from the actual factory,” Holloway explains. “Once our factory model was running in the background, it could adjust its forecasts in response to its data feeds and suggest specific actions based on those forecasts.”
“We want to integrate our predictive model into a feedback loop that includes the actual factory and its staff. The goal is to have a holistic system that responds to current factory conditions, uses simulation to make predictions about future conditions, and seamlessly makes self-optimizing adjustments based on those predictions,” Holloway says. “Then we could truly say we’ve built a digital twin for our factory.”
As an intermediate step toward building a full factory-level digital twin, the DRAMA simulation app has already proven its worth. “Our manufacturing partners may already see how modeling can help with planning an AM facility, but not really understand how it can help with operation,” Holloway says. “We’re showing the value of enabling a line worker to open up the app, enter in a few readings or import sensor data, and then quickly get a meaningful forecast of how a batch of powder will behave that day.”
Beyond its practical insights for manufacturers, the overall project may offer a broader lesson as well: By pairing its production line with a dynamic simulation model, the DRAMA project has made the entire operation safer, more productive, and more efficient. The DRAMA team has achieved this by deploying the model where it can do the most good — into the hands of the people working on the factory floor.
Each January, the editors of IEEE Spectrum offer up some predictions about technical developments we expect to be in the news over the coming year. You’ll find a couple dozen of those described in the following special report. Of course, the number of things we could have written about is far higher, so we had to be selective in picking which projects to feature. And we’re not ashamed to admit, gee-whiz appeal often shaped our choices.
For example, this year’s survey includes an odd pair of new aircraft that will be taking to the skies. One, whose design was inspired by the giant airships of years past, is longer than a football field; the other, a futuristic single-seat vertical-takeoff craft powered by electricity, is about the length of a small car.
While some of the other stories might not light up your imagination as much, they highlight important technical issues the world faces—like the challenges of shifting from fossil fuels to a hydrogen-based energy economy or the threat that new plutonium breeder reactors in China might accelerate the proliferation of nuclear weapons. So whether you prefer reading about topics that are heavy or light (even lighter than air), you should find something here to get you warmed up for 2023.
This article appears in the January 2023 print issue.
Top Tech 2023: A Special Report
Preview exciting technical developments for the coming year.
Can This Company Dominate Green Hydrogen?
Fortescue will need more electricity-generating capacity than France.
Pathfinder 1 could herald a new era for zeppelins
A New Way to Speed Up Computing
Blue microLEDs bring optical fiber to the processor.
The Personal-Use eVTOL Is (Almost) Here
Opener’s BlackFly is a pulp-fiction fever dream with wings.
Baidu Will Make an Autonomous EV
Its partnership with Geely aims at full self-driving mode.
China Builds New Breeder Reactors
The power plants could also make weapons-grade plutonium.
Economics Drives a Ray-Gun Resurgence
Lasers should be cheap enough to use against drones.
A Cryptocurrency for the Masses or a Universal ID?
What Worldcoin’s killer app will be is not yet clear.
The company’s Condor chip will boast more than 1,000 qubits.
Vagus-nerve stimulation promises to help treat autoimmune disorders.
New satellites can connect directly to your phone.
The E.U.’s first exascale supercomputer will be built in Germany.
A dozen more tech milestones to watch for in 2023.
The technical challenge of missile defense has been compared with that of hitting a bullet with a bullet. Then there is the still tougher economic challenge of using an expensive interceptor to kill a cheaper target—like hitting a lead bullet with a golden one.
Maybe trouble and money could be saved by shooting down such targets with a laser. Once the system was designed, built, and paid for, the cost per shot would be low. Such considerations led planners at the Pentagon to seek a solution from Lockheed Martin, which has just delivered a 300-kilowatt laser to the U.S. Army. The new weapon combines the output of a large bundle of fiber lasers of varying frequencies to form a single beam of white light. This laser has been undergoing tests in the lab, and it should see its first field trials sometime in 2023. General Atomics, a military contractor in San Diego, is also developing a laser of this power for the Army based on what’s known as the distributed-gain design, which has a single aperture.
Both systems offer the prospect of being inexpensive to use. The electric bill itself would range “from US $5 to $10” for a pulse lasting a few seconds, says Michael Perry, the vice president in charge of laser systems for General Atomics.
Why are we getting ray guns only now, more than a century after H.G. Wells imagined them in his sci-fi novel The War of the Worlds? Put it down partly to the rising demand for cheap antimissile defense, but it’s mainly the result of technical advances in high-energy lasers.
The old standby for powerful lasers employed chemical reactions in flowing gas. That method was clumsy, heavy, and dangerous, and the laser itself became a flammable target for enemies to attack. The advantage was that these chemical lasers could be made immensely powerful, a far cry from the puny pulsed ruby lasers that wowed observers back in the 1960s by punching holes in razor blades (at power levels jocularly measured in “gillettes”).
“With lasers, if you can see it, you can kill it.” —Robert Afzal, Lockheed Martin
By 2014, fiber lasers had reached the point where they could be considered for weapons, and one 30-kW model was installed on the USS Ponce, where it demonstrated the ability to shoot down speedboats and small drones at relatively close range. The 300-kW fiber lasers being employed now in the two Army projects emit about 100 kW in optical power, enough to burn through much heftier targets (not to mention quite a few gillettes) at considerable distances.
“A laser of that class can be effective against a wide variety of targets, including cruise missiles, mortars, UAVs, and aircraft,” says Perry. “But not reentry vehicles [launched by ballistic missiles].” Those are the warheads, and to ward them off, he says, you’d probably have to hit the rocket when it’s still in the boost phase, which would mean placing your laser in orbit. Laser tech is still far from performing such a feat.
Even so, these futuristic weapons will no doubt find plenty of applications in today’s world. Israel made news in April by field-testing an airborne antimissile laser called Iron Beam, a play on the name Iron Dome, the missile system it has used to down rockets fired from Gaza. The laser system, reportedly rated at about 100 kW, is still not in service and hasn’t seen combat, but one day it may be able to replace some, if not all, of Iron Dome’s missiles with photons. Other countries have similar capabilities, or say they do. In May, Russia said it had used a laser to incinerate a Ukrainian drone from 5 kilometers away, a claim that Ukraine’s president, Volodymyr Zelenskyy, derided.
A missile is destroyed by a low-power, 2013 version of Lockheed Martin’s fiber laser www.youtube.com
Not all ray guns must be lasers, though. In March, Taiwan News reported that Chinese researchers had built a microwave weapon that in principle could be placed in orbit from where its 5-megawatt pulses could fry the electronic heart of an enemy satellite. But making such a machine in the lab is quite different from operating it in the field, not to mention in outer space, where supplying power and removing waste heat constitute major problems.
Because laser performance falls off in bad weather, these weapons can’t be relied on by themselves to defend critically important targets. They must instead be paired with kinetic weapons—missiles or bullets—to create a layered defense system.
“With lasers, if you can see it, you can kill it; typically rain and snow are not big deterrents,” says Robert Afzal, an expert on lasers at Lockheed Martin. “But a thundercloud—that’s hard.”
Afzal says that the higher up a laser is placed, the less interference it will face, but there is a trade-off. “With an airplane you have the least amount of resources—least volume, least weight—that is available to you. On a ship, you have a lot more resources available, but you’re in the maritime atmosphere, which is pretty hazy, so you may need a lot more power to get to the target. And the Army is in between: It deals with closer threats, like rockets and mortars, and they need a deep magazine, because they deal with a lot more targets.”
In every case, the point is to use expensive antimissile missiles only when you must. Israel opted to pursue laser weapons in part because its Iron Dome missiles cost so much more than the unguided, largely homemade rockets they defend against. Some of the military drones that Russia and Ukraine are now flying wouldn’t break the budget of the better-heeled sort of hobbyist. And it would be a Pyrrhic victory indeed to shoot them from the sky with projectiles so costly that you went broke.
This article appears in the January 2023 print issue as “Economics Drives a Ray-Gun Resurgence.”
Optical fiber has long since replaced copper wiring in core information networks. But that’s not the case for free-space optical (FSO) communications using optical lasers to transmit data through the air. Despite FSO having the potential to provide orders of magnitude more data capacity compared with that of the traditional radio-frequency communications space missions currently rely on, the technology has been stuck on the launch pad because of atmospheric interference that can absorb and scatter the signals, as well as the strict acquisition and tracking requirements for communicating between ground stations and orbiting satellites.
But now researchers from the International Centre for Radio Astronomy Research, in Western Australia, have developed a coherent FSO link operating at 1,550 nanometers across a turbulent atmosphere between an optical ground terminal and a retroreflector mounted on an airborne drone. Their findings were published this October in Scientific Reports.
“We’ve been able to maintain a robust single-mode fiber coupling resulting in an uninterrupted 100-gigabits-per-second optical-data link,” says Shane Walsh, team leader of the project. “We do this by tracking the drone at angular rates up to 1.5 degrees a second—the equivalent of tracking a satellite in low Earth orbit (LEO).”
With the greater data capacity of coherent communications and its compatibility with standard fiber optics, Walsh says the way is now open to developing terabits-per-second communications between LEO satellites and suitably equipped ground stations. “You can think of it as taking ground-to-space communications from dial-up speeds to superfast broadband speeds,” he adds.
“This multidisciplinary approach by the researchers and the test results are impressive,” says Alan Willner, a professor of electrical engineering specializing in optical communications at the University of Southern California. “They appear to have mitigated some of the key issues with free-space optical communications such as communicating through turbulence, and in pointing and tracking at speeds needed to communicate with low-orbiting satellites.”
Benjamin Dix-Matthews, who is researching the optics for the project, describes the setup used. A PlaneWave Instruments L-350 direct-drive mount is employed to enable tracking of the target. Attached to it is an optical breadboard housing the tracking and acquisition systems. These include a GPS module for initial tracking, a closed-loop machine-vision (MV) system that provides intermediate acquisition and tracking, and a tip-tilt adaptive optics (AO) system consisting of a 2-inch-diameter mirror connected to a commercial piezo tip/tilt platform.
“The tip/tilt AO system operates at 200 hertz,” says Dix-Matthews. “It plays a dual role of correcting beam wander of the outgoing beam to maintain pointing accuracy, and it also corrects the angle of arrival of the return beam to maintain fiber-coupling efficiency.”
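Purely as an illustration (the team’s actual controller is not described in this detail), the correction role can be sketched as a simple proportional loop running at the 200-hertz update rate, steering a mirror command toward a slowly wandering beam:

```python
import random

# Minimal sketch (not the researchers' actual controller): a proportional
# tip/tilt loop at 200 Hz that steers out slow beam wander. Units and the
# gain value are illustrative assumptions.
DT = 1 / 200          # loop period, s (200 Hz update rate)
GAIN = 0.5            # proportional gain (illustrative value)

def run_loop(steps=2000, seed=1):
    rng = random.Random(seed)
    wander = 0.0      # uncorrected beam-wander angle (arbitrary units)
    mirror = 0.0      # tip/tilt mirror command
    errors = []
    for _ in range(steps):
        wander += rng.gauss(0, 0.01)      # slow atmospheric drift
        error = wander - mirror           # residual seen on the sensor
        mirror += GAIN * error            # steer the mirror toward the beam
        errors.append(abs(error))
    return sum(errors[-500:]) / 500       # mean residual once settled

residual = run_loop()
print(f"mean residual error: {residual:.4f}")
```

The settled residual is far smaller than the accumulated drift would be without correction, which is the whole point of closing the loop faster than the atmosphere changes.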
Given the challenges “of tracking and dealing with turbulence, coupling light into a single-mode fiber is no easy matter,” says Willner. “That the researchers are able to do so is noteworthy.”
To test the technology, Walsh says they set up the terminal on the roof of the institute’s physics building, 34 meters above sea level. To simulate the angular motion of a LEO satellite, they employed a drone outfitted with a gimbal-mounted corner-cube retroreflector that returns the 1,550-nm signal, four green beacon LEDs for machine-vision tracking, and a camera for orienting the gimbal. Also included are a GPS receiver and a barometric altimeter to relay initial coordinates to the optical terminal.
The drone flew at an altitude of 120 meters at a line-of-sight distance of up to 700 meters for a laser-beam-folded link length of 1.4 kilometers. Initially, the gimbal was adjusted manually by the pilot so that the beacons were oriented toward the mount. At the next stage, the GPS module transmitted the drone’s position to the terminal computer, which enabled software to point the terminal at the target. With the target located, the MV loop closed and the mount-pointing was adjusted to track the drone beacons. The tip/tilt loop was then closed to provide fine-scale tracking. The MV and tip/tilt loops were run concurrently to maintain tracking and to correct for beam wander and wind buffeting.
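The staged hand-off described above (GPS coarse pointing, then machine-vision tracking, then the fine tip/tilt lock) can be sketched as a tiny state machine; the stage names and error thresholds here are hypothetical, not taken from the paper:

```python
# Illustrative sketch of the staged acquisition hand-off: GPS coarse
# pointing -> machine-vision (MV) tracking -> tip/tilt fine lock.
# Stage names and thresholds are hypothetical, not from the paper.
def next_stage(stage, pointing_error_deg):
    """Advance the hand-off when the residual error is small enough."""
    if stage == "GPS_POINTING" and pointing_error_deg < 1.0:
        return "MV_TRACKING"       # beacons visible: close the MV loop
    if stage == "MV_TRACKING" and pointing_error_deg < 0.05:
        return "TIP_TILT_LOCK"     # fine loop closes; MV keeps running
    return stage

stage = "GPS_POINTING"
for err in [5.0, 0.8, 0.3, 0.04, 0.02]:   # simulated shrinking error
    stage = next_stage(stage, err)
print(stage)
```

Note that, as the article says, the MV and tip/tilt loops then run concurrently: the fine lock does not replace coarse tracking, it sits on top of it.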
“We conducted some 30 test flights, flying the drone in passes simulating the tracking rates needed for free-space optical links to satellites in LEO,” says Walsh. “And in spite of atmospheric turbulence and macroscopic motion, we were able to sustain a 100-gigabit-per-second link.”
“There’s a good reason why space agencies and major corporations are interested in free-space optical communications,” says Willner. “We’re reaching the limits in what we can do, at least not without a lot of pain, in communicating using radio frequencies. So we will likely adopt new optical technologies. And I don’t see any obvious showstoppers to further advances using the researchers’ approach.”
The terminal needs further optimization, and the MV system will require changes for satellite use. Walsh says the next step is to test the technology with an aircraft flying at higher altitudes; after that, testing with a satellite would begin.
In addition, the researchers are developing a purpose-built optical-communications ground station that they believe will lead to the commercialization of the technology. To this end, they are working with three space-related companies in Australia, says Walsh. “So we anticipate receiving our first LEO downlink sometime in 2023, and hope to provide commercial high-data-rate coherent optical communications to and from space in the next few years.”
Are you looking for a way to create content that is both effective and efficient? If so, then you should consider using an AI content generator. AI content generators are a great way to create content that is both engaging and relevant to your audience.
There are a number of different AI content generator tools available on the market, and it can be difficult to know which one is right for you. To help you make the best decision, we have compiled a list of the top 10 AI content generator tools that you should use in 2022.
So, without further ado, let’s get started!
Boss Mode: $99/Month
This service works well for short-form business copy such as product descriptions, website copy, marketing copy, and sales reports.
Free Trial – 7 days with 24/7 email support and 100 runs per day.
Pro Plan: $49/month, or $420 billed yearly (i.e., $35 per month).
Wait! I've got a pretty sweet deal for you. Sign up through the link below, and you'll get (7,000 Free Words Plus 40% OFF) if you upgrade to the paid plan within four days.
Claim Your 7,000 Free Words With This Special Link - No Credit Card Required
Just like Outranking, Frase is an AI tool that helps you research, create, and optimize your content to make it high quality within seconds. Frase focuses on SEO optimization, shaping content to the liking of search engines by optimizing keywords.
Solo Plan: $14.99/Month and $12/Month if billed yearly with 4 Document Credits for 1 user seat.
Basic Plan: $44.99/month and $39.99/month if billed yearly with 30 Document Credits for 1 user seat.
Team Plan: $114.99/month and $99.99/month if billed yearly for unlimited document credits for 3 users.
*SEO Add-ons and other premium features for $35/month irrespective of the plan.
Article Forge is another content generator that operates quite differently from the others on this list. Unlike Jasper.ai, which requires you to provide a brief and some information on what you want it to write this tool only asks for a keyword. From there, it’ll generate a complete article for you.
What’s excellent about Article Forge is that they provide a 30-day money-back guarantee. You can choose between a monthly or yearly subscription. Unfortunately, while they offer a free trial, there is no free plan:
Basic Plan: $27/Month
This plan allows users to produce up to 25k words each month. This is excellent for smaller blogs or those who are just starting.
Standard Plan: $57/month
Unlimited Plan: $117/month
It’s important to note that Article Forge guarantees that all content generated through the platform passes Copyscape.
Rytr.me is a free AI content generator perfect for small businesses, bloggers, and students. The software is easy to use and can generate SEO-friendly blog posts, articles, and school papers in minutes.
Rytr offers a free plan that comes with limited features. It covers up to 5,000 characters generated each month and has access to the built-in plagiarism checker. If you want to use all the features of the software, you can purchase one of the following plans:
Saver Plan: $9/month, $90/year
Writesonic is a free, easy-to-use AI content generator. The software is designed to help you create copy for marketing content, websites, and blogs. It's also helpful for small businesses or solopreneurs who need to produce content on a budget.
Writesonic is free with limited features. The free plan is more like a free trial, providing ten credits. After that, you’d need to upgrade to a paid plan. Here are your options:
CopySmith is an AI content generator that can be used to create personal and professional documents, blogs, and presentations. It offers a wide range of features including the ability to easily create documents and presentations.
CopySmith also has several templates that you can use to get started quickly.
CopySmith offers a free trial with no credit card required. After the free trial, the paid plans are as follows:
Starter Plan: $19/month
Hypotenuse.ai is a free online tool that can help you create AI content. It's great for beginners because it allows you to create videos, articles, and infographics with ease. The software has a simple and easy-to-use interface that makes it perfect for new people looking for AI content generation.
Hypotenuse doesn’t offer a free plan. Instead, it offers a free trial period where you can take the software for a run before deciding whether it’s the right choice for you or not. Other than that, here are its paid options:
Starter Plan: $29/month
Growth Plan: $59/month
Enterprise – pricing is custom, so don’t hesitate to contact the company for more information.
Kafkai comes with a free trial to help you understand whether it’s the right choice for you or not. Additionally, you can also take a look at its paid plans:
Writer Plan: $29/month – Create 100 articles per month ($0.29/article).
Newsroom Plan: $49/month – Generate 250 articles a month ($0.20/article).
Printing Press Plan: $129/month – Create up to 1,000 articles a month (roughly $0.13/article).
Industrial Printer Plan: $199/month – Generate 2,500 articles each month ($0.08/article).
Peppertype.ai is an online AI content generator that’s easy to use and best for small business owners looking for a powerful copy and content writing tool to help them craft and generate various content for many purposes.
Unfortunately, Peppertype.ai isn’t free. However, it does have a free trial to try out the software before deciding whether it’s the right choice for you. Here are its paid plans:
Team Plan: $199/month
Enterprise – pricing is custom, so please contact the company for more information.
It is no longer a secret that humans are getting overwhelmed with the daily task of creating content. Our lives are busy, and writing blog posts, video scripts, or other types of content is not our day job. In comparison, AI writers are not only cheaper to hire but also perform tasks at a high level of excellence. This article explored 10 writing tools that use AI to create better content. Choose the one that meets your requirements and budget; in my opinion, Jasper AI is one of the best tools for producing high-quality content.
If you have any questions, ask in the comments section.
Note: Don’t post links in your comments.
Note: This article contains affiliate links which means we make a small commission if you buy any premium plan from our link.
The marketing industry is turning to artificial intelligence (AI) as a way to save time and execute smarter, more personalized campaigns. 61% of marketers say AI software is the most important aspect of their data strategy.
If you’re late to the AI party, don’t worry. It’s easier than you think to start leveraging artificial intelligence tools in your marketing strategy. Here are 11 AI marketing tools every marketer should start using today.
Personalize is an AI-powered technology that helps you identify and produce highly targeted sales and marketing campaigns by tracking the products and services your contacts are most interested in at any given time. The platform uses an algorithm to identify each contact’s top three interests, which are updated in real-time based on recent site activity.
Seventh Sense provides behavioral analytics that help you win attention in your customers’ overcrowded email inboxes. Choosing the best day and time to send an email is always a gamble. And while some days of the week generally get higher open rates than others, you’ll never be able to nail down a time that’s best for every customer. Seventh Sense eases the stress of figuring out the perfect send time and day for your email campaigns. The AI-based platform figures out the best timing and email frequency for each contact based on when they’re opening emails. The tool is primarily geared toward HubSpot and Marketo customers.
Phrasee uses artificial intelligence to help you write more effective subject lines. With its AI-based Natural Language Generation system, Phrasee uses data-driven insights to generate millions of natural-sounding copy variants that match your brand voice. The model is end-to-end, meaning when you feed the results back to Phrasee, the prediction model rebuilds so it can continuously learn from your audience.
HubSpot Search Engine Optimization (SEO) is an integral tool for the Human Content team. It uses machine learning to determine how search engines understand and categorize your content. HubSpot SEO helps you improve your search engine rankings and outrank your competitors. Search engines reward websites that organize their content around core subjects, or topic clusters. HubSpot SEO helps you discover and rank for the topics that matter to your business and customers.
When you’re limited to testing two variables against each other at a time, it can take months to get the results you’re looking for. Evolv AI lets you test all your ideas at once. It uses advanced algorithms to identify the top-performing concepts, combine them with each other, and repeat the process to achieve the best site experience.
Acrolinx is a content alignment platform that helps brands scale and improve the quality of their content. It’s geared toward enterprises – its major customers include big brands like Google, Adobe, and Amazon – to help them scale their writing efforts. Instead of spending time chasing down and fixing typos in multiple places throughout an article or blog post, you can use Acrolinx to do it all in one place. You start by setting your preferences for style, grammar, tone of voice, and company-specific word usage. Then, Acrolinx checks and scores your existing content to find what’s working and suggests areas for improvement. The platform provides real-time guidance and suggestions to make writing better and strengthen weak pages.
MarketMuse uses an algorithm to help marketers build content strategies. The tool shows you where to target keywords to rank in specific topic categories, and recommends keywords you should go after if you want to own particular topics. It also identifies gaps and opportunities for new content and prioritizes them by their probable impact on your rankings. The algorithm compares your content with thousands of articles related to the same topic to uncover what’s missing from your site.
Copilot is a suite of tools that help eCommerce businesses maintain real-time communication with customers around the clock at every stage of the funnel. Promote products, recover shopping carts and send updates or reminders directly through Messenger.
Yotpo’s deep learning technology evaluates your customers’ product reviews to help you make better business decisions. It identifies key topics that customers mention related to your products—and their feelings toward them. The AI engine extracts relevant reviews from past buyers and presents them in smart displays to convert new shoppers. Yotpo also saves you time moderating reviews. The AI-powered moderation tool automatically assigns a score to each review and flags reviews with negative sentiment so you can focus on quality control instead of manually reviewing every post.
Albert is self-learning software that automates the creation of marketing campaigns for your brand. It analyzes vast amounts of data to run optimized campaigns autonomously, allowing you to feed in your own creative content and target markets, then uses data from its database to determine the key characteristics of a serious buyer. Albert identifies potential customers that match those traits and runs trial campaigns on a small group of customers (with results refined by Albert itself) before launching at a larger scale.
Albert plugs into your existing marketing technology stack, so you still have access to your accounts, ads, search, social media, and more. Albert maps tracking and attribution to your source of truth so you can determine which channels are driving your business.
There are many tools and companies out there that offer AI tools, but this is a small list of resources that we have found to be helpful. If you have any other suggestions, feel free to share them in the comments below this article. As marketing evolves at such a rapid pace, new marketing strategies will be invented that we haven't even dreamed of yet. But for now, this list should give you a good starting point on your way to implementing AI into your marketing mix.
Note: This article contains affiliate links, meaning we make a small commission if you buy any premium plan from our link.
Imagine a world in which you can do transactions and many other things without having to give your personal information. A world in which you don’t need to rely on banks or governments anymore. Sounds amazing, right? That’s exactly what blockchain technology allows us to do.
Think of it like your computer’s hard drive: blockchain is a technology that lets you store data in digital blocks, which are connected together like links in a chain.
Blockchain technology was originally invented in 1991 by two researchers, Stuart Haber and W. Scott Stornetta. They first proposed the system to ensure that timestamps could not be tampered with.
A few years later, in 1998, software developer Nick Szabo proposed using a similar kind of technology to secure a digital payments system he called “Bit Gold.” However, this innovation was not adopted until Satoshi Nakamoto claimed to have invented the first Blockchain and Bitcoin.
A blockchain is a distributed database shared between the nodes of a computer network. It saves information in digital format. Many people first heard of blockchain technology when they started to look up information about bitcoin.
Blockchain is used in cryptocurrency systems to ensure secure, decentralized records of transactions.
Blockchain allowed people to guarantee the fidelity and security of a record of data without the need for a third party to ensure accuracy.
To understand how a blockchain works, consider these basic steps:
Let’s get to know more about the blockchain.
Blockchain records digital information and distributes it across the network without changing it. The information is distributed among many users and stored in an immutable, permanent ledger that can't be changed or destroyed. That's why blockchain is also called "Distributed Ledger Technology" or DLT.
Here’s how it works:
And that’s the beauty of it! The process may seem complicated, but it’s done in minutes with modern technology. And because technology is advancing rapidly, I expect things to move even more quickly than ever.
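As a minimal sketch of the underlying idea (a toy, not any production implementation), here is a hash-chained ledger in Python, where each block records the hash of the block before it:

```python
import hashlib
import json

# Toy hash-chained ledger: each block stores the hash of the previous
# block, so the records form a tamper-evident chain. The transaction
# strings are made-up examples.
def make_block(prev_hash, data, timestamp):
    block = {"prev_hash": prev_hash, "timestamp": timestamp, "data": data}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Start the chain with a "genesis" block, then append transactions.
chain = [make_block("0" * 64, "genesis", 0)]
for i, tx in enumerate(["Alice pays Bob 5", "Bob pays Carol 2"], start=1):
    chain.append(make_block(chain[-1]["hash"], tx, i))

# Every block's prev_hash points at the block before it.
print(len(chain), chain[-1]["prev_hash"] == chain[-2]["hash"])
```

Because each hash covers the previous block’s hash as well as the data and timestamp, changing any old record would change its hash and break every link after it.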
Even though blockchain is integral to cryptocurrency, it has other applications. For example, blockchain can be used for storing reliable data about transactions. Many people confuse blockchain with cryptocurrencies like bitcoin and ethereum.
Blockchain is already being adopted by some big-name companies, such as Walmart, AIG, Siemens, Pfizer, and Unilever. For example, IBM’s Food Trust uses blockchain to track food’s journey before it reaches its final destination.
Although some of you may consider this practice excessive, food suppliers and manufacturers adhere to the policy of tracing their products because bacteria such as E. coli and Salmonella have been found in packaged foods. In addition, there have been isolated cases where dangerous allergens such as peanuts have accidentally been introduced into certain products.
Tracing and identifying the sources of an outbreak is a challenging task that can take months or years. Thanks to the Blockchain, however, companies now know exactly where their food has been—so they can trace its location and prevent future outbreaks.
Blockchain technology allows systems to react much faster in the event of a hazard. It also has many other uses in the modern world.
Blockchain technology is secure even though it is public: anyone with an internet connection can access it.
Have you ever been in a situation where you had all your data stored at one place and that one secure place got compromised? Wouldn't it be great if there was a way to prevent your data from leaking out even when the security of your storage systems is compromised?
Blockchain technology provides a way of avoiding this situation by using multiple computers at different locations to store information about transactions. If one computer experiences problems with a transaction, it will not affect the other nodes.
Instead, other nodes will use the correct information to cross-reference your incorrect node. This is called “Decentralization,” meaning all the information is stored in multiple places.
Blockchain guarantees your data's authenticity—not just its accuracy, but also its irreversibility. It can also be used to store data that are difficult to register, like legal contracts, state identifications, or a company's product inventory.
Blockchain has many advantages and disadvantages.
I’ll answer the most frequently asked questions about blockchain in this section.
Blockchain is not a cryptocurrency but a technology that makes cryptocurrencies possible. It's a digital ledger that records every transaction seamlessly.
Yes, a blockchain can theoretically be hacked, but doing so is extremely difficult. A network of users constantly reviews it, which makes tampering with the blockchain hard to pull off.
Coinbase Global is currently the biggest blockchain company in the world. The company runs a commendable infrastructure, services, and technology for the digital currency economy.
Blockchain is a decentralized technology. It’s a chain of distributed ledgers connected with nodes, and each node can be any electronic device. Thus, no one owns the blockchain.
Bitcoin is a cryptocurrency powered by blockchain technology, while blockchain is the distributed ledger that records cryptocurrency transactions.
Generally, a database is a collection of data that can be stored and organized using a database management system. People who have access to the database can view or edit the information stored there, and databases are typically implemented on a client-server network architecture. A blockchain, by contrast, is a growing list of records, called blocks, stored in a distributed system. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction information. Modification of data is not allowed by the design of the blockchain; the technology enables decentralized control and eliminates the risk of data modification by other parties.
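Why modification is effectively “not allowed” can be illustrated with a toy validator (again a sketch, not a real blockchain client): editing any historical block breaks every hash link after it, so tampering is detectable.

```python
import hashlib
import json

# Toy validator: recompute each block's hash and check that the next
# block's prev_hash still matches. Block contents are made-up examples.
def block_hash(block):
    payload = {k: block[k] for k in ("prev_hash", "timestamp", "data")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def is_valid(chain):
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != block_hash(prev):
            return False
    return True

chain = [{"prev_hash": "0" * 64, "timestamp": 0, "data": "genesis"}]
chain.append({"prev_hash": block_hash(chain[0]), "timestamp": 1,
              "data": "Alice pays Bob 5"})
assert is_valid(chain)

chain[0]["data"] = "Alice pays Mallory 500"   # tamper with history
print(is_valid(chain))                        # the broken link is detected
```

In a real network, every node runs checks like this independently, which is why a tampered copy of the ledger is simply rejected by the rest of the network.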
Blockchain has a wide spectrum of applications, and over the next 5-10 years we will likely see it being integrated into all sorts of industries. From finance to healthcare, blockchain could revolutionize the way we store and share data. Although there is some hesitation to adopt blockchain systems right now, that won’t be the case for long: once people become more comfortable with the technology and understand how it can work for them, owners, CEOs, and entrepreneurs alike will be quick to leverage blockchain technology for their own gain. Hope you like this article; if you have any questions, let me know in the comments section.
Grammarly is a tool that checks for grammatical errors, spelling, and punctuation. It gives you comprehensive feedback on your writing, and you can use it to proofread and edit articles, blog posts, emails, etc.
Grammarly detects all types of mistakes, including sentence-structure issues and misused words, and gives you suggestions on style, punctuation, spelling, and grammar in real time. The free version covers the basics, like identifying grammar and spelling mistakes,
whereas the Premium version offers a lot more functionality: it detects plagiarism in your content, suggests word choices, and adds fluency to your writing.
ProWritingAid is a style and grammar checker for content creators and writers. It helps to optimize word choice, punctuation errors, and common grammar mistakes, providing detailed reports to help you improve your writing.
ProWritingAid can be used as an add-on to WordPress, Gmail, and Google Docs. The software also offers helpful articles, videos, quizzes, and explanations to help improve your writing.
Here are some key features of ProWriting Aid:
Grammarly and ProWritingAid are well-known grammar-checking software. However, if you're like most people who can't decide which to use, here are some different points that may be helpful in your decision.
As both writing assistants are great in their own way, you need to choose the one that suits you best.
Both ProWritingAid and Grammarly are awesome writing tools, without a doubt. But in my experience, Grammarly is the winner here because of how it helps you review and edit your content: Grammarly highlights all the mistakes in your writing within seconds of copying and pasting the content into its editor or using the software’s native feature in other text editors.
Not only does it identify tiny grammatical and spelling errors, it tells you when you’ve overlooked punctuation where it’s needed. And, beyond its plagiarism-checking capabilities, Grammarly helps you proofread your content. Even better, the software offers a free plan that gives you access to some of its features.
Are you searching for an eCommerce platform to help you build an online store and sell products?
In this Sellfy review, we'll talk about how this eCommerce platform can let you sell digital products while keeping full control of your marketing.
And the best part? Starting your business can be done in just five minutes.
Let us then talk about the Sellfy platform and all the benefits it can bring to your business.
Sellfy is an eCommerce solution that allows digital content creators, including writers, illustrators, designers, musicians, and filmmakers, to sell their products online. Sellfy provides a customizable storefront where users can display their digital products and embed "Buy Now" buttons on their website or blog. Sellfy product pages enable users to showcase their products from different angles with multiple images and previews from Soundcloud, Vimeo, and YouTube. Files of up to 2GB can be uploaded to Sellfy, and the company offers unlimited bandwidth and secure file storage. Users can also embed their entire store or individual project widgets in their site, with the ability to preview how widgets will appear before they are displayed.
Sellfy is a powerful e-commerce platform that helps you personalize your online storefront. You can add your logo, change colors, revise navigation, and edit the layout of your store. Sellfy also allows you to create a full shopping cart so customers can purchase multiple items. And Sellfy gives you the ability to set your language or let customers see a translated version of your store based on their location.
Sellfy gives you the option to host your store directly on its platform, add a custom domain to your store, and use it as an embedded storefront on your website. Sellfy also optimizes its store offerings for mobile devices, allowing for a seamless checkout experience.
Sellfy allows creators to host all their products and sell all of their digital products on one platform. Sellfy also does not place storage limits on your store but recommends that files be no larger than 5GB. Creators can sell both standard and subscription-based products in any file format that is supported by the online marketplace. Customers can access their products instantly after making a purchase – there is no waiting period.
You can organize your store by creating your product categories, sorting by any characteristic you choose. Your title, description, and the image will be included on each product page. In this way, customers can immediately evaluate all of your products. You can offer different pricing options for all of your products, including "pay what you want," in which the price is entirely up to the customer. This option allows you to give customers control over the cost of individual items (without a minimum price) or to set pricing minimums—a good option if you're in a competitive market or when you have higher-end products. You can also offer set prices per product as well as free products to help build your store's popularity.
Sellfy is ideal for selling digital content, such as ebooks. But it does not allow you to sell copyrighted material that you don’t have the rights to distribute.
Sellfy offers several ways to share your store, enabling you to promote your business on different platforms. Sellfy lets you integrate it with your existing website using "buy now" buttons, embed your entire storefront, or embed certain products so you can reach more people. Sellfy also enables you to connect with your Facebook page and YouTube channel, maximizing your visibility.
Sellfy is a simple online platform that allows customers to buy your products directly through your store. Sellfy has two payment processing options: PayPal and Stripe. You will receive instant payments with both of these processors, and your customer data is protected by Sellfy's secure (PCI-compliant) payment security measures. In addition to payment security, Sellfy provides anti-fraud tools to help protect your products including PDF stamping, unique download links, and limited download attempts.
The Sellfy platform includes marketing and analytics tools to help you manage your online store. You can send email product updates and collect newsletter subscribers through the platform. With Sellfy, you can also offer discount codes and product upsells, as well as create and track Facebook and Twitter ads for your store. The software's analytics dashboard will help you track your best-performing products, generated revenue, traffic channels, top locations, and overall store performance.
To expand functionality and make your e-commerce store run more efficiently, Sellfy offers several integrations. Google Analytics and Webhooks, as well as integrations with Patreon and Facebook Live Chat, are just a few of the options available. Sellfy allows you to connect to Zapier, which gives you access to hundreds of third-party apps, including tools like Mailchimp, Trello, Salesforce, and more.
The free plan comes with:
Starter plan comes with:
The business plan comes with:
The premium plan comes with:
Sellfy has its benefits and downsides, but fortunately, the pros outweigh the cons.
In this article, we have taken a look at some of the biggest benefits of using Sellfy for eCommerce. Once you compare these benefits with what you get from other platforms such as Shopify, you should find it worth your time to consider Sellfy for your business. This article should answer most of your questions, but if you still have some, let me know in the comments section below; I will be happy to answer them.
Note: This article contains affiliate links, which means we make a small commission if you buy a Sellfy premium plan from our link.
Are you looking for a new graphic design tool? Would you like to read a detailed review of Canva? It’s one of the tools I love using; I’m also writing my first ebook in Canva and will publish it soon on my site, where you can download it for free. Let’s start the review.
Canva has a web version and a mobile app.
Canva is a free graphic design web application that allows you to create invitations, business cards, flyers, lesson plans, banners, and more using professionally designed templates. You can upload your own photos from your computer or from Google Drive, and add them to Canva's templates using a simple drag-and-drop interface. It's like having a basic version of Photoshop that doesn't require graphic design knowledge to use. It's best for non-designers.
Canva is a great tool for small business owners, online entrepreneurs, and marketers who don't have much time and want to create graphics quickly.
To create sophisticated graphics, a tool such as Photoshop is ideal. To use it, you'll need to learn its hundreds of features and get familiar with the software, and it's best to have a good background in design, too.
You also need a high-end computer to run the latest version of Photoshop.
This is where Canva comes in: with Canva you can do all of that with a drag-and-drop interface. It's also easier to use, and free, with a more affordable paid version available as well.
The product is available in three plans: Free, Pro ($12.99/month per user or $119.99/year for up to 5 people), and Enterprise ($30 per user per month, minimum 25 people).
To get started on Canva, you will need to create an account by providing your email address, Google, Facebook or Apple credentials. You will then choose your account type between student, teacher, small business, large company, non-profit, or personal. Based on your choice of account type, templates will be recommended to you.
You can sign up for a free trial of Canva Pro, or you can start with the free version to get a sense of whether it’s the right graphic design tool for your needs.
When you sign up for an account, Canva will suggest different post types to choose from. Based on the type of account you set up, you'll see templates grouped into categories such as social media posts, documents, presentations, marketing, events, ads, launching your business, and building your online brand.
Start by choosing a template for your post or searching for something more specific. Search by social network name to see a list of post types on each network.
Next, you can choose a template. Choose from hundreds of templates that are ready to go, with customizable photos, text, and other elements.
You can start your design by choosing from a variety of ready-made templates, searching for a template matching your needs, or working with a blank template.
Inside the Canva designer, the Elements tab gives you access to lines and shapes, graphics, photos, videos, audio, charts, photo frames, and photo grids. The search box on the Elements tab lets you search everything on Canva.
To begin with, Canva has a large library of elements to choose from. To find them, be specific in your search query. You may also want to search in the following tabs to see various elements separately:
The Photos tab lets you search for and choose from millions of professional stock photos for your templates.
You can replace the photos in the templates to create a new look. This can also make the template more suited to your industry.
You can find photos on other stock photography sites like Pexels, Pixabay, and many more, or simply upload your own photos.
When you choose an image, Canva’s photo editing features let you adjust the photo’s settings (brightness, contrast, saturation, etc.), crop, or animate it.
When you subscribe to Canva Pro, you get access to a number of premium features, including the Background Remover. This feature allows you to remove the background from any stock photo in the library or any image you upload.
The Text tab lets you add headings, normal text, and graphical text to your design.
When you click on text, you'll see options to adjust the font, font size, color, format, spacing, and text effects (like shadows).
Canva Pro subscribers can choose from a large library of fonts on the Brand Kit or the Styles tab. Enterprise-level controls ensure that visual content remains on-brand, no matter how many people are working on it.
Create an animated image or video by adding audio to capture users' attention in social news feeds.
If you want to use audio from another stock site or your own audio tracks, you can upload them in the Uploads tab or from the More option.
Want to create your own videos? Choose from thousands of stock video clips. You'll find videos that range up to two minutes in length.
You can upload your own videos as well as videos from other stock sites in the Uploads tab.
Once you have chosen a video, you can use the editing features in Canva to trim the video, flip it, and adjust its transparency.
On the Background tab, you’ll find free stock photos to serve as backgrounds on your designs. Change out the background on a template to give it a more personal touch.
The Styles tab lets you quickly change the look and feel of your template with just a click. And if you have a Canva Pro subscription, you can upload your brand’s custom colors and fonts to ensure designs stay on brand.
If you have a Canva Pro subscription, you’ll have a Logos tab. Here, you can upload variations of your brand logo to use throughout your designs.
With Canva, you can also create your own logos. Note that you cannot trademark a logo with stock content in it.
With Canva, free users can download and share designs to multiple platforms including Instagram, Facebook, Twitter, LinkedIn, Pinterest, Slack and Tumblr.
Canva Pro subscribers can create multiple post formats from one design. For example, you can start by designing an Instagram post, and Canva's Magic Resizer can resize it for other networks, Stories, Reels, and other formats.
Canva Pro subscribers can also use Canva’s Content Planner to post content on eight different accounts on Instagram, Facebook, Twitter, LinkedIn, Pinterest, Slack, and Tumblr.
Canva Pro allows you to work with your team on visual content. Designs can be created inside Canva, and then sent to your team members for approval. Everyone can make comments, edits, revisions, and keep track via the version history.
When it comes to printing your designs, Canva has you covered. With an extensive selection of printing options, they can turn your designs into anything from banners and wall art to mugs and t-shirts.
Canva Print is perfect for any business seeking to make a lasting impression. Create inspiring designs people will want to wear, keep, and share. Hand out custom business cards that leave a lasting impression on customers' minds.
The Canva app is available on the Apple App Store and Google Play. The Canva app has earned a 4.9 out of five star rating from over 946.3K Apple users and a 4.5 out of five star rating from over 6,996,708 Google users.
In addition to mobile apps, you can use Canva’s integration with other Internet services to add images and text from sources like Google Maps, Emojis, photos from Google Drive and Dropbox, YouTube videos, Flickr photos, Bitmojis, and other popular visual content elements.
In general, Canva is an excellent tool for those who need simple images for projects. If you are a graphic designer with experience, you will find Canva’s platform lacking in customization and advanced features – particularly vectors. But if you have little design experience, you will find Canva easier to use than advanced graphic design tools like Adobe Photoshop or Illustrator for most projects. If you have any queries let me know in the comments section.
If you are looking for the best WordPress plugins, then you are in the right place. Here is a list of the best WordPress plugins you should use on your blog to boost SEO, strengthen your security, and understand every aspect of your blog. Creating good content is one factor, but many WordPress plugins perform different actions that add to your success. So let's start.
For users who are serious about SEO, Yoast SEO will do the work to reach their goals. All they need to do is select a keyword, and the plugin will then optimize the page for it.
Yoast offers many popular SEO WordPress plugin functions. It gives you real-time page analysis to optimize your content, images, meta descriptions, titles, and keywords. Yoast also checks the length of your sentences and paragraphs, whether you’re using enough transition words or subheadings, how often you use passive voice, and so on. Yoast tells Google whether or not to index a page or a set of pages too.
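Under the hood, "telling Google whether or not to index a page" comes down to a robots meta tag in the page's HTML. As a rough illustration (not Yoast's actual code), here is how such a tag can be built:

```python
# Toy sketch of the robots meta tag an SEO plugin like Yoast emits.
# The function and defaults here are illustrative assumptions.
def robots_meta(index: bool = True, follow: bool = True) -> str:
    """Build a robots meta tag telling crawlers how to treat a page."""
    directives = [
        "index" if index else "noindex",
        "follow" if follow else "nofollow",
    ]
    return f'<meta name="robots" content="{", ".join(directives)}" />'

# A page you want kept out of search results, but whose links may be crawled:
print(robots_meta(index=False))  # <meta name="robots" content="noindex, follow" />
```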
A website running WordPress can put a lot of strain on a server, which increases the chances that the website will crash and harm your business. To avoid such an unfortunate situation and ensure that all your pages load quickly, you need a caching plugin like WP Rocket.
The WP Rocket plugin is designed to increase your website speed. Instead of waiting for pages to be saved to the cache, WP Rocket turns on desired caching settings, like page caching and gzip compression. The plugin also activates other features, such as CDN support and lazy image loading, to enhance your site speed.
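To see why gzip compression (one of the features WP Rocket enables) matters, here is a small standalone sketch, not WP Rocket's own code, showing how much a typical HTML payload shrinks before it travels over the network:

```python
import gzip

def compress_page(html: str) -> bytes:
    """Return the gzip-compressed bytes of an HTML page."""
    return gzip.compress(html.encode("utf-8"))

# HTML is repetitive (tags, class names), so it compresses very well.
page = "<html><body>" + "<p>Repeated content compresses well.</p>" * 200 + "</body></html>"
compressed = compress_page(page)

# The compressed payload is a small fraction of the original size.
print(len(page), len(compressed))
```

The browser transparently decompresses the response, so visitors get the same page with far fewer bytes transferred.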
Wordfence Security is a WordPress firewall and security scanner that keeps your site safe from malicious hackers, spam, and other online threats. This plugin comes with a web application firewall (WAF) called the Threat Defense Feed that helps prevent brute-force attacks by ensuring you set stronger passwords and limiting login attempts. It searches for malware and compares code, theme, and plugin files against the records in the WordPress.org repository to verify their integrity, reporting any changes to you.
Wordfence security scanner provides you with actionable insights into your website's security status and will alert you to any potential threats, keeping it safe and secure. It also includes login security features that let you activate reCAPTCHA and two-factor authentication for your website.
Akismet can help prevent spam from appearing on your site. Every day, it automatically checks every comment against a global database of spam to block malicious content. With Akismet, you also won’t have to worry about innocent comments being caught by the filter or false positives. You can simply tell Akismet about those and it will get better over time. It also checks your contact form submissions against its global spam database and weeds out unnecessary fake information.
Contact Form 7 is a plug-in that allows you to create contact forms that make it easy for your users to send messages to your site. The plug-in was developed by Takayuki Miyoshi and lets you create multiple contact forms on the same site; it also integrates Akismet spam filtering and lets you customize the styling and fields that you want to use in the form. The plug-in provides CAPTCHA and Ajax submitting.
When you’re looking for an easy way to manage your Google Analytics-related web tracking services, MonsterInsights can help. You can add, customize, and integrate Google Analytics data with ease, so you’ll be able to see how every webpage performs, which online campaigns bring in the most traffic, and which content readers engage with the most. It presents the same data you'd find in Google Analytics.
It is a powerful tool to keep track of your traffic stats. With it, you can view stats for your active sessions, conversions, and bounce rates. You’ll also be able to see your total revenue, the products you sell, and how your site is performing when it comes to referrals.
MonsterInsights offers a free plan that includes basic Google Analytics integration, data insights, and user activity metrics.
Pretty Links is a powerful WordPress plugin that enables you to easily cloak affiliate links on your website. It even allows you to easily redirect visitors based on a specific request, including permanent 301 and temporary 302/307 redirects.
Pretty Links also helps you automatically shorten the URLs for your posts and pages.
You can also enable the auto-linking feature to automatically add affiliate links to certain keywords.
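Conceptually, link cloaking is just a lookup table from a short "pretty" path to a long affiliate URL plus a redirect status code. This toy sketch (not Pretty Links' actual code; the slugs and URLs are made up) shows the idea:

```python
# Map of cloaked paths to (destination URL, HTTP redirect status).
# 301 = permanent redirect, 302 = temporary redirect.
REDIRECTS = {
    "/go/sellfy": ("https://example.com/affiliate?id=123", 301),
    "/go/deal":   ("https://example.com/sale?ref=456", 302),
}

def resolve(path: str):
    """Return (destination, status) for a cloaked path, or (None, 404)."""
    return REDIRECTS.get(path, (None, 404))

print(resolve("/go/sellfy"))  # ('https://example.com/affiliate?id=123', 301)
```

A real plugin does the same mapping server-side and sends the status code in the HTTP response, so visitors only ever see the clean short link.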
We hope you’ve found this article useful. We appreciate you reading and welcome your feedback if you have it.
Ginger vs. Grammarly: When it comes to grammar checkers, Ginger and Grammarly are two of the most popular choices on the market. This article aims to highlight the specifics of each one so that you can make a more informed decision about which one to use.
If you are a writer, you must have heard of Grammarly before. Grammarly has over 10 million users across the globe; it's probably the most popular AI writing-enhancement tool, without a doubt. That's why there's a high chance you already know about it.
But today we are going to compare Ginger and Grammarly, so let's define Grammarly here. Like Ginger, Grammarly is an AI writing assistant that checks for grammatical errors, spelling, and punctuation. The free version covers the basics, like identifying grammar and spelling mistakes.
The Premium version offers a lot more functionality: it detects plagiarism in your content, suggests word choices, and adds fluency to your writing.
Ginger is a writing enhancement tool that not only catches typos and grammatical mistakes but also suggests content improvements. As you type, it picks up on errors then shows you what’s wrong, and suggests a fix. It also provides you with synonyms and definitions of words and allows you to translate your text into dozens of languages.
In addition, the program provides a text reader, so you can gauge your writing’s conversational tone.
Grammarly and Ginger are two popular grammar checker software brands that help you to become a better writer. But if you’re undecided about which software to use, consider these differences:
Grammarly Score: 7/10
So Grammarly wins here.
For companies with three or more employees, the Business plan costs $12.50/month for each member of your team.
Ginger wins here.
While both writing assistants are fantastic in their ways, you need to choose the one you want.
For example, go for Grammarly if you want a plagiarism tool included.
Choose Ginger if you want to write in languages other than English. I will list the differences for you in order to make the distinctions clearer.
Let me know in the comments section below which one you like, and share your opinions as well.
The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.
Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.
But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.
How is AI currently being used to design the next generation of chips?
Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.
Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.
What are the benefits of using AI for chip design?
Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
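The surrogate-model workflow Gorr describes can be sketched in a few lines of Python (a toy illustration, not MathWorks code; the "expensive" model here is a made-up stand-in for a physics-based simulation):

```python
import numpy as np

def expensive_model(x):
    # Placeholder for a computationally expensive physics-based model
    # (imagine each evaluation takes minutes of simulation time).
    return np.cos(x) + 0.1 * x**2

# Sample the expensive model at a small number of training points.
x_train = np.linspace(0.0, 3.0, 10)
y_train = expensive_model(x_train)

# Fit a cheap polynomial surrogate to those samples.
coeffs = np.polyfit(x_train, y_train, deg=4)
surrogate = np.poly1d(coeffs)

# Parameter sweep on the surrogate: thousands of evaluations, nearly free.
x_sweep = np.linspace(0.0, 3.0, 1000)
best_x = x_sweep[np.argmin(surrogate(x_sweep))]

# How well does the surrogate reproduce the expensive model's samples?
max_err = np.max(np.abs(surrogate(x_train) - y_train))
print(best_x, max_err)
```

The sweep and any Monte Carlo runs then happen on the surrogate, and only the most promising candidate designs go back to the full physics-based model for verification.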
So it’s like having a digital twin in a sense?
Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune, trying different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.
So, it’s going to be more efficient and, as you said, cheaper?
Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.
We’ve talked about the benefits. How about the drawbacks?
Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.
Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.
One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.
How can engineers use AI to better prepare and extract insights from hardware or sensor data?
Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
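The frequency-domain exploration Gorr mentions can be illustrated with a short Python sketch (a toy example with synthetic sensor data, not any specific MATLAB workflow): find the dominant frequency in a noisy signal using an FFT.

```python
import numpy as np

# Synthetic "sensor" signal: a 50 Hz oscillation buried in noise.
fs = 1000                      # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)    # one second of samples
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)

# Move to the frequency domain and locate the strongest component.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

print(dominant)  # the 50 Hz component stands out clearly above the noise
```

The same spectrum view is often the first step before resampling or synchronizing streams from multiple sensors.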
What should engineers and designers consider when using AI for chip design?
Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.
How do you think AI will affect chip designers’ jobs?
Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.
How do you envision the future of AI and chip design?
Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.
IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.
Now researchers at MIT have been able to both reduce the size of the qubits and done so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.
“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”
The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.
Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).
Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Nathan Fiske/MIT
In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.
As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.
On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.
While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.
“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”
This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.
“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.
Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
Non-fungible tokens (NFTs) are the most popular digital assets today, capturing the attention of cryptocurrency investors, whales, and people from around the world. People find it amazing that some users spend thousands or millions of dollars on a single NFT-based image of a monkey or other token, when you can simply take a screenshot for free. So here we share some frequently asked questions about NFTs.
NFT stands for non-fungible token, which is a cryptographic token on a blockchain with unique identification codes that distinguish it from other tokens. NFTs are unique and not interchangeable, which means no two NFTs are the same. An NFT can be a unique artwork, GIF, image, video, audio album, in-game item, collectible, and so on.
A blockchain is a distributed digital ledger that allows for the secure storage of data. By recording any kind of information—such as bank account transactions, the ownership of Non-Fungible Tokens (NFTs), or Decentralized Finance (DeFi) smart contracts—in one place, and distributing it to many different computers, blockchains ensure that data can’t be manipulated without everyone in the system being aware.
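The reason blockchain data "can't be manipulated without everyone being aware" is that each block stores a hash of the block before it. This toy Python sketch (a drastically simplified illustration, not a real blockchain implementation) shows how editing any early record breaks every hash that follows:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(records):
    """Link records into a chain; each block stores the previous block's hash."""
    chain, prev = [], "0" * 64  # genesis blocks conventionally point at zeros
    for data in records:
        block = {"data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain) -> bool:
    """Recompute every hash and check each block points at the one before it."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(is_valid(chain))                   # True
chain[0]["data"] = "alice pays bob 500"  # tamper with an early record
print(is_valid(chain))                   # False: the chain no longer links up
```

In a real network, many independent computers each hold a copy of the chain, so a tampered copy is immediately outvoted by the honest majority.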
The value of an NFT comes from its ability to be traded freely and securely on the blockchain, which is not possible with other current digital ownership solutions. The NFT points to its location on the blockchain but doesn’t necessarily contain the digital property. For example, if you replace one bitcoin with another, you will still have the same thing. But if you buy a non-fungible item, such as a movie ticket, it is impossible to replace it with any other movie ticket, because each ticket is unique to a specific time and place.
One of the unique characteristics of non-fungible tokens (NFTs) is that they can be tokenised to create a digital certificate of ownership that can be bought, sold and traded on the blockchain.
As with crypto-currency, records of who owns what are stored on a ledger that is maintained by thousands of computers around the world. These records can’t be forged because the whole system operates on an open-source network.
NFTs also contain smart contracts—small computer programs that run on the blockchain—that give the artist, for example, a cut of any future sale of the token.
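The royalty logic such a smart contract encodes is simple to sketch. This is plain Python for illustration only (real NFT contracts are written in languages like Solidity, and the 10% rate here is an assumption):

```python
ROYALTY_RATE = 0.10  # assumed 10% artist royalty on every resale

def settle_resale(sale_price: float):
    """Split a resale price between the original artist and the seller."""
    artist_cut = round(sale_price * ROYALTY_RATE, 2)
    seller_proceeds = round(sale_price - artist_cut, 2)
    return artist_cut, seller_proceeds

# On a $1,000 resale, the artist automatically receives $100.
print(settle_resale(1000.0))  # (100.0, 900.0)
```

Because this split runs on-chain with every transfer, the artist keeps earning without needing to trust or even know the future buyers.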
Non-fungible tokens (NFTs) aren't cryptocurrencies, but they do use blockchain technology. Many NFTs are based on Ethereum, where the blockchain serves as a ledger for all the transactions related to said NFT and the properties it represents.
How to make an NFT?
Anyone can create an NFT. All you need is a digital wallet, some Ethereum tokens, and a connection to an NFT marketplace where you’ll be able to upload and sell your creations.
When you purchase an NFT, that purchase is recorded on the blockchain—the ledger of transactions—and that entry acts as your proof of ownership.
The value of an NFT varies a lot based on the digital asset up for grabs. People use NFTs to trade and sell digital art, so when creating an NFT, you should consider the popularity of your digital artwork along with historical statistics.
In the year 2021, a digital artist called Pak created an artwork called The Merge. It was sold on the Nifty Gateway NFT market for $91.8 million.
Non-fungible tokens can be used in investment opportunities. One can purchase an NFT and resell it at a profit. Certain NFT marketplaces let sellers of NFTs keep a percentage of the profits from sales of the assets they create.
Many people want to buy NFTs because it lets them support the arts and own something cool from their favorite musicians, brands, and celebrities. NFTs also give artists an opportunity to program in continual royalties if someone buys their work. Galleries see this as a way to reach new buyers interested in art.
There are many places to buy digital assets, like OpenSea, and their policies vary. On NBA Top Shot, for instance, you sign up for a waitlist that can be thousands of people long. When a digital asset goes on sale, you are occasionally chosen to purchase it.
To mint an NFT, you must pay a gas fee to process the transaction on the Ethereum blockchain, but you can mint your NFT on a different blockchain called Polygon to avoid paying gas fees. This option is available on OpenSea and simply means that your NFT will only be tradable on Polygon's blockchain, not Ethereum's. Mintable allows you to mint NFTs for free without paying any gas fees.
The answer is no. Non-fungible tokens are minted on blockchains using cryptocurrencies such as Ethereum, Solana, Polygon, and so on. Once a non-fungible token is minted, the transaction is recorded on the blockchain, and the contract or license is awarded to whoever holds that token in their wallet.
You can sell your work and creations by attaching a license to them on the blockchain, where their ownership can be transferred. This lets you get exposure without losing full ownership of your work. Some of the most successful projects include CryptoPunks, Bored Ape Yacht Club, The Sandbox, World of Women, and so on. These NFT projects have gained popularity globally and are owned by celebrities and other successful entrepreneurs. Owning one of these NFTs gives you an automatic ticket to exclusive business meetings and life-changing connections.
That’s a wrap. I hope you found this article enlightening; I’ve answered some questions to the best of my knowledge about NFTs. If you have any questions or suggestions, feel free to drop them in the comment section below. I also have a question for you: is Bitcoin an NFT? Let me know in the comment section below.
Are you an avid Chrome user? That’s nice to hear. But consider whether there are any essential Chrome extensions currently missing from your browsing life. Here we share the 10 best Chrome extensions that are perfect for everyone. Let's start.
When you have too many passwords to remember, LastPass remembers them for you.
This Chrome extension is an easy way to save time and increase security. It’s a password manager that will log you into all of your accounts. You only need to remember one password: your LastPass master password.
MozBar is an SEO toolbar extension that makes it easy to analyze your web pages' SEO while you surf. You can customize your search to see data for a particular region or for all regions. You get data such as page and domain authority and link profile. The status column tells you whether there are any nofollowed links on the page. You can also compare link metrics. There is a pro version of MozBar, too.
Grammarly is a real-time grammar-checking and spelling tool for online writing. It checks spelling, grammar, and punctuation as you type, and has a dictionary feature that suggests related words. If you write on a mobile phone, Grammarly also has a mobile keyboard app.
VidIQ is a SaaS product and Chrome Extension that makes it easier to manage and optimize your YouTube channels. It keeps you informed about your channel's performance with real-time analytics and powerful insights.
ColorZilla is a browser extension that allows you to find out the exact color of any object in your web browser. This is especially useful when you want to match elements on your page to the color of an image.
Honey is a Chrome extension that lets you save products from a website and notifies you when they become available at a lower price. It's also one of the top Chrome extensions for finding coupon codes whenever you shop online.
GMass (or Gmail Mass) lets users compose and send mass emails using Gmail. It's a great tool because you can use it as a replacement for a third-party email sending platform, boosting Gmail's emailing functionality.
It's a Chrome extension for geeks that enables you to highlight and save what you see on the web.
It's designed by Notion, a Google Workspace alternative that helps teams craft better ideas and collaborate effectively.
If you work online, you need to surf the internet to get your business done. Often there's no time to read or analyze something, even though it's important that you do. Notion Web Clipper will help you with that.
WhatFont is a Chrome extension that allows web designers to easily identify and compare different fonts on a page. The first time you use it on a page, WhatFont scans the page to find which fonts are present and generates an image showing those fonts at different sizes. Besides obvious websites like Google or Amazon, you can also use it on sites where embedded fonts are used.
SimilarWeb is an SEO add-on for both Chrome and Firefox. It allows you to check website traffic and key metrics for any site, including engagement rate, traffic ranking, keyword ranking, and traffic sources. It's a great tool if you're looking to find new and effective SEO strategies as well as analyze trends across the web.
Most people know how to install extensions on a PC, but many don't know how to install them on an Android phone, so here's how to do it on Android:
1. Download Kiwi browser from Play Store and then Open it.
2. Tap the three dots at the top right corner and select Extension.
3. Tap (+From Store) to access the Chrome Web Store, or simply search for the Chrome Web Store and open it.
4. Once you find an extension, click Add to Chrome. A message will pop up asking if you wish to confirm your choice; hit OK to install the extension in the Kiwi browser.
5. To manage extensions on the browser, tap the three dots in the upper right corner. Then select Extensions to access a catalog of installed extensions that you can disable, update or remove with just a few clicks.
Your Chrome extensions should install on Android, but there's no guarantee all of them will work, because Chrome extensions are not optimized for Android devices.
We hope this list of the 10 best Chrome extensions for everyone helps you pick the right ones. We selected these extensions by matching their features to the needs of different categories of people. Let me know in the comment section which extension you like the most.