********** TRAVEL **********
Ken Jennings Has Some Questions About Death
Thu, 08 Jun 2023 10:00:00 +0000
The “Jeopardy!” host on the meaning of trivia, the awkwardness of personal anecdotes, and his new book—a travel guide to the afterlife.
Source: www.newyorker.com
Germans best tippers in Europe, finds poll. Italians? Not so much
Sat, 10 Jun 2023 07:00:17 GMT
Dutiful German generosity revealed in analysis of gratuity habits in six EU countries, the UK and US
In Germany it seems to be pretty much automatic, pretty much all the time. In France and Spain it all depends – presumably on social subtleties that you have to be French or Spanish to understand. In Italy, why would you even bother?
When, and how much, to tip is a question that has been vexing visitors to Europe for as long as people have been travelling around the continent. Outside their own country, it seems even Europeans don’t know the answer.
A retreat deep in an ancient Wiltshire forest offers jiu-jitsu, wild swimming and an emotional detox among like-minded strangers
I was an hour late by the time my car careened down a bumpy country lane into the ecovillage that would be my home for the weekend. The Easter getaway had turned my three-hour drive into five, rain lashing the windows throughout, and I arrived for my first wellness retreat about as far from zen as you could get.
One of the founders, the 29-year-old spiritual guru Josh Bolding, floats across the car park and greets me with a hug. I’d have preferred a beer, but I go with the man-hug. There is no time for small talk: I’m the last to arrive and he whisks me into the practice room, a wooden hut on stilts, where the other guests are settling into the first session on yin yoga and breathwork.
Plane can accommodate up to 16 people and is aimed at super-rich who ‘want to enjoy the money while still alive’
Forget partying on the ground: the super-rich are being told to do it at 33,000ft by a multimillionaire Dubai hotelier launching a £10,000-an-hour “five-star party jet”.
Kabir Mulchandani, the founder and chair of the luxury hotel group FIVE, has bought an Airbus ACJ TwoTwenty and claims to have transformed it into a “boundary-breaking fusion of hospitality and private aviation”, complete with dancing area, king-size bed and shower.
The Indigenous children – one of whom was just 11 months old – are thought to have eaten food dropped by rescuers and used their own ancestral knowledge
Malnourished and covered in insect bites, four Indigenous children were rescued alive from the Colombian Amazon on Friday afternoon, 40 days after the plane they were travelling in crashed into the jungle.
In a remarkable feat of resilience, the children survived heavy storms in one of the most inhospitable parts of the country, home to predatory animals and armed groups.
Researchers used a model to predict how the smoke would move through the region and said it wouldn’t pose a health risk
Smoke from Canadian wildfires that has descended upon parts of the eastern US and Canada in a thick haze has drifted over Norway and is expected to hit southern Europe, Norwegian officials said on Friday.
Using a climate forecast model, atmosphere and climate scientists with the Norwegian climate and environmental research institute (NILU) predicted how the smoke would travel through the atmosphere, flowing over the Scandinavian country before moving further south. The smoke was not expected to pose a health risk there.
Shares of Delta Air Lines Inc. DAL approach the end of the regular session Friday poised to achieve their longest winning streak on record, up for an 11th trading day. The stock has gained more than 13% in the period, and a close at Friday’s current levels would be Delta stock’s highest since March 8, when it closed at $39.73. Delta and other major U.S. airlines were in the black on Friday, with the U.S. Global Jets ETF looking at weekly gains of nearly 4%. U.S. airlines are bracing for a busy summer travel season.
Market Pulse Stories are rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.
There are more than 400 fires burning across Canada, with many out of control, and as smoke travels south it is prompting air quality alerts in the US
There are more than 400 wildfires burning across Canada, with many out of control, according to officials. The fires are unusual in their timing, size and location. The “fire season”, when weather conditions are ripe for conflagrations, has only just begun. A third of the fires are in the boreal forest in the eastern province of Quebec, a place not used to dealing with large blazes.
The Feds Have Thousands of Stadium Lights on the Border. Switching Them On Would Devastate Desert Ecosystems.
The powerful lights mounted on the border wall threaten the dark skies that make southern Arizona a biodiversity hotspot. (The Intercept)
Mo-Shing Chen, a world-renowned power engineering educator and researcher, died on 1 May at the age of 91.
The IEEE Fellow was a professor at the University of Texas at Arlington for more than 40 years. He founded the university’s Energy Systems Research Center in 1968 and served as its director until he retired in 2003.
Chen created UTA’s first Ph.D. program in electrical engineering in 1969, and it quickly became one of the nation’s largest and top-rated graduate programs in power systems engineering.
Chen’s research included the modeling of electrical loads, the effect of voltage control in energy savings, real-time testing to improve power system efficiency, computer representation of cogeneration systems, reducing efficiency losses in transmission lines, and voltage stability.
Through his work, he solved complex problems engineers were facing with power networks, from small, rural electric cooperatives to ones that serve large metropolitan areas including New York City’s Consolidated Edison Co.
He taught his students not only how to solve such problems but also how to identify and understand what caused the troubles.
Born in the village of Wuxing in China, Chen and his family moved to Taiwan in 1949 when he was a teenager. After Chen earned a bachelor’s degree in electrical engineering in 1954 from National Taiwan University in Taipei, he joined the Taiwan Power Co. as a power engineer in Wulai. There he became fascinated by difficult, real-world problems of power systems, such as frequent blackouts and sudden spikes of electric loads.
Deciding he wanted to pursue master’s and doctoral degrees in electrical engineering, Chen moved to the United States to do so at the University of Texas at Austin under the mentorship of Edith Clarke, an EE professor there. She had invented an early graphical calculator for solving transmission-line equations and had worked on the design and construction of hydroelectric power projects, including the Hoover Dam, on the Nevada-Arizona border.
Clarke and Chen had lively discussions about their work, and they had mutual respect for one another. He studied under Clarke until she retired in 1957.
Chen earned his master’s degree in 1958 and his Ph.D. in 1962.
He joined UTA—then known as Arlington State College—in 1962 as an assistant professor of electrical engineering.
As a professor, Chen observed that electrical engineering programs at universities around the country were not meeting the needs of industry, so he founded UTA’s Power Systems Research Center. It was later renamed the Energy Systems Research Center.
He gained global recognition in the power industry through his intensive, two-week continuing-education course, Modeling and Analysis of Modern Power Systems, which he began teaching in 1967. Attendees learned how to design, operate, and stabilize systems. The course became the power industry’s hub for continuing education, attended by 1,500 participants from academia and industry. The attendees came from more than 750 universities and companies worldwide. Chen also traveled to more than 40 companies and universities to teach the course.
He mentored UTA’s first Ph.D. graduate, Howard Daniels, who became an IEEE life member and vice president of a multinational power company based in Switzerland. Chen went on to mentor more than 300 graduate students.
Chen this year was awarded one of UTA’s first College of Engineering Legacy Awards. The honor is designed to recognize a faculty member’s career-long performance and dedication to the university.
In 1968 he founded the Transmission and Substation Design and Operation Symposium. The event, still held today, serves as a forum for utility companies, engineers, contractors, and consultants to present and discuss trends and challenges.
He also created a distinguished-lecturer series at UTA and invited students, faculty, and industry engineers to campus to listen to speeches by power systems engineers including IEEE Fellow Charles Concordia and IEEE Life Fellow W.F. Tinney.
Chen said he was always cognizant that the primary purpose of a university was education, so before making any decision, he asked himself, “How will my students benefit?”
By the mid-1970s, the U.S. National Science Foundation consistently ranked UTA as one of the top power engineering programs in the country.
Chen said he believed any faculty member could teach top students, who generally need little help. A professor’s real service to society, he said, was turning average students into top-quality graduates who could compete with anyone.
Part of that process was recruiting, motivating, and mentoring students. Chen insisted that his graduate students have an office near his so he could be readily available for discussions.
Chen’s contagious enthusiasm and thorough understanding of power systems— along with a knack for communicating difficult concepts clearly, simply, and humorously—made him a popular professor. In 1976 he received the first Edison Electric Institute Power Engineering Educator Award. More than 50 of Chen’s students and colleagues endorsed him for the honor.
Chen founded the university’s first international visiting-scholars program in 1968. Through the program, more than 50 power systems researchers have spent a year at UTA, teaching and conducting research. Participants have come from China, Israel, Japan, Korea, Latvia, Macedonia, Spain, and Russia.
Chen was the principal investigator for more than 40 research projects at the Energy Systems Research Center. Many of them were supported by Consolidated Edison (ConEd) of New York and the Electric Power Research Institute, in Palo Alto, Calif.
One of his first research projects involved creating a computer representation of an operational power system with Daniels. Running a computer was expensive in the late 1960s, and Chen and Daniels’ research helped decrease data acquisition costs from between US $10,000 and $20,000 to only 1 cent.
With that project, Chen quickly demonstrated his research value to the power industry.
In the first project Chen led for ConEd, he and his team created a computer representation of New York City’s underground electric power system. It was one of Chen’s favorite projects, he said, and he enjoyed looking back at his experiences with it.
“Before this study, computers were used to represent balanced systems, not unbalanced underground systems,” he once told me. “New York City is fundamentally a distribution system, not a transmission system. ConEd had paid $2 million to a different, very famous university to do this study, but it couldn’t deliver the results after two years. We bid $250,000 and delivered the results in nine months.”
ConEd’s CEO at the time said, “We asked for a Ford, and you delivered a Cadillac.” It was the beginning of a nearly 30-year relationship between Chen and the utility company.
Chen and his colleagues designed and built a small supervisory control and data acquisition system in the mid-1980s for a group of power companies in Texas. Such systems gather and analyze real-time data from power systems to monitor and control their equipment. Chen’s invention proved valuable when he and his team were modeling electric loads for analyzing power system stability, resulting in the reduction of blackouts.
He published more than 100 peer-reviewed papers, most of them in IEEE Transactions on Power Systems.
His awards included the 1984 IEEE Centennial Medal, honorary professorships from eight universities in China and Taiwan, and an honorary EE doctorate in 1997 from the Universidad Autónoma de Nuevo León, in Mexico.
He was a member of the Texas Society of Professional Engineers, the American Society of Engineering Education, IEEE–Eta Kappa Nu, Tau Beta Pi, the New York Academy of Sciences, and Sigma Xi.
In 2013 and 2014, I wrote extensively about new revelations regarding NSA surveillance based on the documents provided by Edward Snowden. But I had a more personal involvement as well.
I wrote the essay below in September 2013. The New Yorker agreed to publish it, but the Guardian asked me not to. It was scared of UK law enforcement, and worried that this essay would reflect badly on it. And given that the UK police would raid its offices in July 2014, it had legitimate cause to be worried.
Now, ten years later, I offer this as a time capsule of what those early months of Snowden were like...
This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
Does your robot know where it is right now? Does it? Are you sure? And what about all of its robot friends—do they know where they are too? This is important. So important, in fact, that some would say that multirobot simultaneous localization and mapping (SLAM) is a crucial capability to obtain timely situational awareness over large areas. Those some would be a group of MIT roboticists who just won the IEEE Transactions on Robotics Best Paper Award for 2022, presented at this year’s IEEE International Conference on Robotics and Automation (ICRA 2023), in London. Congratulations!
Out of more than 200 papers published in Transactions on Robotics last year, reviewers and editors voted to present the 2022 IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award to Yulun Tian, Yun Chang, Fernando Herrera Arias, Carlos Nieto-Granda, Jonathan P. How, and Luca Carlone from MIT for their paper “Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems.”
“The editorial board, and the reviewers, were deeply impressed by the theoretical elegance and practical relevance of this paper and the open-source code that accompanies it. Kimera-Multi is now the gold standard for distributed multirobot SLAM.”
—Kevin Lynch, editor in chief, IEEE Transactions on Robotics
Robots rely on simultaneous localization and mapping to understand where they are in unknown environments. But unknown environments are a big place, and it takes more than one robot to explore all of them. If you send a whole team of robots, each of them can explore their own little bit, and then share what they’ve learned with one another to make a much bigger map that they can all take advantage of. Like most things robot, this is much easier said than done, which is why Kimera-Multi is so useful and important. The award-winning researchers say that Kimera-Multi is a distributed system that runs locally on a bunch of robots all at once. If one robot finds itself in communications range with another robot, they can share map data, and use those data to build and improve a globally consistent map that includes semantic annotations.
Since filming the above video, the researchers have done real-world tests with Kimera-Multi. Below is an example of the map generated by three robots as they travel a total of more than 2 kilometers. You can easily see how the accuracy of the map improves significantly as the robots talk to each other:
More details and code are available on GitHub.
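The rendezvous idea described above, where each robot keeps a local map and, on meeting another robot, estimates the transform between their reference frames so the maps can be fused, can be sketched in a toy 2D form. The landmark names and coordinates below are made up for illustration; this is not Kimera-Multi’s actual algorithm, which performs robust, distributed, dense metric-semantic SLAM.

```python
import math

# Toy 2D landmark maps: each robot stores landmark positions in its own frame.
# (Hypothetical data for illustration only.)
robot_a = {"tree": (2.0, 1.0), "door": (5.0, 1.0)}
robot_b = {"tree": (0.0, 0.0), "door": (0.0, 3.0)}  # same landmarks, different frame

def align(shared_a, shared_b):
    """Solve the rigid transform (theta, tx, ty) mapping B's frame into A's,
    using two landmarks observed by both robots."""
    (ax1, ay1), (ax2, ay2) = shared_a
    (bx1, by1), (bx2, by2) = shared_b
    # Rotation: difference between the headings of the landmark-to-landmark segments.
    theta = math.atan2(ay2 - ay1, ax2 - ax1) - math.atan2(by2 - by1, bx2 - bx1)
    c, s = math.cos(theta), math.sin(theta)
    # Translation: make the first shared landmark coincide after rotation.
    tx = ax1 - (c * bx1 - s * by1)
    ty = ay1 - (s * bx1 + c * by1)
    return theta, tx, ty

def merge(map_a, map_b):
    """Simulate a rendezvous: express B's landmarks in A's frame, then union the maps."""
    shared = sorted(set(map_a) & set(map_b))
    theta, tx, ty = align([map_a[k] for k in shared[:2]],
                          [map_b[k] for k in shared[:2]])
    c, s = math.cos(theta), math.sin(theta)
    merged = dict(map_a)
    for name, (x, y) in map_b.items():
        merged.setdefault(name, (c * x - s * y + tx, s * x + c * y + ty))
    return merged

print(merge(robot_a, robot_b))
```

In the real system the alignment is estimated robustly from many noisy correspondences and refined by distributed pose-graph optimization, but the two-landmark version above captures the basic geometry of map fusion.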
Transactions on Robotics also selected some excellent Honorable Mentions for 2022:
Stabilization of Complementarity Systems via Contact-Aware Controllers, by Alp Aydinoglu, Philip Sieg, Victor M. Preciado, and Michael Posa
Autonomous Cave Surveying With an Aerial Robot, by Wennie Tabib, Kshitij Goel, John Yao, Curtis Boirum, and Nathan Michael
Prehensile Manipulation Planning: Modeling, Algorithms and Implementation, by Florent Lamiraux and Joseph Mirabel
Rock-and-Walk Manipulation: Object Locomotion by Passive Rolling Dynamics and Periodic Active Control, by Abdullah Nazir, Pu Xu, and Jungwon Seo
Origami-Inspired Soft Actuators for Stimulus Perception and Crawling Robot Applications, by Tao Jin, Long Li, Tianhong Wang, Guopeng Wang, Jianguo Cai, Yingzhong Tian, and Quan Zhang
For more than a century, utility companies have used electromechanical relays to protect power systems against damage that might occur during severe weather, accidents, and other abnormal conditions. But the relays could neither locate the faults nor accurately record what happened.
Then, in 1977, Edmund O. Schweitzer III invented the digital microprocessor-based relay as part of his doctoral thesis. Schweitzer’s relay, which could locate a fault to within 1 kilometer, set new standards for utility reliability, safety, and efficiency.
Employer: Schweitzer Engineering Laboratories
Title: President and CTO
Education: Purdue University, West Lafayette, Ind.; Washington State University, Pullman
To develop and manufacture his relay, he launched Schweitzer Engineering Laboratories in 1982 from his basement in Pullman, Wash. Today SEL manufactures hundreds of products that protect, monitor, control, and automate electric power systems in more than 165 countries.
Schweitzer, an IEEE Life Fellow, is his company’s president and chief technology officer. He started SEL with seven workers; it now has more than 6,000.
The 40-year-old employee-owned company continues to grow. It has four manufacturing facilities in the United States. Its newest one, which opened in March in Moscow, Idaho, fabricates printed circuit boards.
Schweitzer has received many accolades for his work, including the 2012 IEEE Medal in Power Engineering. In 2019 he was inducted into the U.S. National Inventors Hall of Fame.
Power system faults can happen when a tree or vehicle hits a power line, a grid operator makes a mistake, or equipment fails. The fault shunts extra current to some parts of the circuit, shorting it out.
Without a proper protection scheme or device to safeguard equipment and ensure continuity of the power supply, an outage or blackout could propagate throughout the grid.
Overcurrent is not the only damage that can occur, though. Faults also can change voltages, frequencies, and the direction of current.
A protection scheme should quickly isolate the fault from the rest of the grid, thus limiting damage on the spot and preventing the fault from spreading to the rest of the system. To do that, protection devices must be installed.
That’s where Schweitzer’s digital microprocessor-based relay comes in. He perfected it in 1982. It later was commercialized and sold as the SEL-21 digital distance relay/fault locator.
Schweitzer says his relay was, in part, inspired by an event that took place during his first year of college.
“Back in 1965, when I was a freshman at Purdue University, a major blackout left millions without power for hours in the U.S. Northeast and Ontario, Canada,” he recalls. “It was quite an event, and I remember it well. I learned many lessons from it. One was how difficult it was to restore power.”
He says he also was inspired by the book Protective Relays: Their Theory and Practice. He read it while an engineering graduate student at Washington State University, in Pullman.
“I bought the book on the Thursday before classes began and read it over the weekend,” he says. “I couldn’t put it down. I was hooked.
“I realized that these solid-state devices were special-purpose signal processors. They read the voltage and current from the power systems and decided whether the power systems’ apparatuses were operating correctly. I started thinking about how I could take what I knew about digital signal processing and put it to work inside a microprocessor to protect an electric power system.”
The 4-bit and 8-bit microprocessors were new at the time.
“I think this is how most inventions start: taking one technology and putting it together with another to make new things,” he says. “The inventors of the microprocessor had no idea about all the kinds of things people would use it for. It is amazing.”
He says he was introduced to signal processing, signal analysis, and how to use digital techniques in 1968 while at his first job, working for the U.S. Department of Defense at Fort Meade, in Maryland.
Faster ways to clear faults and improve cybersecurity
Schweitzer continues to invent ways of protecting and controlling electric power systems. In 2016 his company released the SEL-T400L, which samples a power system every microsecond to time the arrival of fault-generated traveling waves, which move at nearly the speed of light. The idea is to quickly detect and locate transmission line faults.
The relay decides whether to trip a circuit or take other actions in 1 to 2 milliseconds. Previously, it would take a protective relay on the order of 16 ms. A typical circuit breaker takes 30 to 40 ms in high-voltage AC circuits to trip.
“The inventors of the microprocessor had no idea about all the kinds of things people would use it for. It is amazing.”
“I like to talk about the need for speed,” Schweitzer says. “In this day and age, there’s no reason to wait to clear a fault. Faster tripping is a tremendous opportunity from a point of view of voltage and angle stability, safety, reducing fire risk, and damage to electrical equipment.
“We are also going to be able to get a lot more out of the existing infrastructure by tripping faster. For every millisecond in clearing time saved, the transmission system stability limits go up by 15 megawatts. That’s about one feeder per millisecond. So, if we save 12 ms, all of the sudden we are able to serve 12 more distribution feeders from one part of one transmission system.”
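Schweitzer’s arithmetic can be checked with a quick back-of-the-envelope calculation. The 15 MW-per-millisecond figure and the relay timings come from the interview; treating one distribution feeder as roughly 15 MW is an assumption implied by his “one feeder per millisecond” remark.

```python
# Back-of-the-envelope version of the "need for speed" arithmetic above.
MW_PER_MS = 15        # stability-limit gain per millisecond of clearing time saved
FEEDER_MW = 15        # assumed typical feeder load ("about one feeder per millisecond")

old_clearing_ms = 16  # conventional protective relay, per the article
new_clearing_ms = 2   # time-domain relay decides in 1 to 2 ms

saved_ms = old_clearing_ms - new_clearing_ms
extra_capacity_mw = saved_ms * MW_PER_MS
extra_feeders = extra_capacity_mw // FEEDER_MW

print(f"{saved_ms} ms saved -> +{extra_capacity_mw} MW, ~{extra_feeders} more feeders")
# 14 ms saved -> +210 MW, ~14 more feeders
```

Saving 12 ms, the figure Schweitzer quotes, gives 180 MW, or about 12 feeders, matching his numbers.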
The time-domain technology also will find applications in transformer and distribution protection schemes, he says, as well as have a significant impact on DC transmission.
What excites Schweitzer today, he says, is the concept of energy packets, which he and SEL have been working on. The packets measure energy exchange for all signals including distorted AC systems or DC networks.
“Energy packets precisely measure energy transfer, independent of frequency or phase angle, and update at a fixed rate with a common time reference such as every millisecond,” he says. “Time-domain energy packets provide an opportunity to speed up control systems and accurately measure energy on distorted systems—which challenges traditional frequency-domain calculation methods.”
He also is focusing on improving the reliability of critical infrastructure networks by improving cybersecurity, situational awareness, and performance. Plug-and-play and best-effort networking aren’t safe enough for critical infrastructure, he says.
“SEL OT SDN technology solves some significant cybersecurity problems,” he says, “and frankly, it makes me feel comfortable for the first time with using Ethernet in a substation.”
Schweitzer didn’t start off planning to launch his own company. He began a successful career in academia in 1977 after joining the electrical engineering faculty at Ohio University, in Athens. Two years later, he moved to Pullman, Wash., where he taught at Washington State’s Voiland College of Engineering and Architecture for the next six years. It was only after sales of the SEL-21 took off that he decided to devote himself to his startup full time.
It’s little surprise that Schweitzer became an inventor and started his own company, as his father and grandfather were inventors and entrepreneurs.
His grandfather, Edmund O. Schweitzer, who held 87 patents, invented the first reliable high-voltage fuse in collaboration with Nicholas J. Conrad in 1911, the year the two founded Schweitzer and Conrad—today known as S&C Electric Co.—in Chicago.
Schweitzer’s father, Edmund O. Schweitzer Jr., had 208 patents. He invented several line-powered fault-indicating devices, and he founded the E.O. Schweitzer Manufacturing Co. in 1949. It is now part of SEL.
Schweitzer says a friend gave him the best financial advice he ever got about starting a business: Save your money.
“I am so proud that our 6,000-plus-person company is 100 percent employee-owned,” Schweitzer says. “We want to invest in the future, so we reinvest our savings into growth.”
He advises those who are planning to start a business to focus on their customers and create value for them.
“Unleash your creativity,” he says, “and get engaged with customers. Also, figure out how to contribute to society and make the world a better place.”
Stephen Cass: Welcome to Fixing the Future, an IEEE Spectrum podcast. This episode is brought to you by IEEE Xplore, the digital library with over 6 million technical documents and free search. I’m senior editor Stephen Cass, and today I’m talking with a former Spectrum editor, Sally Adee, about her new book, We Are Electric: The New Science of Our Body’s Electrome. Sally, welcome to the show.
Sally Adee: Hi, Stephen. Thank you so much for having me.
Cass: It’s great to see you again, but before we get into exactly what you mean by the body’s electrome and so on, I see that in researching this book, you actually got yourself zapped quite a bit in a number of different ways. So I guess my first question is: are you okay?
Adee: I mean, as okay as I can imagine being. Unfortunately, there’s no experimental sort of condition and control condition. I can’t see the self I would have been in the multiverse version of myself that didn’t zap themselves. So I think I’m saying yes.
Cass: The first question I have then is what is an electrome?
Adee: So the electrome is this word, I think, that’s been burbling around the bioelectricity community for a number of years. The first time it was committed to print is a 2016 paper by this guy called Arnold De Loof, a researcher out in Europe. But before that, a number of the researchers I spoke to for this book told me that they had started to see it in papers that they were reviewing. And I think it wasn’t sort of defined consistently always because there’s this idea that seems to be sort of bubbling to the top, bubbling to the surface, that there are these electrical properties that the body has, and they’re not just epiphenomena, and they’re not just in the nervous system. They’re not just action potentials, but that there are electrical properties in every one of our cells, but also at the organ level, potentially at the sort of entire system level, that people are trying to figure out what they actually do.
And just as action potentials aren’t just epiphenomena, but actually our control mechanisms, they’re looking at how these electrical properties work in the rest of the body, like in the cells, membrane voltages and skin cells, for example, are involved in wound healing. And there’s this idea that maybe these are an epigenetic variable that we haven’t been able to conscript yet. And there’s such promise in it, but a lot of the research, the problem is that a lot of the research is being done across really far-flung scientific communities, some in developmental biology, some of it in oncology, a lot of it in neuroscience, obviously. But what this whole idea of the electrome is— I was trying to pull this all together because the idea behind the book is I really want people to just develop this umbrella of bioelectricity, call it the electrome, call it bioelectricity, but I kind of want the word electrome to do for bioelectricity research what the word genome did for molecular biology. So that’s basically the spiel.
Cass: So I want to surf back to a couple points you raised there, but first off, just for people who might not know, what is an action potential?
Adee: So the action potential is the electrical mechanism by which the nervous signal travels, either to actuate motion at the behest of your intent or to gain sensation and sort of perceive the world around you. And that’s the electrical part of the electrochemical nervous impulse. So everybody knows about neurotransmitters at the synapse and— well, not everybody, but probably Spectrum listeners. They know about the serotonin that’s released and all these other little guys. But the thing is you wouldn’t be able to have that release without the movement of charged particles called ions in and out of the nerve cell that actually send this impulse down and allow it to travel at a rate of speed that’s fast enough to let you yank your hand away from a hot stove when you’ve touched it, before you even sort of perceive that you did so.
Cass: So that actually brings me to my next question. You may remember from some of Spectrum’s editorial meetings, when we were deciding if a tech story was for us or not, that literally we would often ask, “Where is the moving electron?” But bioelectricity is not really based on moving electrons. It’s based on these ions.
Adee: Yeah. So let’s take the neuron as an example. So what you’ve got is— let me do like a— imagine a spherical cow for a neuron, okay? So you’ve got a blob and it’s a membrane, and that separates the inside of your cell from the outside of your cell. And this membrane is studded with tens of thousands of little pores called ion channels. And the pores are not just sieve pores. They’re not inert. They’re really smart. And they decide which ions they like. Now, let’s go to the ions. Ions are suffusing your extracellular fluid, all the stuff that bathes you. It’s basically the reason they say you’re 66 percent water or whatever. This is like sea water. It’s got sodium, potassium, calcium, etc., and these ions are charged particles.
So when you’ve got a cell, it likes potassium, the neuron, it likes potassium, it lets it in. It doesn’t really like sodium so much. It’s got very strong preferences. So in its resting state, which is its happy place, those channels allow potassium ions to enter. And those are probably where the electrons are, actually, because an ion, it’s got a plus-one charge or a minus-one charge based on— but let’s not go too far into it. But basically, the cell allows the potassium to come inside, and its resting state, which is its happy place, the separation of the potassium from the sodium causes, for all sorts of complicated reasons, a charge inside the cell that is minus 70 degree— sorry, minus 70 millivolts with respect to the extracellular fluid.
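The minus 70 millivolts Adee mentions can be connected to textbook electrochemistry. As a rough illustration (the concentration values below are standard textbook figures, not from the interview), the Nernst equation gives the voltage each ion would set on its own; the actual resting potential of about minus 70 mV is a permeability-weighted compromise among the ions.

```python
import math

# Nernst potential: the membrane voltage at which an ion's concentration
# gradient and the electrical gradient exactly balance.
# E = (R*T / z*F) * ln([out] / [in])
R = 8.314    # gas constant, J/(mol*K)
T = 310.0    # body temperature, K
F = 96485.0  # Faraday constant, C/mol

def nernst_mv(conc_out_mM, conc_in_mM, z=1):
    """Equilibrium potential in millivolts for an ion of charge z."""
    return 1000 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Typical mammalian neuron concentrations, in mM (textbook values):
E_K = nernst_mv(5, 140)    # potassium: roughly -89 mV
E_Na = nernst_mv(145, 12)  # sodium: roughly +66 mV

print(f"E_K  = {E_K:.0f} mV")
print(f"E_Na = {E_Na:.0f} mV")
```

Because the resting membrane is far more permeable to potassium than to sodium, the resting potential sits near the potassium value but is pulled up toward zero, landing around the minus 70 mV in the interview.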
Cass: Before I read your book, I kind of had the idea that how neurons use electricity was, essentially, settled science, very well understood, all kind of squared away, and this was how the body used electricity. But even when it came to neurons, there’s a lot of fundamentals, kind of basic things about how neurons use electricity that we really only established relatively recently. Some of the research you’re talking about is definitely not a century-old kind of basic science about how these things work.
Adee: No, not at all. In fact, there was a paper released in 2018 that I didn’t include, which I’m really annoyed by. I just found it recently. Obviously, you can’t find all the papers. But it’s super interesting because it blends that whole sort of ionic basis of the action potential with another thing in my book that’s about how cell development is a little bit like a battery getting charged. Do you know how cells assume an electrical identity that may actually be in charge of the cell fate that they meet? And so we know abou— sorry, the book goes into more detail, but it’s like when a cell is stem or a fertilized egg, it’s depolarized. It’s at zero. And then when it becomes a nerve cell, it goes to that minus 70 that I was talking about before. If it becomes a fat cell, it’s at minus 50. If it’s musculoskeletal tissue, it goes to minus 90. Liver cells are around minus 40. And so you’ve got real identitarian diversity, electrical diversity in your tissues, which has something to do with what they end up doing in the society of cells. So this paper that I was talking about, the 2018 paper, they actually looked at neurons. This was work from Denis Jabaudon at the University of Geneva, and they were looking at how neurons actually differentiate. Because when baby neurons are born—your brain is made of all kinds of cells. It’s not just cortical cells. There’s a staggering variety of classes of neurons. And as cells actually differentiate, you can watch their voltage change, just like you can do in the rest of the body with these electrosensitive dyes. So that’s an aspect of the brain that we hadn’t even realized until 2018.
Cass: And that all leads me to my next point, which is, if we think bioelectricity, we think, okay, nerves zapping around. But neurons are not the only bioelectric network in the body. So talk about some of the other sorts of electrical networks we have that are completely, or largely, separate from our neural networks?
Adee: Well, so Michael Levin is a professor at Tufts University. He does all kinds of other stuff, but mainly, I guess, he’s like the Paul Erdos of bioelectricity, I like to call him, because he’s sort of the central node. He’s networked into everybody, and I think he’s really trying to, again, also assemble this umbrella of bioelectricity to study this all in the aggregate. So his idea is that we are really committed to this idea of bioelectricity being in charge of our sort of central communications network, the way that we understand the environment around us and the way that we understand our ability to move and feel within it. But he thinks that bioelectricity is also how— that the nervous system kind of hijacked this mechanism, which is way older than any nervous system. And he thinks that we have another underlying network that is about our shape, and that this is bioelectrically mediated in really important ways, which impacts development, of course, but also wound healing. Because if you think about the idea that your body understands its own shape, what happens when you get a cut? How does it heal it? It has to go back to some sort of memory of what its shape is in order to heal it over. In animals that regenerate, they have a completely different electrical profile after they’ve been—so after they’ve had an arm chopped off.
So it’s a very different electrical— yeah, it’s a different electrical process that allows a starfish to regrow a limb than the one that allows us to scar over. So you’ve got this thing called a wound current. Your skin cells are arranged in this real tight wall, like little soldiers, basically. And what’s important is that they’re polarized in such a way that if you cut your skin, all the sort of ions flow out in a certain way, which creates this wound current, which then generates an electric field, and the electric field acts like a beacon. It’s like a bat signal, right? And it guides in these little helper cells, the macrophages that come and gobble up the mess and the keratinocytes and the guys who build it back up again and scar you over. And it starts out strong, and as you scar over, as the wound heals, it very slowly goes away. By the time the wound is healed, there’s no more field. And what was super interesting is this guy, Richard Nuccitelli, invented this thing called the Dermacorder that’s able to sense and evaluate the electric field. And he found that in people over the age of 65, the wound field is less than half of what it is in people under 25. And that actually goes in line with another weird thing about us, which is that our bioelectricity— or sorry, our regeneration capabilities are time-dependent and tissue-dependent.
So you probably know that the intestinal tissue regenerates all the time. You’re going to digest next week’s food with totally different cells than this morning’s food. But also, we’re time-dependent because when we’re just two cells, if you cleave that in half, you get identical twins. Later on during fetal development, healing is totally scarless, which is something we found out when we started being able to do fetal surgery in the womb. Then we’re born, and until we are between the ages of 7 and 11, if you chop off a fingertip, it regenerates perfectly, including the nail, but then we lose that ability. And so it seems like the older we get, the less we regenerate. And so various programs are now trying to figure out how to take control of various aspects of our sort of bioelectrical systems to do things like radically accelerate healing, for example, or how to possibly re-engage the body’s developmental processes in order to regenerate preposterous things like a limb. I mean, it sounds preposterous now. Maybe in 20 years, it’ll just be.
Cass: I want to get into some of the technologies that people are thinking of building on this sort of new science. Part of it is that the history of this field, both scientifically and technologically, has really been plagued by the shadow of quackery. Can you talk a little bit about this? On the one hand, we’re very glad we stopped doing some very bad ideas, but on the other, that history has cast a shadow on current research and on trying to get real therapies to patients.
Adee: Yeah, absolutely. That was actually one of my favorite chapters to write, was the spectacular pseudoscience one, because, I mean, that is so much fun. So it can be boiled down to the fact that we were trigger happy because we see this electricity, we’re super excited about it. We start developing early tools to start manipulating it in the 1700s. And straight away, it’s like, this is an amazing new tool, and there’s all these sort of folk cures out there that we then decide that we’re going to take— not into the clinic. I don’t know what you’d call it, but people just start dispensing this stuff. This is separate from the discovery of endogenous electrical activity, which is what Luigi Galvani famously discovered in the late 1700s. He starts doing this. He’s an anatomist. He’s not an electrician. Electrician, by the way, is what they used to call the sort of literati who were in charge of discovery around electricity. And it had a really different connotation at the time, that they were kind of like the rocket scientists of their day.
But Galvani’s just an anatomist, and he starts doing all of these experiments using these new tools to zap frogs in various ways and permutations. And he decides that he has answered a whole different old question, which is how does man’s will animate his hands and let him feel the world around him? And he says, “This is electrical in nature.” This is a long-standing mystery. People have been bashing their heads against it for the past 100, 200 years. But he says that this is electrical, and there’s a big, long fight, which I won’t get into too much, between Volta, the guy who invented the battery, and Galvani. Volta says, “No, this is not electrical.” Galvani says, “Yes, it is.” But owing to events, when Volta invents the battery, he basically wins the argument, not because Galvani was wrong, but because Volta had created something useful. He had created a tool that people could use to advance the study of all kinds of things. Galvani’s idea that we have an endogenous electrical sort of impulse didn’t lead to anything that anybody could use because we didn’t have tools sensitive enough to really measure it. We only had indirect measurements of it.
After Galvani dies in ignominy, his nephew decides to take it upon himself to single-handedly rescue his uncle’s reputation. The problem is, the way he does it is with a series of grotesque, spectacular experiments. He very famously reanimated— well, zapped until they shivered, the corpses of all these dead criminals, and he was doing really intense things like sticking electrodes connected to huge voltaic piles, proto-batteries, into the rectums of dead prisoners, which would make them sit up halfway and point at the people who were assembled. Very titillating stuff. Many celebrities of the time would crowd around these demonstrations.
Anyway, so Galvani basically—or sorry, Aldini, the nephew, basically just opens the door to everyone to be like, “Look what we can do with electricity.” Then in short order, there’s a guy who creates something called the Celestial Bed, which is a thing— they’ve got rings, they’ve got electric belts for stimulating the nethers. The Celestial Bed is supposed to help infertile couples. This is just how wild electricity is in those days. It’s kind of like— you know how everybody went crazy for crypto scams last year? Electricity was like the crypto of 1828 or whatever, 1830s. And the Celestial Bed, so people would come and they would pay £9,000 to spend a night in it, right? Well, not at the time. That’s in today’s money. And it didn’t even use electricity. It used the idea of electricity. It was homeopathy, but electricity. You don’t even know where to start. So this is the sort of caliber of pseudoscience, and this has really echoed down through the years. That was in the 1800s. But when people submit papers or grant applications, I heard more than one researcher say to me— people would look at this electric stuff, and they’d be like, “Does anyone still believe this shit?” And it’s like, this is rigorous science, but it’s been tarnished by the association with all that.
Cass: So you mentioned wound care, and the book talks about some of the ways [inaudible] wound care. But we’re also looking at other really ambitious ideas, like regenerating limbs, as part of this extension of wound care. And also, you make the point of certainly doing diagnostics, and then possibly treatments, for things like cancer, thinking about cancer in a very different way than the very tightly focused genetic view we have of it now, and thinking about it kind of literally in a wider context. So can you talk about that a little bit?
Adee: Sure. And I want to start by saying that I went to a lot of trouble to be really careful in the book. I think cancer is one of those things that— I’ve had cancer in my family, and it’s tough to talk about it because you don’t want to give people the idea that there’s a cure for cancer around the corner when this is basic research and intriguing findings, because it’s not fair. And I sort of struggled. I thought for a while, like, “Do I even bring this up?” But the ideas behind it are so intriguing, and if there were more research dollars thrown at it, or pounds or whatever, Swiss francs, you might be able to really start moving the needle on some of this stuff. The idea is, there are two electrical— oh God, I don’t want to say avenues, but it is unfortunately what I have to do. There are two electrical avenues to pursue in cancer. The first one is something that a researcher called Mustafa Djamgoz at Imperial College here in the UK has been studying since the ‘90s. He used to be a neurobiologist. He was looking at vision. And he was talking to some of his oncologist friends, and they gave him some cancer cell lines, and he started looking at the behavior of cancer cells, the electrical behavior of cancer cells, and he started finding some really weird behaviors.
Cancer cells that should not have had anything to do with action potentials, like from prostate cancer lines, when he looked at them, they were oscillating like crazy, as if they were nerves. And then he started looking at other kinds of cancer cells, and they were all doing this oscillating behavior. So he spent like seven years sort of bashing his head against the wall. Nobody wanted to listen to him. But now, way more people are investigating this. There’s going to be a symposium on ion channels in cancer, I think later this month, actually, in Italy. And he found, and a lot of other researchers like this woman, Annarosa Arcangeli, have found that the reason that cancer cells may have these oscillating properties is that this is how they communicate with each other that it’s time to leave the nest of the tumor and start invading and metastasizing. Separately, there have been very intriguing— this is really early days. It’s only a couple of years that they’ve started noticing this, but there have been a couple of papers now. People who are on certain kinds of ion channel blockers for neurological conditions like epilepsy, for example, they have cancer profiles that are slightly different from normal, which is that if they do get cancer, they are slightly less likely to die of it. In the aggregate. Nobody should be starting to eat ion channel blockers.
But they’re starting to zero in on which particular ion channels might be responsible, and it’s not just ones that you and I have. These cancer kinds are like an expression of something that normally only exists when we’re developing in the womb. It’s part of the reason that we can grow ourselves so quickly, which of course makes sense, because that’s what cancer does when it metastasizes: it grows really quickly. So there’s a lot of work right now trying to identify how exactly to target these. And it wouldn’t be a cure for cancer. It would be a way to keep a tumor in check. And this is part of a strategy that has been proposed in the UK a little bit for some kinds of cancer, like the triple-negative kind that just keep coming back. Instead of subjecting someone to radiation and chemo, especially when they’re older, and really screwing up their quality of life while possibly not even giving them that much more time, what if you instead tried to treat cancer more like a chronic disease, kept it managed, and maybe gave a person 10 or 20 years, all without messing up their quality of life? That’s a huge amount of time.
This is a whole conversation that’s being had, but that’s one avenue. And there’s a lot of research going on in this right now that may yield fruit fairly soon. The much more sci-fi version of this— the studies have mainly been done in tadpoles, but they’re so interesting. So Michael Levin, again, and his postdoc at the time, I think, Brook Chernet, they were looking at what happens— so it’s uncontroversial that as a cancer cell— so let’s go back to that society of cells thing that I was talking about. You get a fertilized egg, it’s depolarized, zero, but then its membrane voltage changes, and it becomes a nerve cell or skin cell or a fat cell. What’s super interesting is that when those responsible members of your body’s society decide to abscond and say, “Screw this. I’m not participating in society anymore. I’m just going to eat and grow and become cancer,” their membrane voltage also changes. It goes much closer to zero again, almost like it’s having a midlife crisis or whatever.
So what they found, what Levin and Chernet found is that you can manipulate those cellular electrics to make the cell stop behaving cancerously. And so they did this in tadpoles. They had genetically engineered the tadpoles to express tumors, but when they made sure that the cells could not depolarize, most of those tadpoles did not express the tumors. And when they later took tadpoles that already had the tumors and they repolarized the voltage, those tumors, that tissue started acting like normal tissue, not like cancer tissue. But again, this is the sci-fi stuff, but the fact that it was done at all is so fascinating, again, from that epigenetic sort of body pattern perspective, right?
Cass: So, sort of staying with that sci-fi stuff, except this one is even closer to reality. And this goes back to some of these experiments in which you zapped yourself. Can you talk a little bit about some of these devices that you can wear which appear to really enhance certain mental abilities? And some of these you [inaudible].
Adee: So the kit that I wore, I actually found out about it while I was at Spectrum, when I was at DARPATech. And this program manager told me about it, and I was really stunned to find out that just by running two milliamps of current through your brain, you would be able to improve your— well, it’s not that your ability is improved. It was that you could go from novice to expert in half the time that it would take you normally, according to the papers. And so I really wanted to try it. I was trying to actually get an expert feature written for IEEE Spectrum, but they kept ghosting me, and then by the time I got to New Scientist, I was like, fine, I’m just going to do it myself. So they let me come over, and they put this kit on me, and it was these very sort of custom electrodes; they look like big daisies. And this guy had brewed his own electrolyte solution and sort of smashed it onto my head, and it was all very slimy.
So I was doing this video game called DARWARS Ambush!, which is just like a training— it’s a shooter simulation to help you with shooting. So it was a Gonzo stunt. It was not an experiment. But he was trying to replicate the conditions of me not knowing whether the electricity was on as much as he could. So he had it sort of behind my back, and he came in a couple of times and would either pretend to turn it on or whatever. And I was practicing and I was really bad at it. That is not my game. Let’s just put it that way. I prefer driving games. But it was really frustrating as well because I never knew when the electricity was on. So I was just like, “There’s no difference. This sucks. I’m terrible.” And that sort of inner sort of buzz kept getting stronger and stronger because I’d also made bad choices. I’d taken a red-eye flight the night before. And I was like, “Why would I do that? Why wouldn’t I just give myself one extra day to recover before I go in and do this really complicated feature where I have to learn about flow state and electrical stimulation?” And I was just getting really tense and just angrier and angrier. And then at one point, he came in after my, I don’t know, 5th or 6th, I don’t know, 400th horrible attempt where I just got blown up every time. And then he turned on the electricity, and I could totally feel that something had happened because I have a little retainer in my mouth just at the bottom. And I was like, “Whoa.” But then I was just like, “Okay. Well, now this is going to suck extra much because I know the electricity is on, so it’s not even a freaking sham condition.” So I was mad.
But then the thing started again, and all of a sudden, all the sort of buzzing little angry voices just stopped, and it was so profound. And I’ve talked about it quite a bit, but every time I remember it, I get a little chill because it was the first time I’d ever realized, number one, how pissy my inner voices are and just how distracting they are and how abusive they are. And I was like, “You guys suck, all of you.” But somebody had just put a bell jar between me and them, and that feeling of being free from them was profound. At first, I didn’t even notice because I was just busy doing stuff. And all of a sudden, I was amazing at this game and I dispatched all of the enemies and whatnot, and then afterwards, when they came in, I was actually pissed because I was just like, “Oh, now I get it right and you come in after three minutes. But the last times when I was screwing it up, you left me in there to cook for 20 minutes.” And they were like, “No, 20 minutes has gone by,” which I could not believe. But yeah, it was just a really fairly profound experience, which is what led me down this giant rabbit hole in the first place. Because when I wrote the feature afterwards, all of a sudden I started paying attention to the whole TDCS thing, which I hadn’t yet. I had just sort of been focusing [crosstalk].
Cass: And that’s transcranial—?
Adee: Oh sorry, transcranial direct current stimulation.
Cass: There you go. Thank you. Sorry.
Adee: No. Yeah, it’s a mouthful. But then that’s when I started to notice that quackery we were talking about before. All that history was really informing the discussion around it, because people were just like, “Oh, sure. Why don’t you zap your brain with some electricity and you become super smart.” And I was like, “Oh, did I fall for the placebo effect? What happened here?” And there was this big study from Australia where the guy was just like, “When we average out all of the effects of TDCS, we find that it does absolutely nothing.” Other guys stimulated a cadaver to see if the current would even reach the brain tissue and concluded it wouldn’t. But that’s basically what started me researching the book, and I was able to find answers to all those questions. But of course, TDCS, I mean, it’s finicky, just like the electrome. Your living bone is conductive, so when you’re trying to put an electric field on your head, you have to account for things like how thick that person’s skull is in the place that you want to stimulate. They’re still working out the parameters.
There have been some really good studies that show sort of under which particular conditions they’ve been able to make it work. It does not work for all conditions for which it is claimed to work. There is some snake oil. There’s a lot left to be done, but a better understanding of how this affects the different layers of the, I guess, call it electrome, would probably make it something that you could use with replicability. Is that a word? But also, that applies to things like deep brain stimulation, which, for Parkinson’s, is fantastic. But they’re trying to use it for depression, and in some cases, it works so—I want to use a bad word—amazingly. Helen Mayberg, who runs these trials, said that for some people, this is an option of last resort, and then they get the stimulation, and they just get back on the bus. That’s her quote. And it’s like a switch that you flip. And for other people, it doesn’t work at all.
Cass: Well, the book is packed with even more fantastic stuff, and I’m sorry we don’t have time to go through it, because literally, I could sit here and talk to you all day about this.
Adee: I didn’t even get into the frog battery, but okay, that’s fine. Fine, fine, skip the frog. Sorry, I’m just kidding. I’m kidding, I’m kidding.
Cass: And thank you so much, Sally, for chatting with us today.
Adee: Oh, thank you so much. I really love talking about it, especially with you.
Cass: Today on Fixing the Future, we’ve been talking with Sally Adee about her new book on the body’s electrome. For IEEE Spectrum, I’m Stephen Cass.
Inside today’s computers, phones, and other mobile devices, more and more sensors, processors, and other electronics are fighting for space. Taking up a big part of this valuable real estate are the cameras—just about every gadget needs a camera, or two, three, or more. And the most space-consuming part of the camera is the lens.
The lenses in our mobile devices typically collect and direct incoming light by refraction, using a curve in a transparent material, usually plastic, to bend the rays. So these lenses can’t shrink much more than they already have: To make a camera small, the lens must have a short focal length; but the shorter the focal length, the greater the curvature and therefore the thickness at the center. These highly curved lenses also suffer from all sorts of aberrations, so camera-module manufacturers use multiple lenses to compensate, adding to the camera’s bulk.
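The focal-length/thickness trade-off can be made concrete with the thin-lens lensmaker's equation. The sketch below assumes a symmetric biconvex lens of refractive index 1.5 and uses the sagitta of each spherical surface as a proxy for the extra center thickness; the numbers are illustrative, not taken from any real camera module.

```python
import math

def biconvex_radius(f, n=1.5):
    """Surface radius of a symmetric biconvex thin lens of focal length f.

    Lensmaker's equation: 1/f = (n - 1) * (1/R1 - 1/R2). With R1 = R and
    R2 = -R, this gives R = 2 * (n - 1) * f: shorter f means smaller R,
    i.e. greater curvature.
    """
    return 2 * (n - 1) * f

def center_bulge(f, aperture, n=1.5):
    """Extra thickness at the lens center: the sagitta of both surfaces."""
    R = biconvex_radius(f, n)
    h = aperture / 2
    sag = R - math.sqrt(R * R - h * h)  # height of one spherical cap
    return 2 * sag

# Halving the focal length of a 3-mm-aperture lens more than doubles
# the bulge at its center (values in mm):
print(center_bulge(4.0, 3.0))  # ~0.58 mm
print(center_bulge(2.0, 3.0))  # ~1.35 mm
```

This is why the article says a short focal length forces greater curvature and therefore thickness at the center: the geometry leaves no way around it for a refractive lens.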
With today’s lenses, the size of the camera and image quality are pulling in different directions. The only way to make lenses smaller and better is to replace refractive lenses with a different technology.
That technology exists. It’s the metalens, a device developed at Harvard and commercialized at Metalenz, where I am an applications engineer. We create these devices using traditional semiconductor-processing techniques to build nanostructures onto a flat surface. These nanostructures use a phenomenon called metasurface optics to direct and focus light. These lenses can be extremely thin—a few hundred micrometers thick, about twice the thickness of a human hair. And we can combine the functionality of multiple curved lenses into just one of our devices, further addressing the space crunch and opening up the possibility of new uses for cameras in mobile devices.
Before I tell you how the metalens evolved and how it works, consider a few previous efforts to replace the traditional curved lens.
Conceptually, any device that manipulates light does so by altering its three fundamental properties: phase, polarization, and intensity. The idea that any wave or wave field can be deconstructed down to these properties was proposed by Christiaan Huygens in 1678 and is a guiding principle in all of optics.
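In scalar wave optics, two of those three properties fall out of representing the field as a single complex amplitude: intensity is the squared magnitude and phase is the argument (polarization needs a two-component Jones vector on top of this). A minimal sketch:

```python
import cmath
import math

# A monochromatic field sample as a complex amplitude:
# magnitude 2, phase 45 degrees.
A = 2 * cmath.exp(1j * math.pi / 4)

intensity = abs(A) ** 2   # what a sensor measures
phase = cmath.phase(A)    # what a lens or metasurface shifts
```

Every optical element described in this article, from the Fresnel lens to the metalens, is ultimately a device for reshaping the phase term across an aperture.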
In this single metalens [between tweezers], the pillars are less than 500 nanometers in diameter. The black box at the bottom left of the enlargement represents 2.5 micrometers. Metalenz
In the early 19th century, the world’s most powerful economies placed great importance on the construction of lighthouses with larger and more powerful projection lenses to help protect their shipping interests. However, as these projection lenses grew larger, so did their weight. As a result, the physical size of a lens that could be raised to the top of a lighthouse and structurally supported placed limitations on the power of the beam that could be produced by the lighthouse.
French physicist Augustin-Jean Fresnel realized that if he cut a lens into facets, much of the central thickness of the lens could be removed but still retain the same optical power. The Fresnel lens represented a major improvement in optical technology and is now used in a host of applications, including automotive headlights and brake lights, overhead projectors, and—still—for lighthouse projection lenses. However, the Fresnel lens has limitations. For one, the flat edges of facets become sources of stray light. For another, faceted surfaces are more difficult to manufacture and polish precisely than continuously curved ones are. It’s a no-go for camera lenses, due to the surface accuracy requirements needed to produce good images.
Another approach, now widely used in 3D sensing and machine vision, traces its roots to one of the most famous experiments in modern physics: Thomas Young’s 1802 demonstration of diffraction. This experiment showed that light behaves like a wave, and when the waves meet, they can amplify or cancel one another depending on how far the waves have traveled. The so-called diffractive optical element (DOE) based on this phenomenon uses the wavelike properties of light to create an interference pattern—that is, alternating regions of dark and light, in the form of an array of dots, a grid, or any number of shapes. Today, many mobile devices use DOEs to convert a laser beam into “structured light.” This light pattern is projected, captured by an image sensor, then used by algorithms to create a 3D map of the scene. These tiny DOEs fit nicely into small gadgets, yet they can’t be used to create detailed images. So, again, applications are limited.
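The dot pattern follows directly from Young's result: two coherent waves separated by a path difference reinforce or cancel each other. A minimal two-source sketch (the geometry and numbers are illustrative, not a real DOE design):

```python
import math

def two_source_intensity(theta, d, wavelength, i0=1.0):
    """Far-field intensity of two coherent point sources a distance d apart.

    The path difference d*sin(theta) sets the phase difference delta
    between the waves; they reinforce (intensity 4*i0) when delta is a
    multiple of 2*pi and cancel when it is an odd multiple of pi.
    """
    delta = 2 * math.pi * d * math.sin(theta) / wavelength
    return 4 * i0 * math.cos(delta / 2) ** 2

wl, d = 0.94, 10.0  # e.g. near-infrared light, lengths in micrometers
print(two_source_intensity(0.0, d, wl))                      # bright: 4.0
print(two_source_intensity(math.asin(wl / (2 * d)), d, wl))  # dark: ~0.0
```

A DOE generalizes this two-source picture to many scattering features, engineered so the bright directions form the desired array of dots or lines.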
Enter the metalens. Developed at Harvard by a team led by professor Federico Capasso, then-graduate student Rob Devlin, research associates Reza Khorasaninejad, Wei Ting Chen, and others, metalenses work in a way that’s fundamentally different from any of these other approaches.
A metalens is a flat glass surface with a semiconductor layer on top. Etched in the semiconductor is an array of pillars several hundred nanometers high. These nanopillars can manipulate light waves with a degree of control not possible with traditional refractive lenses.
Imagine a shallow marsh filled with seagrass standing in water. An incoming wave causes the seagrass to sway back and forth, sending pollen flying off into the air. If you think of that incoming wave as light energy, and the nanopillars as the stalks of seagrass, you can picture how the properties of a nanopillar, including its height, thickness, and position next to other nanopillars, might change the distribution of light emerging from the lens.
A 12-inch wafer can hold up to 10,000 metalenses, made using a single semiconductor layer. Metalenz
We can use the ability of a metalens to redirect and change light in a number of ways. We can scatter and project light as a field of infrared dots. Invisible to the eye, these dots are used in many smart devices to measure distance, mapping a room or a face. We can sort light by its polarization (more on that in a moment). But probably the best way to explain how we are using these metasurfaces as a lens is by looking at the most familiar lens application—capturing an image.
The process starts by illuminating a scene with a monochromatic light source—a laser. (While using a metalens to capture a full-color image is conceptually possible, that is still a lab experiment and far from commercialization.) The objects in the scene bounce the light all over the place. Some of this light comes back toward the metalens, which is pointed, pillars out, toward the scene. These returning photons hit the tops of the pillars and transfer their energy into vibrations. The vibrations—called plasmons—travel down the pillars. When that energy reaches the bottom of a pillar, it exits as photons, which can be then captured by an image sensor. Those photons don’t need to have the same properties as those that entered the pillars; we can change these properties by the way we design and distribute the pillars.
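One way to see what "design and distribute the pillars" means for focusing: a focusing metasurface typically targets the hyperbolic phase profile from the metalens literature, in which the pillar at each radius delays the light just enough that every ray arrives at the focus in phase. This is the standard textbook profile, not Metalenz's actual proprietary design:

```python
import math

def target_phase(r, f, wavelength):
    """Phase (radians, mod 2*pi) a focusing metasurface must impart at
    radius r from its center to focus light at distance f.

    The extra geometric path from radius r to the focus is
    hypot(r, f) - f; the pillar there must cancel that difference.
    All lengths share one unit (e.g. micrometers).
    """
    return (2 * math.pi / wavelength) * (f - math.hypot(r, f)) % (2 * math.pi)

# The center pillar imparts zero phase; off-center pillars wrap mod 2*pi.
print(target_phase(0.0, 2000.0, 0.94))  # 0.0
```

In practice, a design tool maps each required phase value to a pillar geometry (diameter, height, spacing) that produces it, which is the kind of calculation the article describes converting into semiconductor design files.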
Researchers around the world have been exploring the concept of metalenses for decades.
In a paper published in 1968 in Soviet Physics Uspekhi, Russian physicist Victor Veselago put the idea of metamaterials on the map, hypothesizing that nothing precluded the existence of a material that exhibits a negative index of refraction. Such a material would interact with light very differently than a normal material would. Where light ordinarily bounces off a material in the form of reflection, it would pass around this type of metamaterial like water going around a boulder in a stream.
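Snell's law shows how strange a negative index is: with a negative index on the far side of the interface, the refracted ray emerges on the same side of the surface normal as the incident ray. A quick sketch (the values are illustrative):

```python
import math

def refraction_angle(theta_i_deg, n1, n2):
    """Refraction angle from Snell's law: n1*sin(t1) = n2*sin(t2).

    A negative n2 flips the sign of the refracted angle, meaning the
    ray bends to the *same* side of the normal as the incoming ray.
    """
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    return math.degrees(math.asin(s))

print(refraction_angle(30.0, 1.0, 1.5))   # ordinary glass: ~19.5 degrees
print(refraction_angle(30.0, 1.0, -1.5))  # negative-index: ~-19.5 degrees
```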
It took until 2000 before the theory of metamaterials was implemented in the lab. That year, Richard A. Shelby and colleagues at the University of California, San Diego, demonstrated a negative refractive index metamaterial in the microwave region. They published the discovery in 2001 in Science, causing a stir as people imagined invisibility cloaks. (While intriguing to ponder, creating such a device would require precisely manufacturing and assembling thousands of metasurfaces.)
The first metalens to create high-quality images with visible light came out of Federico Capasso’s lab at Harvard. Demonstrated in 2016, with a description of the research published in Science, the technology immediately drew interest from smartphone manufacturers. Harvard then licensed the foundational intellectual property exclusively to Metalenz, where it has now been commercialized.
A single metalens [right] can replace a stack of traditional lenses [left], simplifying manufacturing and dramatically reducing the size of a lens package. Metalenz
Since then, researchers at Columbia University, Caltech, and the University of Washington, working with Tsinghua University, in Beijing, have also demonstrated the technology.
Much of the development work Metalenz does involves fine-tuning the way the devices are designed. In order to translate image features like resolution into nanoscale patterns, we developed tools to help calculate the way light waves interact with materials. We then convert those calculations into design files that can be used with standard semiconductor processing equipment.
The first wave of optical metasurfaces to make their way into mobile imaging systems have on the order of 10 million silicon pillars on a single flat surface only a few millimeters square, with each pillar precisely tuned to accept the correct phase of light, a painstaking process even with the help of advanced software. Future generations of the metalens won’t necessarily have more pillars, but they’ll likely have more sophisticated geometries, like sloped edges or asymmetric shapes.
Metalenz came out of stealth mode in 2021, announcing that it was getting ready to scale up production of devices. Manufacturing was not as big a challenge as design because the company manufactures metasurfaces using the same materials, lithography, and etching processes that it uses to make integrated circuits.
In fact, metalenses are less demanding to manufacture than even a very simple microchip because they require only a single lithography mask, as opposed to the dozens required by a microprocessor. That makes them less prone to defects and less expensive. Moreover, the features on an optical metasurface are measured in hundreds of nanometers, whereas foundries are accustomed to making chips with features smaller than 10 nanometers.
And, unlike plastic lenses, metalenses can be made in the same foundries that produce the other chips destined for smartphones. This means they could be directly integrated with the CMOS camera chips on site rather than having to be shipped to another location, which reduces their costs still further.
A single meta-optic, in combination with an array of laser emitters, can be used to create the type of high-contrast, near-infrared dot or line pattern used in 3D sensing. Metalenz
In 2022, STMicroelectronics announced the integration of Metalenz’s metasurface technology into its FlightSense modules. Previous generations of FlightSense have been used in more than 150 models of smartphones, drones, robots, and vehicles to detect distance. Such products with Metalenz technology inside are already in consumer hands, though STMicroelectronics isn’t releasing specifics.
Indeed, distance sensing is a sweet spot for the current generation of metalens technology, which operates at near-infrared wavelengths. For this application, many consumer electronics companies use a time-of-flight system, which has two optical components: one that transmits light and one that receives it. The transmitting optics are more complicated: multiple lenses collect light from a laser and transform it into parallel light waves—or, as optical engineers call it, a collimated beam—and a diffraction grating then turns that collimated beam into a field of dots. A single metalens can replace all of those transmitting and receiving optics, saving real estate within the device as well as reducing cost.
And a metalens does the field-of-dots job better in difficult lighting conditions because it can illuminate a broader area using less power than a traditional lens, directing more of the light to where you want it.
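The distance arithmetic a time-of-flight module performs is simple enough to sketch. The numbers below are illustrative and not taken from any specific FlightSense part.

```python
# Time-of-flight ranging reduces to one line of physics: the emitted
# light travels to the target and back, so range is half the
# round-trip time multiplied by the speed of light.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    return C * round_trip_s / 2

def timing_needed_s(range_resolution_m):
    """Timing resolution required to resolve a given distance."""
    return 2 * range_resolution_m / C

print(tof_distance_m(6.67e-9))  # a ~6.67-ns round trip is about 1 m
print(timing_needed_s(0.01))    # resolving 1 cm requires ~67 ps of timing
```

The second function shows why these modules need fast electronics: centimeter-level ranging demands picosecond-scale timing.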
Conventional imaging systems, at best, gather information only about the spatial position of objects and their color and brightness. But the light carries another type of information: the orientation of the light waves as they travel through space—that is, the polarization. Future metalens applications will take advantage of the technology’s ability to detect polarized light.
The polarization of light reflecting off an object conveys all sorts of information about that object, including surface texture, type of surface material, and how deeply light penetrates the material before bouncing back to the sensor. Prior to the development of the metalens, a machine vision system would require complex optomechanical subsystems to gather polarization information. These typically rotate a polarizer—structured like a fence to allow only waves oriented at a certain angle to pass through—in front of a sensor. They then monitor how the angle of rotation impacts the amount of light hitting the sensor.
Metasurface optics are capable of capturing polarization information from light, revealing a material’s characteristics and providing depth information. Metalenz
A metalens, by contrast, doesn’t need a fence; all the incoming light comes through. Then it can be redirected to specific regions of the image sensor based on its polarization state, using a single optical element. If, for example, light is polarized along the X axis, the nanostructures of the metasurface will direct the light to one section of the image sensor. However, if it is polarized at 45 degrees to the X axis, the light will be directed to a different section. Then software can reconstruct the image with information about all its polarization states.
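Once the sensor has recorded the intensity routed to each polarization channel, reconstructing the polarization state is straightforward. The sketch below assumes a hypothetical four-channel layout (0, 45, 90, and 135 degrees) and computes the standard Stokes-parameter quantities from it.

```python
import numpy as np

# Reconstructing linear polarization from the intensities a
# polarization-sorting metalens routes to four sensor regions.
# The channel layout (0, 45, 90, 135 degrees) is an assumption
# for this example; actual metasurface designs vary.
def stokes_linear(i0, i45, i90, i135):
    s0 = i0 + i90                         # total intensity
    s1 = i0 - i90                         # horizontal vs. vertical
    s2 = i45 - i135                       # +45 vs. -45 degrees
    dolp = np.sqrt(s1**2 + s2**2) / s0    # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)       # angle of linear polarization
    return dolp, aolp

# Fully polarized light along the x axis: all the power lands in the
# 0-degree channel and splits evenly between the 45/135 channels.
dolp, aolp = stokes_linear(1.0, 0.5, 0.0, 0.5)
print(dolp, np.degrees(aolp))
```

Applied per pixel, these two quantities become images in their own right, and it is those images that reveal surface texture and material differences invisible to an ordinary camera.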
Using this technology, we can replace previously large and expensive laboratory equipment with tiny polarization-analysis devices incorporated into smartphones, cars, and even augmented-reality glasses. A smartphone-based polarimeter could let you determine whether a stone in a ring is diamond or glass, whether concrete is cured or needs more time, or whether an expensive hockey stick is worth buying or contains micro cracks. Miniaturized polarimeters could be used to determine whether a bridge’s support beam is at risk of failure, whether a patch on the road is black ice or just wet, or if a patch of green is really a bush or a painted surface being used to hide a tank. These devices could also help enable spoof-proof facial identification, since light reflects off a 2D photo of a person at different angles than a 3D face and from a silicone mask differently than it does from skin. Handheld polarizers could improve remote medical diagnostics—for example, polarization is used in oncology to examine tissue changes.
But like the smartphone itself, it’s hard to predict where metalenses will take us. When Apple introduced the iPhone in 2007, no one could have predicted that it would spawn companies like Uber. In the same way, perhaps the most exciting applications of metalenses are ones we can’t even imagine yet.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
LATTICE is an undergrad project from Caltech that’s developing a modular robotic transportation system for the lunar surface that uses autonomous rovers to set up a sort of cable car system to haul things like ice out of deep craters to someplace more useful. The prototype is fully functional, and pretty cool to watch in action.
We’re told that the team will be targeting a full system demonstration deploying across a “crater” on Earth this time next year. As to what those quotes around “crater” mean, your guess is as good as mine.
[ Caltech ]
Happy World Cocktail Day from Flexiv!
[ Flexiv ]
Here’s what Optimus has been up to lately.
As per usual, the robot is moderately interesting, but it’s probably best to mostly just ignore Musk.
[ Tesla ]
The INSECT tarsus-inspired compliant robotic grippER with soft adhesive pads (INSECTER) uses only a single electric actuator with a cable-driven mechanism. It can be easily controlled to perform a gripping motion akin to an insect tarsus (i.e., wrapping around the object) for handling various objects.
[ Paper ]
Congratulations to ANYbotics on their $50 million Series B!
And from 10 years ago (!) at ICRA 2013, here is a video I took of StarlETH, one of ANYmal’s ancestors.
[ ANYbotics ]
In this video we present results from the recent field-testing campaign of the DigiForest project at Evo, Finland. The DigiForest project started in September 2022 and runs up to February 2026. It brings together diverse partners working on aerial robots, walking robots, autonomous lightweight harvesters, as well as forestry decision makers and commercial companies with the goal to create a full data pipeline for digitized forestry.
[ DigiForest ]
The Robotics and Perception Group at UZH will be presenting some new work on agile autonomous high-speed flight through cluttered environments at ICRA 2023.
[ Paper ]
Robots who lift together, stay together.
[ Sanctuary AI ]
The next CYBATHLON competition, which will take place again in 2024, breaks down barriers between the public, people with disabilities, researchers and technology developers. The initiative promotes the inclusion and participation of people with disabilities and improves assistance systems for use in everyday life by the end users.
[ Cybathlon ]
Russia’s invasion of Ukraine in 2022 put Ukrainian communications in a literal jam: Just before the invasion, Russian hackers knocked out Viasat satellite ground receivers across Europe. Then entrepreneur Elon Musk swept in to offer access to Starlink, SpaceX’s growing network of low Earth orbit (LEO) communications satellites. Musk soon reported that Starlink was suffering from jamming attacks and software countermeasures.
In March, the U.S. Department of Defense (DOD) concluded that Russia was still trying to jam Starlink, according to documents leaked by U.S. National Guard airman Ryan Teixeira and seen by the Washington Post. Ukrainian troops have likewise blamed problems with Starlink on Russian jamming, the website Defense One reports. If Russia is jamming a LEO constellation, it would be a new layer in the silent war in space-ground communications.
“There is really not a lot of information out there on this,” says Brian Weeden, the director of program planning for the Secure World Foundation, a nongovernmental organization that studies space governance. But, Weeden adds, “my sense is that it’s much harder to jam or interfere with Starlink [than with GPS satellites].”
Regardless of their altitude or size, communications satellites transmit more power and therefore require more power to jam than navigational satellites. However, compared with large geostationary satellites, LEO satellites—which orbit Earth at an altitude of 2,000 kilometers or lower—have frequent handovers that “introduce delays and opens up more surface for interference,” says Mark Manulis, a professor of privacy and applied cryptography at the University of the Federal Armed Forces’ Cyber Defense Research Institute (CODE) in Munich, Germany.
Security and communications researchers are working on defenses and countermeasures, mostly behind closed doors, but it is possible to infer from a few publications and open-source research how unprepared many LEO satellites are for direct attacks and some of the defenses that future LEO satellites may need.
For years, both private companies and government agencies have been planning LEO constellations, each numbering thousands of satellites. The DOD, for example, has been designing its own LEO satellite network to supplement its more traditional geostationary constellations for more than a decade and has already begun issuing contracts for the constellation’s construction. University research groups are also launching tiny, standardized cube satellites (CubeSats) into LEO for research and demonstration purposes. This proliferation of satellite constellations coincides with the emergence of off-the-shelf components and software-defined radio—both of which make the satellites more affordable, but perhaps less secure.
Russia’s defense agencies commissioned a system called Tobol that’s designed to counter jammers that might interfere with their own satellites, reported journalist and author Bart Hendrickx. That implies that Russia either can transmit jamming signals up to satellites, or suspects that adversaries can.
Many of the agencies and organizations launching the latest generation of low-cost satellites haven’t addressed the biggest security issues they face, researchers wrote in one review of LEO security in 2022. That may be because one of the temptations of LEO is the ability of relatively cheap new hardware to do smaller jobs.
“Satellites are becoming smaller. They are very purpose-specific,” says Ijaz Ahmad, a telecoms security researcher at the VTT Technical Research Centre in Espoo, Finland. “They have less resources for computing, processing, and also memory.” Less computing power means fewer encryption capabilities, as well as less ability to detect and respond to jamming or other active interference.
The rise of software-defined radio (SDR) has also made it easier to get hardware to accomplish new things, including allowing small satellites to cover many frequency bands. “When you make it programmable, you provide that hardware with some sort of remote connectivity so you can program it. But if the security side is overlooked, it will have severe consequences,” Ahmad says.
“At the moment there are no good standards focused on communications for LEO satellites.”
—Mark Manulis, professor of privacy and applied cryptography, University of the Federal Armed Forces
Among those consequences are organized criminal groups hacking and extorting satellite operators or selling information they have captured.
One response to the risks of software-defined radio and the fact that modern low-cost satellites require firmware updates is to include some simple physical security. Starlink did not respond to requests for comments on its security, but multiple independent researchers said they doubt today’s commercial satellites match military-grade satellite security countermeasures, or even meet the same standards as terrestrial communications networks. Of course, physical security can be defeated with a physical attack, and state actors have satellites capable of changing their orbits and grappling with, and thus perhaps physically hacking, communications satellites, the Secure World Foundation stated in an April report.
Despite that vulnerability, LEO satellites do bring certain advantages in a conflict: There are more of them, and they cost less per satellite. Attacking or destroying a satellite “might have been useful against an adversary who only has a few high-value satellites, but if the adversary has hundreds or thousands, then it’s a lot less of an impact,” Weeden says. LEO also offers a new option: sending a message to multiple satellites for later confirmation. That wasn’t possible when only a handful of GEO satellites covered Earth, but it is a way for cooperating transmitters and receivers to ensure that a message gets through intact. According to a 2021 talk by Vijitha Weerackody, a communications engineer at Johns Hopkins University, as few as three LEO satellites may be enough for such cooperation.
Even working together, future LEO constellation designers may need to respond with improved antennas, radio strategies that include spread spectrum modulation, and both temporal and transform-domain adaptive filtering. These strategies come at a cost to data transmission and complexity. But such measures may still be defeated by a strong enough signal that covers the satellite’s entire bandwidth and saturates its electronics.
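Spread spectrum modulation, one of the strategies mentioned above, can be illustrated in a few lines. This is a toy direct-sequence example with invented parameters, not a model of Starlink's actual waveform.

```python
import numpy as np

rng = np.random.default_rng(0)

# Direct-sequence spread spectrum in miniature: each data bit is
# multiplied by a pseudo-random "chip" sequence known to both ends.
# At the receiver, correlating against the same sequence collapses
# the signal while spreading out narrowband interference.
chips_per_bit = 64
# A balanced code (equal numbers of +1 and -1 chips) rejects a
# constant jamming tone exactly.
code = rng.permutation(np.repeat([-1.0, 1.0], chips_per_bit // 2))

def spread(bits):
    symbols = np.repeat(np.where(bits, 1.0, -1.0), chips_per_bit)
    return symbols * np.tile(code, len(bits))

def despread(signal):
    correlations = signal.reshape(-1, chips_per_bit) @ code
    return correlations > 0

bits = np.array([True, False, True, True])
rx = spread(bits) + 3.0  # a jammer adds a tone 3x the chip amplitude
print(despread(rx))      # the original bits survive the jamming
```

The cost is bandwidth: every data bit now occupies 64 chip intervals, which is the data-rate penalty the article alludes to.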
“There’s a need to introduce a strong cryptographic layer,” says Manulis. “At the moment there are no good standards focused on communications for LEO satellites. Governments should push for standards in that area relying on cryptography.” The U.S. National Institute of Standards and Technology does have draft guidelines for commercial satellite cybersecurity that satellite operator OneWeb took into account when designing its LEO constellation, says OneWeb principal cloud-security architect Wendy Ng: “Hats off to them, they do a lot of work speaking to different vendors and organizations to make sure they’re doing the right thing.”
OneWeb uses encryption in its control channels, something a surprising number of satellite operators fail to do, says Johannes Willbold, a doctoral student at Ruhr University, in Bochum, Germany. Willbold is presenting his analysis of three research satellites’ security on 22 May 2023 at the IEEE Symposium on Security and Privacy. “A lot of satellites had straight-up no security measures to protect access in the first place,” he says.
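What a cryptographic layer on a control channel buys, at its most basic, can be sketched with standard primitives. This is a minimal illustration, not OneWeb's implementation; a real system also needs encryption, replay protection, and key management.

```python
import hashlib
import hmac
import os

# The satellite acts only on commands that carry a valid
# authentication tag computed with a key shared between the
# operator and the spacecraft.
KEY = os.urandom(32)
TAG_LEN = 32  # SHA-256 output size

def sign_command(command: bytes) -> bytes:
    return command + hmac.new(KEY, command, hashlib.sha256).digest()

def verify_command(frame: bytes) -> bytes:
    command, tag = frame[:-TAG_LEN], frame[-TAG_LEN:]
    expected = hmac.new(KEY, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("rejected: bad authentication tag")
    return command

frame = sign_command(b"SET_MODE SAFE")
print(verify_command(frame))  # a genuine frame is accepted
tampered = frame[:-1] + bytes([frame[-1] ^ 1])
try:
    verify_command(tampered)
except ValueError as err:
    print(err)                # a forged frame is rejected
```

Even this bare-minimum check defeats the "straight-up no security measures" scenario Willbold describes, where anyone with a transmitter can issue commands.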
Securing the growing constellations of LEO satellites matters to troops in trenches, investors in any space endeavor, anyone traveling into Earth orbit or beyond, and everyone on Earth who uses satellites to navigate or communicate. “I’m hoping there will be more initiatives where we can come together and share best practices and resources,” says OneWeb’s Ng. Willbold, who cofounded an academic workshop on satellite security, is optimistic that there will be: “It’s surprising to me how many people are now in the field, and how many papers they submitted.”
The Jet Propulsion Laboratory’s Ingenuity helicopter is preparing for the 50th flight of its five-flight mission to Mars. Flight 49, which took place last weekend, was its fastest and highest yet—the little helicopter flew 282 meters at an altitude of 16 meters, reaching a top speed of 6.50 meters per second. Not a bad performance for a tech demo that was supposed to be terminated two years ago.
From here, things are only going to get more difficult for Ingenuity. As the Perseverance rover continues its climb up the Jezero crater’s ancient river delta, Ingenuity is trying its best to scout ahead. But the winding hills and valleys make it difficult for the helicopter to communicate with the rover, and through the rover to its team back on Earth. And there isn’t a lot of time or room to spare, because Ingenuity isn’t allowed to fly too close to Perseverance, meaning that if the rover ever catches up to the helicopter, the helicopter may have to be left behind for the rover’s own safety. This high-stakes race between the helicopter scout and the science rover will continue for kilometers.
“Two years in, 10 kilometers flown, and we’re well over an hour now in the skies of Mars.”
—Teddy Tzanetos, NASA
For the Ingenuity team, this new mode of operation was both a challenge and an opportunity. This was nothing new for folks who have managed to keep this 30-day technology demo alive and healthy and productive for years, all from a couple hundred million kilometers away. IEEE Spectrum spoke with Ingenuity team lead Teddy Tzanetos at JPL last week about whether flying on Mars is ever routine, how they upgraded Ingenuity for its extended mission, and what the helicopter’s success means for the future of airborne exploration and science on Mars.
IEEE Spectrum: Is 50 flights on Mars a milestone for you folks, or are things routine enough now that you’re looking at it as just another flight?
Teddy Tzanetos: It’s hugely meaningful. We’ll come back to the routine question in a second, but it’s very meaningful for all of us. When we hit 10 and then 25 it was big, but 50 is a pretty serious number now that we’re 10 times our initial flight count. Two years in, 10 kilometers flown, and we’re well over an hour now in the skies of Mars. So hitting flight 50, it’s a big thing—we’re probably going to set up a happy hour and have a big party for the team.
Can you talk about some of the new challenges that Ingenuity has been facing as it makes its way up Jezero crater’s river delta along with the Perseverance rover?
Tzanetos: The core of the challenge here is that the paradigm has changed. When you look at the first year of Ingenuity’s extended operations, we were still in the Three Forks area, where the ground was flat. We could get line of sight from the helicopter to the rover from hundreds and hundreds of meters away. Our longest link that we established was 1.2 kilometers—a massive distance.
And then we started to realize that the rover was going to enter the river delta in like six months. It’s going to start climbing up through dozens and dozens of meters of elevation change and passing through ravines, and that’s going to start presenting a telecom issue for us. We knew that it couldn’t be business as usual anymore—if we still wanted to keep this helicopter mission going, not only did we need to change the way we were operating, we also had to change the helicopter itself.
“We owe it to everyone who worked on Ingenuity and everyone who will continue to work on rotorcraft on Mars to try and get everything out of this little spacecraft that we can.”
—Teddy Tzanetos, NASA
This realization culminated in the most challenging flight software upgrade we’ve ever done with Ingenuity, which happened last December. We went into the guts of our algorithms and added two new features. One was the ability to detect and react to landing hazards from the air, which involved handing over a little bit of autonomy back to Ingenuity, with the ability to tell it, “Fly to your terminal waypoint and try and land where we think is good, based off of orbital imagery. But if you have better information from your images than what we humans had here on Earth, and you see a hazard, pick a safer site and land there instead.” So that’s one huge change in what’s happening now. And we need that at the river delta because we’re no longer flying in a parking lot—besides the challenge of the elevation change, the terrain is different as well, with more and larger rocks that Ingenuity needs to avoid.
The second feature that we added was to include information about the terrain to Ingenuity’s navigation filter. When we designed Ingenuity, we assumed we were only going to be deployed on the flat terrain of Three Forks. Therefore, any change in the laser altimeter measurement we could trust to be a real change in the motion of the helicopter, or we could at least filter that into our altitude data. But that’s no longer the case. Now, as Ingenuity flies, if the altimeter sees a big decrease in elevation, that could be because the ground is rising to meet us rather than because we’re moving down. So since December, we’ve been telling Ingenuity about the elevation profile across its intended flight so that it knows what the ground is doing underneath it.
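The terrain-compensation idea Tzanetos describes reduces to a simple identity, sketched below. This is purely illustrative; Ingenuity's actual navigation filter is far more involved.

```python
# With a known terrain profile, an altimeter reading can be converted
# into altitude in a fixed reference frame, so rising ground isn't
# mistaken for descent: altitude = ground height + height above ground.
def altitude_above_reference(laser_range_m, terrain_height_m):
    return terrain_height_m + laser_range_m

# The laser range drops from 10 m to 8 m, but the terrain rose 2 m:
# the helicopter hasn't actually descended.
print(altitude_above_reference(10.0, 0.0))  # 10.0
print(altitude_above_reference(8.0, 2.0))   # 10.0
```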
Now that both the rover and the helicopter have begun the river delta climb, we’re also paying very close attention to our telecom-link budget maps. You can imagine every hill or rise that could occlude the line of sight between the helicopter antenna and the rover antenna will have a big impact on your telecom link, and we have wonderful maps from orbit where we can pick a potential landing point and propagate our radio-link budget calculation across that point.
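The line-of-sight portion of such a link-budget check can be sketched as follows. The elevation profile here is invented, and a real analysis would also account for antenna patterns, path loss, and terrain margins.

```python
import numpy as np

# Given terrain heights sampled along the straight path between two
# antennas, test whether the direct ray between them clears the ground.
def line_of_sight(profile, h_tx, h_rx):
    """profile: terrain heights (m) at evenly spaced points from
    transmitter to receiver, inclusive. h_tx/h_rx: antenna heights
    above the terrain at each end."""
    n = len(profile)
    ray = np.linspace(profile[0] + h_tx, profile[-1] + h_rx, n)
    return bool(np.all(ray[1:-1] >= profile[1:-1]))

flat = np.zeros(5)
ridge = np.array([0.0, 0.5, 3.0, 0.5, 0.0])
print(line_of_sight(flat, 1.0, 1.0))   # True: nothing in the way
print(line_of_sight(ridge, 1.0, 1.0))  # False: a 3-m rise blocks 1-m antennas
```

Repeating this check from each candidate landing point to the rover's expected position, weighted by distance, is the essence of the map-based link planning described above.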
We’re trying to plan these flights as aggressively as we can to make sure that we stay ahead of Perseverance. We don’t want to run the risk of having a situation where the rover may need to wait for Ingenuity—that’s not a good thing for anybody. But we also want to provide value for the rover by scouting ahead, and what we hope to do on flight 50 is to get some imagery of the Belva crater, which is this beautiful massive crater to the north of where Ingenuity currently is. We’re going to get perspectives that the rover team would not be able to provide for the science team, and it’s really exciting for us when there are these moments that are uniquely driven by Ingenuity’s capability. We want to go after those, because we want to provide that value while she’s still healthy. While we still can. We owe it to everyone who worked on Ingenuity and everyone who will continue to work on rotorcraft on Mars to try and get everything out of this little spacecraft that we can.
“One of the best hallmarks of technology success is when you don’t realize it, or when it becomes boring. That means the technology is working, and that’s a wonderful feeling.”
—Teddy Tzanetos, NASA
At one point, NASA was very clear that Ingenuity’s mission would come to an end so that Perseverance could move on to focus on its primary mission. But obviously, Ingenuity is still flying, and still keeping up with the rover. Not only that, but we’ve heard from a rover driver how valuable it is to have Ingenuity scouting ahead. With that in mind, as Ingenuity navigates this challenging terrain, will there be any flexibility if something doesn’t go quite right, or will Perseverance just leave the helicopter behind?
Tzanetos: We have to look at the big picture. The most important thing at this point is for Perseverance to collect samples and do science. If you look at everything that needs to be done across all of the rover’s science payloads, every sol [Martian day] is precious. And the helicopter team understands that.
We’re doing our best to become more efficient, and I think that’s a big win that we don’t celebrate enough on the Ingenuity team internally—how much more efficient we are today compared to where we were two years ago. Earlier, you mentioned flying becoming routine. I think the team has succeeded in doing that, and I’m extremely proud of that accomplishment. One of the best hallmarks of technology success is when you don’t realize it, or when it becomes boring. That means the technology is working, and that’s a wonderful feeling.
There’s what’s called a tactical window that we have between the downlink of the last sol’s activity and when we need to uplink activity for the next sol, which is anywhere from five to 10 hours. A certain cadence of activities have to take place during that window, and we need to pass certain checkpoints to get our data uploaded and radiated through the Deep Space Network in time. We’ve worked very, very hard to minimize our footprint on that timeline, while also being reactive so that we can move quickly on any last-minute changes that the rover team needs us to accommodate. We have to get in, fly, and get out.
Anomalies will happen. That’s just the nature of Mars. But when those moments occur, the helicopter and rover teams back each other up. To be clear, no one on the helicopter team wants to cause a delay for the rover. We all want the rover to fulfill its mission, get its samples, and get the science done. If we have a serious anomaly, we’ll have to take that one sol at a time. We’re going to try as hard as we can to make sure we can keep pushing this little baby as far as we can while still accomplishing the core science mission.
NASA’s Ingenuity Mars Helicopter takes off and lands in this video captured on 19 April 2021 by Mastcam-Z, an imager aboard NASA’s Perseverance Mars rover. This video features only the moments of takeoff and landing—and not footage of the helicopter hovering for about 30 seconds. NASA/JPL-Caltech/ASU/MSSS
How do you balance risk to the helicopter against exploration and science goals, or trying new things like pushing Ingenuity’s flight envelope?
Tzanetos: That’s the fun part! There’s no instruction manual. The way we do it is we have a phone call with the core people on the team, and everyone just shares their opinions. The highest priority for us is getting some good scouting imagery for the scientists and rover drivers—we jump at those opportunities. If we’re flying through a piece of terrain that isn’t particularly interesting, that’s when we start looking at the flight envelope developments, right? With flight 49, we’re going higher than we ever had before and flying faster than we ever have before. That’s not a request from the science community or the rover planners; that’s coming from our own internal team where we’re trying to release capability piece by piece as the flights go on, because every time we get that win, it’s a win for the sample recovery helicopters. So there’s that ever-present pressure to push harder, push faster, push higher. And let’s also get some wonderful scouting data along the way when we can.
What have you learned about flying helicopters on Mars from 50 flights that you would have no idea about if you’d been able to do just five flights?
Tzanetos: Tons of things, since I just talked about flying faster and flying higher, and we’ve now legitimately expanded Ingenuity’s flight envelope. There’s the lifetime argument, which is obvious—this design has lasted much longer than anyone could have expected, even just in terms of parts and workmanship. Each one of Ingenuity’s nearly 1,000 solder joints was soldered by technicians at JPL who have the most blessed, precise hands. We’d designed Ingenuity to fly in springtime on Mars, but during the Martian winter, for more than 200 sols the temperature cycled between 20 °C and –90 °C and back again. Eventually, it got so cold that Ingenuity’s battery would die every night, the heater would stop running, and everything would freeze. That was a massive curve ball that we had to contend with, but because of the workmanship of those people, Ingenuity was able to survive.
“We now have a stake in the ground to say, ‘Off-the-shelf works, we can trust these things.’”
—Teddy Tzanetos, NASA
Also, dust. We knew that dust would settle on Ingenuity’s solar panel, but we’ve shown that through the process of flying, there’s some sort of effect that’s helping us to keep our panel clean. It’s difficult to put a finger on exactly what it is—maybe the vibration of flight, or the downwash of air passing over the solar panel and into the rotors, or the oncoming air as we move forward. And it wasn’t just the dust on the panels; we also got dust in our actuators. Last year, Ingenuity weathered a big dust storm, and afterwards when we tried checking our control surfaces, things did not look good. The motor currents were way too high, and we were left scratching our heads, trying to figure out what to do. We didn’t have dust boots around the rotor system simply because we had thought, “We’re only going to be operating for 30 days, we don’t need them.”
Our partners at AeroVironment [who worked with JPL on the Mars helicopter design] had one of the swash plate mechanisms lying around, so they spoke to our geologists to figure out what kinds of dust particles might have gotten blown into the swash plate on Mars. We sent them some simulated Mars dust, and they threw it at the swash plate, and then did an experiment to figure out how many times they needed to cycle it before it started to operate properly. Seven cycles got most of the dust out, so we tried that on Mars, and it worked. So now we have a new tool in our tool belt: We know how to clean ourselves. That’s huge. And we wouldn’t have figured out any of these things had we not gone past five flights.
Looking at the Mars sample return helicopters, how much of their design has been made possible by the fact that Ingenuity has been able to fly this long and answer these questions that you might not have even thought to ask?
The entire design. I don’t think we’d be talking about sample recovery helicopters if Ingenuity didn’t fly, period, and if it hadn’t survived for as long as it has. You have to keep in mind, Ingenuity is a tech demo. These sample recovery helicopters are a real part of the mission now. If Perseverance has an anomaly in the next decade, these helicopters are the backup—they have to work. And I’m sure that Ingenuity’s two years of extended operations provided the evidence necessary to even start talking about the sample recovery helicopters. Otherwise, it would be crazy to think, “Let’s go from tech demo to part of a class B mission within a year.”
That’s amazing. It must feel really good for you folks to have completely changed what the sample return mission looks like because of how successful Ingenuity has been.
Absolutely. I personally thought to myself, “Hey, this is great, Ingenuity has been doing a great job, and this will be wonderful data for the next time we send a rotorcraft to Mars.” Which I thought was going to be like 10 years later—I thought that the Mars sample return would happen with a rover, and then maybe after that, we could throw some helicopters on Mars, maybe a hexacopter with some science payloads on it. Never in my wildest dreams did I ever think, while we’re still flying Ingenuity, that we’d be designing the next helicopter mission based on Ingenuity to go to Mars.
More broadly, how has Ingenuity influenced NASA’s approach to robotics?
From a robotics perspective, I hope one of the long-lasting impacts of Ingenuity is the adoption of commercial off-the-shelf technology into more NASA missions, and other non-NASA missions into space. This was the first time we flew a cellphone processor, not because we loved the idea about using a part that wasn’t radiation hardened but because we were forced to. We needed a high-throughput processor, and the only way to do that and be lightweight enough was to use a cellphone chip. There was a lot of concern about that—we did some initial testing, but given that we were a tech demo, which means high-risk, high reward, we could only do so much. And here we are, two years later, with this Snapdragon Qualcomm processor that’s been running for two years on the surface of Mars, not to mention all the other components like the IMU [inertial measurement unit], the camera, the battery, the solar panels. I think that’s one of the unsung victories of Ingenuity. We now have a stake in the ground to say, “Off-the-shelf works, we can trust these things.” And we can make a stronger argument for the next mission to really enable your engineers and your scientists to have much more technology on board than anything else we’ve sent into space.
Ingenuity will attempt Flight 50 anytime now, with the goal of traveling 300 meters to the other side of a ridge. The landing site may make it difficult to know whether the flight was successful until Perseverance catches up a bit, but we hope to hear the good news within the next few days.
A year has passed since the launch of the ESA’s Rosalind Franklin rover mission was put on hold, but the work has not stopped for the ExoMars teams in Europe.
In this programme, the ESA Web TV crew travel back to Turin, Italy to talk to the teams and watch as new tests are being conducted with the rover’s Earth twin Amalia while the real rover remains carefully stored in an ultra-clean room.
The 15-minute special programme gives an update on what has happened since the mission was put on hold in 2022 because of the Russian invasion of Ukraine, the plan ahead, the new challenges, the latest deep drilling test and the stringent planetary protection measures in place.
ESA’s Rosalind Franklin rover has unique drilling capabilities and an on-board science laboratory unrivalled by any other mission in development. Its twin rover Amalia was back on its wheels and drilled down 1.7 metres into a martian-like ground in Italy – about 25 times deeper than any other rover has ever attempted on Mars. The rover also collected samples for analysis under the watchful eye of European science teams.
ESA, together with international and industrial partners, is reshaping the ExoMars Rosalind Franklin Mission with new European elements, including a lander, and a target date of 2028 for the trip to Mars.
The newly shaped Rosalind Franklin Mission will recover one of the original objectives of ExoMars – to create an independent European capability to access the surface of Mars with a sophisticated robotic payload.
More information: https://www.esa.int/ExoMars
A powerful trillion-watt laser shot at the sky can generate lightning rods in the air that can guide lightning strikes to keep them from causing havoc, a new study finds.
To date, the most common and effective form of protection against lightning is the lightning rod invented by Benjamin Franklin in 1752. These pointed electrically conductive metal rods intercept lightning strikes and guide their electric current safely to the ground.
However, a key drawback of a conventional lightning rod is that the radius of its area of protection is roughly equal to its height. Since there are practical limits to how tall one can build a lightning rod, this means they may not prove useful at protecting large areas, including sensitive infrastructure such as airports, rocket launchpads and nuclear power plants, says study senior author Jean-Pierre Wolf, a physicist at the University of Geneva.
“This is the first demonstration that lightning can be controlled by a laser.”
—Jean-Pierre Wolf, University of Geneva
Scientists first suggested using lasers to generate lightning rods in the air nearly 50 years ago. “The idea is to create a very long lightning rod with the laser,” Wolf says.
In the new study, researchers conducted experiments during the summer of 2021 at the top of Mount Säntis, which at 2,502 meters above sea level, is the highest mountain in the Alpstein massif of northeastern Switzerland. The laser was activated every time storms were forecast between June and September, with air traffic closed over the area during these tests.
Wolf and his colleagues sought to protect a 124-meter transmitter tower equipped with a traditional lightning rod at the summit belonging to telecommunications provider Swisscom. This tower is struck by lightning about 100 times a year, and scientists had previously equipped it with multiple sensors to analyze these strikes.
Near the tower, the researchers installed a near-infrared laser the size of a large car. It fired pulses each packing about a half-joule of energy and a picosecond (trillionth of a second) long roughly a thousand times a second, with a peak power of a terawatt (trillion watts). (It also shot a visible green beam to help show the laser’s path.)
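A quick sanity check on the quoted figures (our own arithmetic, not a calculation from the study): peak power is pulse energy divided by pulse duration, while average power is pulse energy times repetition rate. With these round numbers the peak comes out at half a terawatt, so the "terawatt" figure implies pulses slightly shorter or more energetic than the rounded values here.

```python
pulse_energy_j = 0.5      # about half a joule per pulse
pulse_duration_s = 1e-12  # roughly a picosecond
rep_rate_hz = 1000        # about a thousand pulses per second

# Peak power: the energy of one pulse compressed into its tiny duration.
peak_power_w = pulse_energy_j / pulse_duration_s

# Average power: energy per pulse times pulses per second.
avg_power_w = pulse_energy_j * rep_rate_hz

print(f"Peak power: {peak_power_w:.1e} W")    # ~5e11 W, i.e. half a terawatt
print(f"Average power: {avg_power_w:.0f} W")  # 500 W
```

The contrast is striking: a machine averaging only 500 watts reaches terawatt-scale instantaneous power by squeezing each half-joule into a picosecond.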
“Imagine transporting a 10-ton laser to 2,500-meter altitude on a mountain with helicopters, making it run in very harsh conditions, tracking lightning in extreme weather like winds up to 200 kilometers per hour, heavy rain, hail, temperatures varying from -10 degrees to 20 degrees Celsius in the same day, and then, when it works, you get a massive lightning bolt some tens of meters next to you—and you’re so happy,” Wolf says.
The laser pulses can alter the refractive index of the air—the property of a material that determines how quickly light travels within it. This can make the air behave like a series of lenses.
After crossing this lensing air, the intense, short laser pulses can rapidly ionize and heat air molecules, expelling them from the path of the beam at supersonic speeds. This leaves behind a channel of low-density air for roughly a millisecond. These “filaments” possess high electric conductivity and can thus serve as lightning rods, and can range up to 100 meters long. The researchers could adjust the laser to create filaments that appear up to a kilometer from the machine.
In experiments, the scientists created filaments above, but near, the tip of the tower’s lightning rod. This essentially boosted the rod’s height by at least 30 meters, extending its area of protection so that lightning would not strike parts of the tower otherwise outside the rod’s shelter, says study lead author Aurélien Houard, a research scientist at the Superior National School of Advanced Techniques in Paris.
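The payoff of that 30-meter boost is easy to quantify with the radius-equals-height rule of thumb mentioned earlier (an illustrative calculation with an assumed 10-meter rod, not figures from the study):

```python
import math

def protected_area_m2(rod_height_m: float) -> float:
    """Rule of thumb: a lightning rod protects a circle on the
    ground whose radius roughly equals the rod's height."""
    return math.pi * rod_height_m ** 2

# An assumed conventional 10 m rod vs. the same rod with a
# 30 m laser-created filament extending it upward.
base = protected_area_m2(10)
extended = protected_area_m2(10 + 30)
print(f"Protected area grows by a factor of {extended / base:.0f}")  # 16
```

Because protected area scales with the square of effective height, even modest laser extensions multiply coverage dramatically, which is why the team's long-term 500-meter goal would matter so much for sites like airports.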
The laser operated for more than six hours during thunderstorms happening within three kilometers of the tower. The tower was hit by at least 16 lightning flashes, all of which streaked upward.
Four of these flashes occurred while the laser was operating. High-speed camera footage and radio and X-ray detectors showed the laser helped guide the course of these discharges. One of these guided strikes was recorded on camera and revealed it followed the laser path for nearly 60 meters.
During tests carried out on the summit of Mt. Säntis by Jean-Pierre Wolf and Aurélien Houard’s team, the scientists noted that lightning discharges followed laser beams for several dozen meters before reaching the Swisscom telecommunications tower (in red and white).Xavier Ravinet/UNIGE
“This is the first demonstration that lightning can be controlled by a laser,” Wolf says.
Although lab experiments had suggested that lasers could help guide lightning strikes, field attempts over the past 20 or so years had all failed. Wolf, Houard and their colleagues suggest their new work may have succeeded because the pulse rate of their laser was hundreds of times greater than in prior attempts. The more pulses are used, the greater the chance that one intercepts the activity leading up to a lightning flash. In addition, higher pulse rates are likely better at keeping filaments electrically conductive, they added.
Wolf noted their work is not geoengineering research. “We are not modifying the climate,” he says. “We deflect lightning to protect areas.”
In the long term, the scientists would like to use lasers to extend lightning rods by 500 meters. In addition, they would like to run experiments at sites such as airports and rocket launchpads, Wolf notes.
The researchers detailed their findings 16 January in the journal Nature Photonics.
Researchers used a model to predict how the smoke would move through the region and said it wouldn’t pose a health risk
Smoke from Canadian wildfires that has descended upon parts of the eastern US and Canada in a thick haze has drifted over Norway and is expected to hit southern Europe, Norwegian officials said on Friday.
Using a climate forecast model, atmosphere and climate scientists with the Norwegian climate and environmental research institute (NILU) predicted how the smoke would travel through the atmosphere, flowing over the Scandinavian country before moving further south. The smoke was not expected to pose a health risk there.
Air pollution in New York, the collapse of the Nova Kakhovka dam in Ukraine, protests in Colombo and Novak Djokovic at the French Open in Paris: the most striking images this week.
Satellite images captured from the International Space Station on Wednesday showed smoke from Canada's raging wildfires spreading to the US. The massive cloud of smoke was seen moving across Lake Superior, in the Great Lakes region, passing over Lake Huron and Lake Erie, and ending in Pennsylvania, which appears completely obscured. The smoke pushed further down the Atlantic seaboard on Thursday, blanketing Washington DC in an unhealthy haze.
Swift investment would make any Labour government a climate and economic leader – so why the dithering?
As wildfire smoke engulfs much of the east coast of the US and average global temperatures continue to rise, with the world imminently facing some of the hottest years on record, it would be an error of judgment for the Labour party to delay its green investment pledge. Doing so would not only be a mistake for our economy and the climate, but also threaten Labour’s electoral prospects, given strong public demand for bold action on this issue.
Together with its world-leading promise to end all new domestic oil and gas developments, the Labour party’s £28bn-a-year investment pledge to green industries marks the scale of climate ambition we need to see from a future British government. These commitments mark Labour out as a potential major climate leader and, like Joe Biden’s landmark Inflation Reduction Act (IRA), the investment pledge clearly demonstrates that the party is in tune with the economic realities of today’s world.
Rebecca Newsom is head of politics at Greenpeace.
Readers respond to Rowan Atkinson’s growing disillusionment with electric vehicles
Andrew Gould’s letter (4 June) highlights one flaw in Rowan Atkinson’s critique of electric cars (I love electric vehicles – and was an early adopter. But increasingly I feel duped, 3 June). Another serious flaw was to suggest it would be “sensible” to use electricity to produce synthetic fuels for petrol engines, rather than use electric cars.
This would be highly inefficient. A Guardian article last month (E-fuels: how big a niche can they carve out for cars?, 5 May) noted that only about 16% of the electricity used to produce synthetic fuels ends up propelling the car, compared with 77% for a battery-electric vehicle. To put this another way: the electricity needed to run one petrol car on synthetic fuel could run nearly five equivalent electric cars.
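The "nearly five" figure follows directly from the two efficiencies quoted, as a simple ratio (computed here for illustration):

```python
# Share of input electricity that reaches the wheels, per the figures above.
efuel_efficiency = 0.16  # via synthetic (e-)fuel in a petrol engine
bev_efficiency = 0.77    # via a battery-electric vehicle

# How many electric cars the same electricity could power.
cars_per_efuel_car = bev_efficiency / efuel_efficiency
print(f"{cars_per_efuel_car:.1f}")  # prints 4.8
```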
From sustainable fisheries to toxic battery waste, these images were chosen because they tell a compelling story about the state of our planet.
Poland has a deep and historic relationship with coal, importing huge amounts despite producing yet more locally. With the energy crisis biting, fuelled by the war in Ukraine, the country’s government withdrew restrictions on burning materials and subsidised coal, creating huge air quality issues, particularly in the industrial south – reversing 10 years of hard work by air pollution campaigners in the process.
The Guardian visits southern Poland to witness first hand the impact of this decision on affected communities, meeting the ostracised miners at the front of the culture wars, and joining climate activists visiting towns in the region that are fighting back against fossil fuels and air pollution.
For about as long as engineers have talked about beaming solar power to Earth from space, they’ve had to caution that it was an idea unlikely to become real anytime soon. Elaborate designs for orbiting solar farms have circulated for decades—but since photovoltaic cells were inefficient, any arrays would need to be the size of cities. The plans got no closer to space than the upper shelves of libraries.
That’s beginning to change. Right now, in a sun-synchronous orbit about 525 kilometers overhead, there is a small experimental satellite called the Space Solar Power Demonstrator One (SSPD-1 for short). It was designed and built by a team at the California Institute of Technology, funded by donations from the California real estate developer Donald Bren, and launched on 3 January—among 113 other small payloads—on a SpaceX Falcon 9 rocket.
“To the best of our knowledge, this would be the first demonstration of actual power transfer in space, of wireless power transfer,” says Ali Hajimiri, a professor of electrical engineering at Caltech and a codirector of the program behind SSPD-1, the Space Solar Power Project.
The Caltech team is waiting for a go-ahead from the operators of a small space tug to which the satellite is attached and which provides guidance and attitude control. If all goes well, SSPD-1 will spend at least five to six months testing prototype components of possible future solar stations in space. In the next few weeks, the project managers hope to unfold a lightweight frame, called DOLCE (short for Deployable on-Orbit ultraLight Composite Experiment), on which parts of future solar arrays could be mounted. Another small assembly on the spacecraft contains samples of 32 different types of photovoltaic cells, intended to see which would be most efficient and robust. A third part of the vehicle contains a microwave transmitter, set up to prove that energy from the solar cells can be sent to a receiver. For this first experiment, the receivers are right there on board the spacecraft, but if it works, an obvious future step would be to send electricity via microwave to receivers on the ground.
Caltech’s Space Solar Power Demonstrator, shown orbiting Earth in this artist’s conception, was launched on 3 January.Caltech
One can dismiss the 50-kilogram SSPD-1 as yet another nonstarter, but a growing army of engineers and policymakers take solar energy from space seriously. Airbus, the European aerospace company, has been testing its own technology on the ground, and government agencies in China, Japan, South Korea, and the United States have all mounted small projects. “Recent technology and conceptual advances have made the concept both viable and economically competitive,” said Frazer-Nash, a British engineering consultancy, in a 2021 report to the U.K. government. Engineers working on the technology say microwave power transmission would be safe: unlike ionizing radiation, microwaves do not harm people or other things in their path.
No single thing has happened to start this renaissance. Instead, say engineers, several advances are coming together.
For one thing, the cost of launching hardware into orbit keeps dropping, led by SpaceX and other, smaller companies such as Rocket Lab. SpaceX has a simplified calculator on its website, showing that if you want to launch a 50-kg satellite into sun-synchronous orbit, they’ll do it for US $275,000.
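Those headline numbers imply a simple per-kilogram price (our own arithmetic, using the two figures just quoted):

```python
rideshare_price_usd = 275_000  # SpaceX's quoted rideshare price
payload_kg = 50                # for a 50-kg satellite to sun-synchronous orbit

price_per_kg = rideshare_price_usd / payload_kg
print(f"${price_per_kg:,.0f} per kg")  # prints $5,500 per kg
```

At thousands rather than tens of thousands of dollars per kilogram, the launch-cost side of the space-solar equation looks very different than it did in past decades.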
Meanwhile, photovoltaic technology has improved, step by step. Lightweight electronic components keep getting better and cheaper. And there is political pressure as well: Governments and major companies have made commitments to decarbonize in the battle against global climate change, committing to renewable energy sources to replace fossil fuels.
Most solar power, at least for the foreseeable future, will be Earth-based, which will be cheaper and easier to maintain than anything anyone can launch into space. Proponents of space-based solar power say that for now, they see it as best used for specialty needs, such as remote outposts, places recovering from disasters, or even other space vehicles.
But Hajimiri says don’t underestimate the advantages of space, such as unfiltered sunlight that is far stronger than what reaches the ground and is uninterrupted by darkness or bad weather—if you can build an orbiting array light enough to be practical.
Most past designs, dictated by the technology of their times, included impossibly large truss structures to hold solar panels and wiring to route power to a central transmitter. The Caltech team would dispense with all that. An array would consist of thousands of independent tiles as small as 100 square centimeters, each with its own solar cells, transmitter, and avionics. They might be loosely connected, or they might even fly in formation.
Time-lapse images show the experimental DOLCE frame for an orbiting solar array being unfolded in a clean room.Caltech
“The analogy I like to use is that it’s like an army of ants instead of an elephant,” says Hajimiri. Transmission to receivers on the ground could be by phased array—microwave signals from the tiles synchronized so that they can be aimed with no moving parts. And the parts—the photovoltaic cells with their electronics—could perhaps be so lightweight that they’re flexible. New algorithms could keep their signals focused.
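The phased-array idea can be sketched in a few lines: to steer a beam with no moving parts, each tile delays its signal so that the emissions add up in the chosen direction. This is a generic textbook sketch of linear-array steering, not Caltech's flight code; the tile spacing and microwave frequency are assumed values.

```python
import math

C = 3e8  # speed of light, m/s

def steering_phases(n_tiles: int, spacing_m: float,
                    freq_hz: float, angle_deg: float) -> list[float]:
    """Phase offset (radians) for each element of a linear array so
    the combined microwave beam points angle_deg off boresight."""
    wavelength = C / freq_hz
    theta = math.radians(angle_deg)
    # Each successive element must compensate an extra path length
    # of spacing * sin(theta) toward the target direction.
    return [2 * math.pi * i * spacing_m * math.sin(theta) / wavelength
            for i in range(n_tiles)]

# Broadside (0 degrees off boresight): every tile transmits in phase.
print(steering_phases(4, 0.05, 10e9, 0.0))  # [0.0, 0.0, 0.0, 0.0]
```

Re-aiming the beam is then purely a matter of recomputing these offsets electronically, which is what lets loosely connected, even free-flying, tiles act as one transmitter.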
“That’s the kind of thing we’re talking about,” said Harry Atwater, a coleader of the Caltech project, as SSPD-1 was being planned. “Really gossamer-like, ultralight, the limits of mass-density deployable systems.”
If it works out, in 30 years maybe there could be orbiting solar power fleets, adding to the world’s energy mix. In other words, as a recent report from Frazer-Nash concluded, this is “a potential game changer.”
This article appears in the April 2023 print issue as “Trial Run for Orbiting Solar Array.”
At IEEE, we know that the advancement of science and technology is the engine that drives the improvement of the quality of life for every person on this planet. Unfortunately, as we are all aware, today’s world faces significant challenges, including escalating conflicts, a climate crisis, food insecurity, gender inequality, and the approximately 2.7 billion people who cannot access the Internet.
The COVID-19 pandemic exposed the digital divide like never before. The world saw the need for universal broadband connectivity for remote work, online education, telemedicine, entertainment, and social networking. Those who had access thrived while those without it struggled. As millions of classrooms moved online, the lack of connectivity made it difficult for some students to participate in remote learning. Adults who could not perform their job virtually faced layoffs or reduced work hours.
The pandemic also exposed weaknesses in the global infrastructure that supports the citizens of the world. It became even more apparent that vital communications, computing, energy, and distribution infrastructure was not always equitably distributed, particularly in less developed regions.
I had the pleasure of presenting the 2023 IEEE President’s Award to Doreen Bogdan-Martin, secretary-general of the International Telecommunication Union, on 28 March, at ITU’s headquarters in Geneva. The award recognizes her distinguished leadership at the agency and her notable contributions to the global public.
It is my honor to recognize such a transformational leader and IEEE member for her demonstrated commitment to bridging the digital divide and to ensuring connectivity that is safe, inclusive, and affordable to all.
Nearly 45 percent of global households do not have access to the Internet, according to UNESCO. A report from UNICEF estimates that nearly two-thirds of the world’s schoolchildren lack Internet access at home.
This digital divide is particularly impactful on women, who are 23 percent less likely than men to use the Internet. According to UNESCO, in 10 countries across Africa, Asia, and South America, women are between 30 percent and 50 percent less likely than men to make use of the Internet.
Even in developed countries, Internet access is often lower than one might imagine. More than six percent of the U.S. population does not have a high-speed connection. In Australia, the figure is 13 percent. Globally, just over half of households have an Internet connection, according to UNESCO. In the developed world, 87 percent are connected, compared with 47 percent in developing nations and just 19 percent in the least developed countries.
As IEEE looks to lead the development of technology to tackle climate change and empower universal prosperity, it is essential that we recognize the role that meaningful connectivity and digital technology play in the organization’s goals to support global sustainability, drive economic growth, and transform health care, education, employment, gender equality, and youth empowerment.
IEEE members around the globe are continuously developing and applying technology to help solve these problems. It is that universal passion—to improve global conditions—that is at the heart of our mission, as well as our expanding partnerships and significant activities supporting the achievement of the U.N. Sustainable Development Goals.
One growing partnership is with the International Telecommunication Union, a U.N. specialized agency that helps set policy related to information and communication technologies. IEEE Member Doreen Bogdan-Martin was elected as ITU secretary-general and took office on 1 January, becoming the first woman to lead the 155-year-old organization. Bogdan-Martin is the recipient of this year’s IEEE President’s Award [see sidebar].
IEEE and ITU share the goal of bringing the benefits of technology to all of humanity. I look forward to working closely with the U.N. agency to promote meaningful connectivity, intensify cooperation to connect the unconnected, and strengthen the alignment of digital technologies with inclusive sustainable development.
I truly believe that one of the most important applications of technology is to improve people’s lives. For those in underserved regions of the world, technology can improve educational opportunities, provide better health care, alleviate suffering, and maintain human dignity.
Technology and technologists, particularly IEEE members, have a significant role to play in shaping life on this planet. They can use their skills to develop and advance technology—from green energy to reducing waste and emissions, and from transportation electrification to digital education, health, and agriculture. As a person who believes in the power of technology to benefit humanity, I find this to be a very compelling vision for our shared future.
Please share your thoughts with me: email@example.com.
IEEE president and CEO
This article appears in the June 2023 print issue as “Connecting the Unconnected.”
Here are some answers about the new social media network Bluesky that you don’t need an invite to see.
The post Is Bluesky Billionaire-Proof? appeared first on The Intercept.
Everything burns. Given the right environment, nearly any material can burn once oxygen is added, but because combustion depends on finding the right mix and generating enough heat, some materials combust much more easily than others. Researchers interested in knowing more about a type of fire called discrete burning used ESA’s microgravity experiment facilities to investigate.
The 19-seater Dornier 228 propeller plane that took off into the cold blue January sky looked ordinary at first glance. Spinning its left propeller, however, was a 2-megawatt electric motor powered by two hydrogen fuel cells—the right side ran on a standard kerosene engine—making it the largest aircraft flown on hydrogen to date. Val Miftakhov, founder and CEO of ZeroAvia, the California startup behind the 10-minute test flight in Gloucestershire, England, called it a “historical day for sustainable aviation.”
Los Angeles–based Universal Hydrogen plans to test a 50-seat hydrogen-powered aircraft by the end of February. Both companies promise commercial flights of retrofitted turboprop aircraft by 2025. French aviation giant Airbus is going bigger with a planned 2026 demonstration flight of its iconic A380 passenger airplane, which will fly using hydrogen fuel cells and by burning hydrogen directly in an engine. And Rolls Royce is making headway on aircraft engines that burn pure hydrogen.
The aviation industry, responsible for some 2.5 percent of global carbon emissions, has committed to net-zero emissions by 2050. Getting there will require several routes, including sustainable fuels, hybrid-electric engines, and battery-electric aircraft.
Hydrogen is another potential route. Whether used to make electricity in fuel cells or burned in an engine, it combines with oxygen to emit water vapor. If green hydrogen scales up for trucks and ships, it could be a low-cost fuel without the environmental issues of batteries.
Flying on hydrogen brings storage and aircraft-certification challenges, but aviation companies are doing the groundwork now for hydrogen flight by 2035. “Hydrogen is headed off to the sky, and we’re going to take it there,” says Amanda Simpson, vice president for research and technology at Airbus Americas.
The most plentiful element, hydrogen is also the lightest—key for an industry fighting gravity—packing three times the energy of jet fuel by weight. The problem with hydrogen is its volume. For transport, it has to be stored in heavy tanks either as a compressed high-pressure gas or a cryogenic liquid.
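The "three times the energy by weight" claim can be checked against standard lower heating values (the values below are common textbook figures we supply for illustration, not numbers from the article):

```python
# Lower heating values (specific energy), assumed textbook figures.
hydrogen_mj_per_kg = 120  # hydrogen
jet_fuel_mj_per_kg = 43   # Jet A kerosene

ratio = hydrogen_mj_per_kg / jet_fuel_mj_per_kg
print(f"{ratio:.1f}x the energy of jet fuel per kilogram")  # ~2.8x
```

The catch, as the paragraph notes, is volumetric: that same kilogram of hydrogen occupies far more space than kerosene, forcing heavy high-pressure or cryogenic tanks.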
ZeroAvia is using compressed hydrogen gas, since it is already approved for road transport. Its test airplane had two hydrogen fuel cells and tanks sitting inside the cabin, but the team is now thinking creatively about a compact system with minimal changes to aircraft design to speed up certification in the United States and Europe. The fuel cells’ added weight could reduce flying range, but “that’s not a problem, because aircraft are designed to fly much further than they’re used,” says vice president of strategy James McMicking.
The company has backing from investors that include Bill Gates and Jeff Bezos; partnerships with British Airways and United Airlines; and 1,500 preorders for its hydrogen-electric power-train system, half of which are for smaller, 400-kilometer-range 9- to 19-seaters.
By 2027, ZeroAvia plans to convert larger, 70-seater turboprop aircraft with twice the range, used widely in Europe. The company is developing 5-MW electric motors for those, and it plans to switch to more energy-dense liquid hydrogen to save space and weight. The fuel is novel for the aviation industry and could require a longer regulatory approval process, McMicking says.
Next will come a 10-MW power train for aircraft with 100 to 150 seats, “the workhorses of the industry,” he says. Those planes—think Boeing 737—are responsible for 60 percent of aviation emissions. Making a dent in those with hydrogen will require much more efficient fuel cells. ZeroAvia is working on proprietary high-temperature fuel cells for that, McMicking says, with the ability to reuse the large amounts of waste heat generated. “We have designs and a technology road map that takes us into jet-engine territory for power,” he says.
Universal Hydrogen, which counts Airbus, GE Aviation, and American Airlines among its strategic investors, is placing bets on liquid hydrogen. The startup, “a hydrogen supply and logistics company at our core,” wants to ensure a seamless delivery network for hydrogen aviation as it catches speed, says founder and CEO Paul Eremenko. The company sources green hydrogen, turns it into liquid, and puts it in relatively low-tech insulated aluminum tanks that it will deliver via road, rail, or ship. “We want them certified by the Federal Aviation Administration for 2025, which means they can’t be a science project,” he says.
The cost of green hydrogen is expected to be on par with kerosene by 2025, Eremenko says. But “there’s nobody out there with an incredible hydrogen-airplane solution. It’s a chicken-and-egg problem.”
To crack it, Universal Hydrogen partnered with leading fuel-cell-maker Plug Power to develop a few thousand conversion kits for regional turboprop airplanes. The kits swap the engine in its streamlined housing (also known as nacelle) for a fuel-cell stack, power electronics, and a 2-MW electric motor. While the company’s competitors use batteries as buffers during takeoff, Eremenko says Universal uses smart algorithms to manage fuel cells, so they can ramp up and respond quickly. “We are the Nespresso of hydrogen,” he says. “We buy other people’s coffee, put it into capsules, and deliver to customers. But we have to build the first coffee machine. We’re the only company incubating the chicken and egg at the same time.”
This rendering of an Airbus A380 demonstrator flight (presently slated for 2026) reveals current designs on an aircraft that’s expected to fly using fuel cells and by burning hydrogen directly in the engine. Airbus
Fuel cells have a few advantages over a large central engine. They allow manufacturers to spread out smaller propulsion motors over an aircraft, giving them more design freedom. And because there are no high-temperature moving parts, maintenance costs can be lower. For long-haul aircraft, however, the weight and complexity of high-power fuel cells make hydrogen-combustion engines appealing.
Airbus is considering both fuel-cell and combustion propulsion for its ZEROe hydrogen aircraft system. It has partnered with German automotive fuel-cell-maker Elring Klinger and, for direct combustion engines, with CFM International, a joint venture between GE Aviation and Safran. Burning liquid hydrogen in today’s engines is still expected to require slight modifications, such as a shorter combustion chamber and better seals.
Airbus is also evaluating hybrid propulsion concepts with a hydrogen-engine-powered turbine and a hydrogen-fuel-cell-powered motor on the same shaft, says Simpson, of Airbus Americas. “Then you can optimize it so you use both propulsion systems for takeoff and climb, and then turn one off for cruising.”
The company isn’t limiting itself to simple aircraft redesign. Hydrogen tanks could be stored in a cupola on top of the plane, pods under the wings, or a large tank at the back, Simpson says. Without liquid fuel in the wings, as in traditional airplanes, she says, “you can optimize wings for aerodynamics, make them thinner or longer. Or maybe a blended-wing body, which could be very different. This opens up the opportunity to optimize aircraft for efficiency.” Certification for such new aircraft could take years, and Airbus isn’t expecting commercial flights until 2035.
Conventional aircraft made today will be around in 2050 given their 25- to 30-year life-span, says Robin Riedel, an analyst at McKinsey & Co. Sustainable fuels are the only green option for those. He says hydrogen could play a role there, through “power-to-liquid technology, where you can mix hydrogen and captured carbon dioxide to make aviation fuel.”
Even then, Riedel thinks hydrogen will likely be a small part of aviation’s sustainability solution until 2050. “By 2070, hydrogen is going to play a much bigger role,” he says. “But we have to get started on hydrogen now.” The money that Airbus and Boeing are putting into hydrogen is a small fraction of aerospace, he says, but big airlines investing in hydrogen companies or placing power-train orders “shows there is desire.”
The aviation industry has to clean up if it is to grow, Simpson says. Biofuels are a stepping-stone, because they reduce only carbon emissions, not other harmful ones. “If we’re going to move towards clean aviation, we have to rethink everything from scratch and that’s what ZEROe is doing,” she says. “This is an opportunity to make not an evolutionary change but a truly revolutionary one.”
This article appears in the April 2023 print issue as “Hydrogen-Powered Flight Cleared for Takeoff.”
This sponsored article is brought to you by COMSOL.
History teaches that the Industrial Revolution began in England in the mid-18th century. While that era of sooty foundries and mills is long past, manufacturing remains essential — and challenging. One promising way to meet modern industrial challenges is by using additive manufacturing (AM) processes, such as powder bed fusion and other emerging techniques. To fulfill its promise of rapid, precise, and customizable production, AM demands more than just a retooling of factory equipment; it also calls for new approaches to factory operation and management.
That is why Britain’s Manufacturing Technology Centre (MTC) has enhanced its in-house metal powder bed fusion AM facility with a simulation model and app to help factory staff make informed decisions about its operation. The app, built using the Application Builder in the COMSOL Multiphysics software, shows the potential for pairing a full-scale AM factory with a so-called “digital twin” of itself.
“The model helps predict how heat and humidity inside a powder bed fusion factory may affect product quality and worker safety,” says Adam Holloway, a technology manager within the MTC’s modeling team. “When combined with data feeds from our facility, the app helps us integrate predictive modeling into day-to-day decision-making.” The MTC project demonstrates the benefits of placing simulation directly into the hands of today’s industrial workforce and shows how simulation could help shape the future of manufacturing.
“We’re trying to present the findings of some very complex calculations in a simple-to-understand way. By creating an app from our model, we can empower staff to run predictive simulations on laptops during their daily shifts.”
—Adam Holloway, MTC Technology Manager
To help modern British factories keep pace with the world, the MTC promotes high-value manufacturing throughout the United Kingdom. The MTC is based in the historic English industrial city of Coventry (Figure 2), but its focus is solely on the future. That is why the team has committed significant human and technical resources to its National Centre for Additive Manufacturing (NCAM).
“Adopting AM is not just about installing new equipment. Our clients are also seeking help with implementing the digital infrastructure that supports AM factory operations,” says Holloway. “Along with enterprise software and data connectivity, we’re exploring how to embed simulation within their systems as well.”
The NCAM’s Digital Reconfigurable Additive Manufacturing for Aerospace (DRAMA) project provides a valuable venue for this exploration. Developed in concert with numerous manufacturers, the DRAMA initiative includes the new powder bed fusion AM facility mentioned previously. With that mini factory as DRAMA’s stage, Holloway and his fellow simulation specialists play important roles in making its production of AM aerospace components a success.
What makes a manufacturing process “additive”, and why are so many industries exploring AM methods? In the broadest sense, an additive process is one where objects are created by adding material layer by layer, rather than removing it or molding it. A reductive or subtractive process for producing a part may, for example, begin with a solid block of metal that is then cut, drilled, and ground into shape. An additive method for making the same part, by contrast, begins with empty space! Loose or soft material is then added to that space (under carefully controlled conditions) until it forms the desired shape. That pliable material must then be solidified into a durable finished part.
Different materials demand different methods for generating and solidifying additive forms. For example, common 3D printers sold to consumers produce objects by unspooling warm plastic filament, which bonds to itself and becomes harder as it cools. By contrast, the metal powder bed fusion process (Ref. 1) begins with, as its name suggests, a powdered metal which is then melted by applied heat and re-solidified when it cools. A part produced via the metal powder bed fusion process can be seen in Figure 3.
“The market opportunities for AM methods have been understood for a long time, but there have been many obstacles to large-scale adoption,” Holloway says. “Some of these obstacles can be overcome during the design phase of products and AM facilities. Other issues, such as the impact of environmental conditions on AM production, must be addressed while the facility is operating.”
For instance, maintaining careful control of heat and humidity is an essential task for the DRAMA team. “The metal powder used for the powder bed fusion process (Figure 4) is highly sensitive to external conditions,” says Holloway. “This means it can begin to oxidize and pick up ambient moisture even while it sits in storage, and those processes will continue as it moves through the facility. Exposure to heat and moisture will change how it flows, how it melts, how it picks up an electric charge, and how it solidifies,” he says. “All of these factors can affect the resulting quality of the parts you’re producing.”
Careless handling of powdered metal is not just a threat to product quality. It can threaten the health and safety of workers as well. “The metal powder used for AM processes is flammable and toxic, and as it dries out, it becomes even more flammable,” Holloway says. “We need to continuously measure and manage humidity levels, as well as how loose powder propagates throughout the facility.”
To maintain proper atmospheric conditions, a manufacturer could augment its factory’s ventilation with a full climate control system, but that could be prohibitively expensive. The NCAM estimated that it would cost nearly half a million pounds to add climate control to its relatively modest facility. But what if it could adequately manage heat and humidity without adding such a complicated system?
Perhaps using multiphysics simulation for careful process management could provide a cost-effective alternative. “As part of the DRAMA program, we created a model of our facility using the computational fluid dynamics (CFD) capabilities of the COMSOL software. Our model (Figure 5) uses the finite element method to solve partial differential equations describing heat transfer and fluid flow across the air domain in our facility,” says Holloway. “This enabled us to study how environmental conditions would be affected by multiple variables, from the weather outside, to the number of machines operating, to the way machines were positioned inside the shop. A model that accounts for those variables helps factory staff adjust ventilation and production schedules to optimize conditions,” he explains.
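The full MTC model solves coupled PDEs, but the basic trade-off it captures (more machines add heat, more ventilation carries it away) can be sketched with a lumped steady-state energy balance. The parameters below are invented for illustration and are not MTC or COMSOL values.

```python
# Toy lumped model (not the MTC's COMSOL model): steady-state shop air
# temperature as a function of machine heat load and ventilation rate.

AIR_DENSITY = 1.2  # kg/m^3
AIR_CP = 1005.0    # J/(kg*K), specific heat capacity of air

def steady_state_temp_c(outdoor_temp_c, machines_on,
                        heat_per_machine_w=2000.0,
                        ventilation_m3_per_s=3.0):
    """Energy balance: machine heat in equals ventilation heat out."""
    heat_in_w = machines_on * heat_per_machine_w
    # The ventilation air mass flow removes heat in proportion to the
    # indoor/outdoor temperature difference.
    mass_flow = AIR_DENSITY * ventilation_m3_per_s
    delta_t = heat_in_w / (mass_flow * AIR_CP)
    return outdoor_temp_c + delta_t

# More machines raise the temperature; more ventilation lowers it.
print(steady_state_temp_c(15.0, machines_on=4))
print(steady_state_temp_c(15.0, machines_on=4, ventilation_m3_per_s=6.0))
```

A finite-element CFD model replaces this single well-mixed air node with a full spatial field, which is what lets the MTC team study machine placement, not just machine count.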
The DRAMA team made their model more accessible by building a simulation app of it with the Application Builder in COMSOL Multiphysics (Figure 6). “We’re trying to present the findings of some very complex calculations in a simple-to-understand way,” Holloway explains. “By creating an app from our model, we can empower staff to run predictive simulations on laptops during their daily shifts.”
The app user can define relevant boundary conditions for the beginning of a factory shift and then make ongoing adjustments. Over the course of a shift, heat and humidity levels will inevitably fluctuate. Perhaps factory staff should alter the production schedule to maintain part quality, or maybe they just need to open doors and windows to improve ventilation. Users can change settings in the app to test the possible effects of actions like these. For example, Figure 8 presents isothermal surface plots that show the effect that opening the AM machines’ build chambers has on air temperature, while Figure 9 shows how airflow is affected by opening the facility doors.
While the current app is an important step forward, it does still require workers to manually input relevant data. Looking ahead, the DRAMA team envisions something more integral, and therefore, more powerful: a “digital twin” for its AM facility. A digital twin, as described by Ed Fontes in a 2019 post on the COMSOL Blog (Ref. 2), is “a dynamic, continuously updated representation of a real physical product, device, or process.” It is important to note that even the most detailed model of a system is not necessarily its digital twin.
“To make our factory environment model a digital twin, we’d first provide it with ongoing live data from the actual factory,” Holloway explains. “Once our factory model was running in the background, it could adjust its forecasts in response to its data feeds and suggest specific actions based on those forecasts.”
“We want to integrate our predictive model into a feedback loop that includes the actual factory and its staff. The goal is to have a holistic system that responds to current factory conditions, uses simulation to make predictions about future conditions, and seamlessly makes self-optimizing adjustments based on those predictions,” Holloway says. “Then we could truly say we’ve built a digital twin for our factory.”
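The ingest-forecast-suggest loop Holloway describes can be sketched in a few lines. The humidity limit, the naive linear forecast, and the suggested actions below are all invented for illustration; a real twin would run the validated simulation model where the extrapolation sits.

```python
# Hypothetical sketch of a digital-twin feedback loop: live sensor
# readings update a background model, which forecasts conditions and
# suggests actions. Thresholds and actions are invented.

from dataclasses import dataclass

@dataclass
class Reading:
    temp_c: float
    humidity_pct: float

class FactoryTwin:
    HUMIDITY_LIMIT = 45.0  # illustrative safe limit for metal powder

    def __init__(self):
        self.history = []

    def ingest(self, reading):
        self.history.append(reading)

    def forecast_humidity(self, hours_ahead):
        # Naive linear extrapolation from the last two readings; a real
        # twin would invoke the validated CFD model here instead.
        if len(self.history) < 2:
            return self.history[-1].humidity_pct
        prev, last = self.history[-2], self.history[-1]
        trend = last.humidity_pct - prev.humidity_pct
        return last.humidity_pct + trend * hours_ahead

    def suggest_action(self):
        if self.forecast_humidity(hours_ahead=2) > self.HUMIDITY_LIMIT:
            return "increase ventilation / reschedule powder handling"
        return "no action needed"

twin = FactoryTwin()
twin.ingest(Reading(temp_c=21.0, humidity_pct=40.0))
twin.ingest(Reading(temp_c=21.5, humidity_pct=42.0))
print(twin.suggest_action())
```

The point of the structure is the closed loop: data feeds update the model, and the model's forecasts feed back into factory decisions.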
As an intermediate step toward building a full factory-level digital twin, the DRAMA simulation app has already proven its worth. “Our manufacturing partners may already see how modeling can help with planning an AM facility, but not really understand how it can help with operation,” Holloway says. “We’re showing the value of enabling a line worker to open up the app, enter in a few readings or import sensor data, and then quickly get a meaningful forecast of how a batch of powder will behave that day.”
Beyond its practical insights for manufacturers, the overall project may offer a broader lesson as well: By pairing its production line with a dynamic simulation model, the DRAMA project has made the entire operation safer, more productive, and more efficient. The DRAMA team has achieved this by deploying the model where it can do the most good — into the hands of the people working on the factory floor.
At Moffett Field in Mountain View, Calif., Lighter Than Air (LTA) Research is floating a new approach to a technology that saw its rise and fall a century ago: airships. Although airships have long since been supplanted by planes, LTA, which was founded in 2015 by CEO Alan Weston, believes that through a combination of new materials, better construction techniques, and technological advancements, airships are poised to—not reclaim the skies, certainly—but find a new niche.
Although airships never died off entirely—the Goodyear blimps, familiar to sports fans, are proof of that—the industry was already in decline by 1937, the year of the Hindenburg disaster. By the end of World War II, airships couldn’t compete with the speed airplanes offered, and they required larger crews. Today, the airships that still linger serve primarily for advertising and sightseeing.
LTA’s Pathfinder 1 carries bigger dreams than hovering over a sports stadium, however. The company sees a natural fit for airships in humanitarian and relief missions. According to Carl Taussig, LTA’s chief technical officer, airships can stay aloft for long periods (useful when conditions on the ground aren’t ideal), have a long range, and can carry significant payloads.
Pathfinder’s cigar-shaped envelope is just over 120 meters in length and 20 meters in diameter. While that dwarfs Goodyear’s current, 75-meter Wingfoot One, it’s still only half the length of the Hindenburg. LTA expects Pathfinder 1 to carry approximately 4 tonnes of cargo, in addition to its crew, water ballast, and fuel. The airship will have a top speed of 65 knots, or about 120 kilometers per hour—on par with the Hindenburg—with a sustained cruise speed of 35 to 40 knots (65 to 75 km/h).
It may not seem much of an advance to be building an airship that flies no faster than the Hindenburg. But Pathfinder 1 carries a lot of new tech that LTA is betting will prove key to an airship resurgence.
For one, airships used to be constructed around riveted aluminum girders, which provided the highest strength-to-weight ratio available at the time. Instead, LTA will be using carbon-fiber tubes attached to titanium hubs. As a result, Pathfinder 1’s primary structure will be both stronger and lighter.
Pathfinder 1’s outer covering is also a step up from past generations. Airships like the 1930s’ Graf Zeppelin had coverings made out of doped cotton canvas. The dope painted on the fabric increased its strength and resiliency. But canvas is still canvas. LTA has instead built its outer coverings out of a three-layer laminate of synthetics. The outermost layer is DuPont’s Tedlar, which is a polyvinyl fluoride. The middle layer is a loose weave of fire-retardant aramid fibers. The inner layer is polyester. “It’s very similar to what’s used in a lot of racing sailboats,” says Taussig. “We needed to modify that material to make it fire resistant and change a little bit about its structural performance.”
But neither the materials science nor the manufacturing advances will take primary credit for LTA’s looked-for success, according to Taussig—instead, it’s the introduction of electronics. “Everything’s electric on Pathfinder,” he says. “All the actuation, all the propulsion, all the actual power is all electrically generated. It’s a fully electric fly-by-wire aircraft, which is not something that was possible 80 years ago.” Pathfinder 1 has 12 electric motors for propulsion, as well as four tail fins with steering rudders controlled by its fly-by-wire system. (During initial test flights, the airship will be powered by two reciprocating aircraft engines).
There’s one other piece of equipment making an appearance on Pathfinder 1 that wasn’t available 80 years ago: lidar. Installed at the top of each of Pathfinder 1’s helium gas cells is an automotive-grade lidar. “The lidar can give us a point cloud showing the entire internal hull of that gas cell,” says Taussig, which can then be used to determine the gas cell’s volume accurately. In flight, the airship’s pilots can use that information, as well as data about the helium’s purity, pressure, and temperature, to better keep the craft pitched properly and to avoid extra stress on the internal structure during flight.
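Once the lidar yields a cell volume, combining it with the purity, pressure, and temperature readings gives the cell's net lift via the ideal gas law and Archimedes' principle. The sketch below is illustrative only and is not LTA's flight software; the example inputs are invented.

```python
# Illustrative sketch: estimate the net lift of a helium gas cell from
# the lidar-derived volume plus purity, pressure, and temperature.

R = 8.314          # J/(mol*K), universal gas constant
M_HELIUM = 0.004   # kg/mol
M_AIR = 0.029      # kg/mol (air also stands in for impurities here)

def gas_density(molar_mass, pressure_pa, temp_k):
    """Ideal-gas density: rho = p * M / (R * T)."""
    return pressure_pa * molar_mass / (R * temp_k)

def net_lift_kg(volume_m3, purity, pressure_pa=101325.0, temp_k=288.0):
    """Buoyant force minus lifting-gas weight, per Archimedes."""
    rho_air = gas_density(M_AIR, pressure_pa, temp_k)
    rho_cell = (purity * gas_density(M_HELIUM, pressure_pa, temp_k)
                + (1 - purity) * rho_air)
    return volume_m3 * (rho_air - rho_cell)

# A hypothetical 1,000 m^3 cell of 98%-pure helium at sea level:
print(f"{net_lift_kg(1000.0, purity=0.98):.0f} kg of lift")
```

Tracking how this figure shifts with temperature and purity across many cells is what lets the pilots manage pitch and structural loads.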
Although LTA’s initial focus is on humanitarian applications, there are other areas where airships might shine one day. “An airship is kind of a ‘tweener,’ in between sea cargo and air freight,” says Taussig. Being fully electric, Pathfinder 1 is also greener than traditional air- or sea-freight options.
After completing Pathfinder 1’s construction late in 2022, LTA plans to conduct a series of ground tests on each of the airship’s systems in the first part of 2023. Once the team is satisfied with those tests, they’ll move to tethered flight tests and finally untethered flight tests over San Francisco’s South Bay later in the year.
The company will also construct an approximately 180-meter-long airship, Pathfinder 3, at its Akron Airdock facility in Ohio. Pathfinder 3 won’t be ready to fly in 2023, but its development shows that LTA’s aspirations for an airship renaissance are more than just hot air.
This article appears in the January 2023 print issue as “The Return of the Airship.”
Top Tech 2023: A Special Report
Preview exciting technical developments for the coming year.
Can This Company Dominate Green Hydrogen?
Fortescue will need more electricity-generating capacity than France.
Pathfinder 1 could herald a new era for zeppelins
A New Way to Speed Up Computing
Blue microLEDs bring optical fiber to the processor.
The Personal-Use eVTOL Is (Almost) Here
Opener’s BlackFly is a pulp-fiction fever dream with wings.
Baidu Will Make an Autonomous EV
Its partnership with Geely aims at full self-driving mode.
China Builds New Breeder Reactors
The power plants could also make weapons-grade plutonium.
Economics Drives a Ray-Gun Resurgence
Lasers should be cheap enough to use against drones.
A Cryptocurrency for the Masses or a Universal ID?
What Worldcoin’s killer app will be is not yet clear.
The company’s Condor chip will boast more than 1,000 qubits.
Vagus-nerve stimulation promises to help treat autoimmune disorders.
New satellites can connect directly to your phone.
The E.U.’s first exascale supercomputer will be built in Germany.
A dozen more tech milestones to watch for in 2023.
A rocket built by Indian startup Skyroot has become the country’s first privately developed launch vehicle to reach space, following a successful maiden flight earlier today. The suborbital mission is a major milestone for India’s private space industry, say experts, though more needs to be done to nurture the fledgling sector.
The Vikram-S rocket, named after the founder of the Indian space program, Vikram Sarabhai, lifted off from the Indian Space Research Organization’s (ISRO) Satish Dhawan Space Centre, on India’s east coast, at 11:30 a.m. local time (1 a.m. eastern time). It reached a peak altitude of 89.5 kilometers (55.6 miles), crossing the 80-km line that NASA counts as the boundary of space, but falling just short of the 100 km recognized by the Fédération Aéronautique Internationale.
In the longer run, India’s space industry has ambitions of capturing a significant chunk of the global launch market.
Pawan Kumar Chandana, cofounder of the Hyderabad-based startup, says the success of the launch is a major victory for India’s nascent space industry, but the buildup to the mission was nerve-racking. “We were pretty confident on the vehicle, but, as you know, rockets are very notorious for failure,” he says. “Especially in the last 10 seconds of countdown, the heartbeat was racing up. But once the vehicle had crossed the launcher and then went into the stable trajectory, I think that was the moment of celebration.”
At just 6 meters (20 feet) long and weighing only around 550 kilograms (0.6 tonnes), the Vikram-S is not designed for commercial use. Today’s mission, called Prarambh, which means “the beginning” in Sanskrit, was designed to test key technologies that will be used to build the startup’s first orbital rocket, the Vikram I. The rocket will reportedly be capable of lofting as much as 480 kg to a 500-km altitude and is slated for a maiden launch next October.
Skyroot cofounder Pawan Kumar Chandana standing in front of the Vikram-S rocket at the Satish Dhawan Space Centre, on the east coast of India.Skyroot
In particular, the mission has validated Skyroot’s decision to go with a novel all-carbon fiber structure to cut down on weight, says Chandana. It also allowed the company to test 3D-printed thrusters, which were used for spin stabilization in Vikram-S but will power the upper stages of its later rockets. Perhaps the most valuable lesson, though, says Chandana, was the complexity of interfacing Skyroot's vehicle with ISRO’s launch infrastructure. “You can manufacture the rocket, but launching it is a different ball game,” he says. “That was a great learning experience for us and will really help us accelerate our orbital vehicle.”
Skyroot is one of several Indian space startups looking to capitalize on recent efforts by the Indian government to liberalize its highly regulated space sector. Due to the dual-use nature of space technology, ISRO has historically had a government-sanctioned monopoly on most space activities, says Rajeswari Pillai Rajagopalan, director of the Centre for Security, Strategy and Technology at the Observer Research Foundation think tank, in New Delhi. While major Indian engineering players like Larsen & Toubro and Godrej Aerospace have long supplied ISRO with components and even entire space systems, the relationship has been one of a supplier and vendor, she says.
But in 2020, Finance Minister Nirmala Sitharaman announced a series of reforms to allow private players to build satellites and launch vehicles, carry out launches, and provide space-based services. The government also created the Indian National Space Promotion and Authorisation Centre (InSpace), a new agency designed to act as a link between ISRO and the private sector, and affirmed that private companies would be able to take advantage of ISRO’s facilities.
The first launch of a private rocket from an ISRO spaceport is a major milestone for the Indian space industry, says Rajagopalan. “This step itself is pretty crucial, and it’s encouraging to other companies who are looking at this with a lot of enthusiasm and excitement,” she says. But more needs to be done to realize the government’s promised reforms, she adds. The Space Activities Bill that is designed to enshrine the country’s space policy in legislation has been languishing in draft form for years, and without regulatory clarity, it’s hard for the private sector to justify significant investments. “These are big, bold statements, but these need to be translated into actual policy and regulatory mechanisms,” says Rajagopalan.
Skyroot’s launch undoubtedly signals the growing maturity of India’s space industry, says Saurabh Kapil, associate director in PwC’s space practice. “It’s a critical message to the Indian space ecosystem, that we can do it, we have the necessary skill set, we have those engineering capabilities, we have those manufacturing or industrialization capabilities,” he says.
The Vikram-S rocket blasting off from the Satish Dhawan Space Centre, on the east coast of India.Skyroot
However, crossing this technical milestone is only part of the challenge, he says. The industry also needs to demonstrate a clear market for the kind of launch vehicles that companies like Skyroot are building. While private players are showing interest in launching small satellites for applications like agriculture and infrastructure monitoring, he says, these companies will be able to build sustainable businesses only if they are allowed to compete for more lucrative government and defense-sector contracts.
In the longer run, though, India’s space industry has ambitions of capturing a significant chunk of the global launch market, says Kapil. ISRO has already developed a reputation for both reliability and low cost—its 2014 mission to Mars cost just US $74 million, one-ninth the cost of a NASA Mars mission launched the same week. That is likely to translate to India’s private space industry, too, thanks to a considerably lower cost of skilled labor, land, and materials compared with those of other spacefaring nations, says Kapil. “The optimism is definitely there that because we are low on cost and high on reliability, whoever wants to build and launch small satellites is largely going to come to India,” he says.
SEMrush and Ahrefs are among the most popular tools in the SEO industry. Both companies have been in business for years and serve thousands of customers each month.
If you're a professional SEO or trying to do digital marketing on your own, at some point you'll likely consider using a tool to help with your efforts. Ahrefs and SEMrush are two names that will likely appear on your shortlist.
In this guide, I'm going to help you learn more about these SEO tools and how to choose the one that's best for your purposes.
What is SEMrush?
SEMrush is a popular SEO tool with a wide range of features; it's best known as a competitor-research service for online marketers. SEMrush's Keyword Magic Tool offers more than 20 billion keywords for Google, in a constantly updated database that the company bills as the largest available.
The program traces its roots to 2007, when it began as SeoQuake, a small Firefox extension.
What is Ahrefs?
Ahrefs is a leading SEO platform that offers a set of tools to grow your search traffic, research your competitors, and monitor your niche. The company was founded in 2010 and has become a popular choice among SEO tools. Ahrefs has a keyword index of over 10.3 billion keywords and offers accurate, extensive backlink data, updated every 15 to 30 minutes and drawn from what the company bills as the world's most extensive backlink index.
Direct Comparisons: Ahrefs vs SEMrush
Now that you know a little more about each tool, let's take a look at how they compare. I'll analyze each tool to see how they differ in interfaces, keyword research resources, rank tracking, and competitor analysis.
Ahrefs and SEMrush both offer comprehensive information and quick metrics regarding your website's SEO performance. However, Ahrefs takes a bit more of a hands-on approach to getting your account fully set up, whereas SEMrush's simpler dashboard can give you access to the data you need quickly.
In this section, we provide a brief overview of the elements found on each dashboard and highlight the ease with which you can complete tasks.
The Ahrefs dashboard is less cluttered than that of SEMrush, and its primary menu is at the very top of the page, with a search bar designed only for entering URLs.
Additional features of the Ahrefs platform include:
When you log into the SEMrush Tool, you will find four main modules. These include information about your domains, organic keyword analysis, ad keyword, and site traffic.
You'll also find some other options like
Both Ahrefs and SEMrush have user-friendly dashboards, but Ahrefs is less cluttered and easier to navigate. On the other hand, SEMrush offers dozens of extra tools, including access to customer support resources.
When deciding on which dashboard to use, consider what you value in the user interface, and test out both.
If you're looking to track your website's search engine ranking, rank tracking features can help. You can also use them to monitor your competitors.
Let's take a look at Ahrefs vs. SEMrush to see which tool does a better job.
The Ahrefs Rank Tracker is simpler to use. Just type in the domain name and keywords you want to analyze, and it spits out a report showing you the search engine results page (SERP) ranking for each keyword you enter.
Rank Tracker looks at the ranking performance of keywords and compares them with the top rankings for those keywords. Ahrefs also offers:
You'll see metrics that help you understand your visibility, traffic, average position, and keyword difficulty.
It gives you an idea of whether a keyword would be profitable to target or not.
SEMRush offers a tool called Position Tracking. This tool is a project tool—you must set it up as a new project. Below are a few of the most popular features of the SEMrush Position Tracking tool:
All subscribers receive regular data updates and mobile search rankings.
The platform provides opportunities to track several SERP features, including Local tracking.
Intuitive reports allow you to track statistics for the pages on your website, as well as the keywords used in those pages.
Identify pages that may be competing with each other using the Cannibalization report.
Ahrefs is a more user-friendly option. It takes seconds to enter a domain name and keywords. From there, you can quickly decide whether to proceed with that keyword or figure out how to rank better for other keywords.
SEMrush allows you to check your mobile rankings and receive ranking updates daily, which Ahrefs does not offer. SEMrush also offers social media rankings, a tool you won't find within the Ahrefs platform. Both are good; let me know in the comments which one you like.
Keyword research is closely related to rank tracking, but it's used for deciding which keywords you plan on using for future content rather than those you use now.
When it comes to SEO, keyword research is the most important thing to consider when comparing the two platforms.
The Ahrefs Keyword Explorer provides you with thousands of keyword ideas and filters search results based on the chosen search engine.
Ahrefs supports several features, including:
SEMrush's Keyword Magic Tool has over 20 billion keywords for Google. You can type in any keyword you want, and a list of suggested keywords will appear.
The Keyword Magic Tool also lets you:
Both of these tools offer keyword research features and allow users to break down complicated tasks into something that can be understood by beginners and advanced users alike.
If you're interested in keyword suggestions, SEMrush appears to have more keyword suggestions than Ahrefs does. It also continues to add new features, like the Keyword Gap tool and SERP Questions recommendations.
Both platforms offer competitor analysis tools, eliminating the need to come up with keywords off the top of your head. Each tool helps you find the keywords your competitors rank for, so you know which ones are likely to be valuable to you.
Ahrefs' domain comparison tool lets you compare up to five websites (your website and four competitors) side by side. It also shows you how your site ranks against the others on metrics such as backlinks, domain ratings, and more.
Use the Competing Domains section to see a list of your most direct competitors and to explore how many keyword matches you share with them.
To find more information about your competitor, you can look at the Site Explorer and Content Explorer tools and type in their URL instead of yours.
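Under the hood, the keyword-gap idea both tools implement is set arithmetic over the keywords each site ranks for. The sketch below is generic and uses invented data; it is not the Ahrefs or SEMrush API.

```python
# Generic sketch of a keyword-gap analysis (invented data, not a
# real tool's API): compare the keyword sets two sites rank for.

your_keywords = {"seo tools", "rank tracker", "backlink checker"}
competitor_keywords = {"seo tools", "keyword research",
                       "rank tracker", "site audit"}

shared = your_keywords & competitor_keywords  # both sites rank for these
gaps = competitor_keywords - your_keywords    # they rank, you don't

print(f"{len(shared)} shared keywords: {sorted(shared)}")
print(f"{len(gaps)} gap keywords to target: {sorted(gaps)}")
```

The "gap" set is the interesting one: keywords a competitor ranks for that you don't are candidates for new content.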
SEMrush provides a variety of insights into your competitors' marketing tactics. The platform enables you to research your competitors effectively. It also offers several resources for competitor analysis including:
Traffic Analytics helps you identify where your audience comes from, how they engage with your site, what devices visitors use to view your site, and how your audiences overlap with other websites.
SEMrush's Organic Research examines your website's major competitors and shows their organic search rankings, keywords they are ranking for, and even if they are ranking for any (SERP) features and more.
The Market Explorer search field allows you to type in a domain and lists websites or articles similar to what you entered. Market Explorer also allows users to perform in-depth data analytics on these companies and markets.
SEMrush wins here because it has more tools dedicated to competitor analysis than Ahrefs. However, Ahrefs offers a lot of functionality in this area, too. It takes a combination of both tools to gain an advantage over your competition.
When it comes to keyword data research, it can be hard to decide which tool to choose.
Consider choosing Ahrefs if you:
Consider SEMrush if you:
Both tools are great. Choose the one that meets your requirements, and if you have experience using either Ahrefs or SEMrush, let me know in the comment section which works well for you.