********** ENTERTAINMENT **********
Elliot Page on Juno, Hollywood’s dark side and coming out twice
Sat, 10 Jun 2023 06:00:14 GMT
When the feelgood movie made him an Oscar-nominated star, the strain of hiding who he was almost forced him to quit acting. He explains how opening up about being gay, then trans, saved his life
Elliot Page’s memoir is called Pageboy. At its heart is the story of his transitioning from an Oscar-nominated actress, best known for the wonderful coming-of-age comedy drama Juno, to one of the world’s most high profile trans men. He writes, rather beautifully, about gender dysphoria, top surgery and finally finding himself. But the book is so much more than a tale of transition.
Pageboy is a modern-day Hollywood Babylon, written by a sensitive soul rather than a scandalmonger. Page depicts a film industry even more rancid than we may have suspected. This is a world where it’s not only the Harvey Weinsteins at the top of the pyramid who get to abuse the young and powerless – just about everybody seems to have a go. It’s a world where most people appear to be closeted in one way or another, a world where more acting is done off set than on.
Lust, cruelty and a desperate need for attention ... I thought my old journals would bring back warm memories – in reality they were a document of the unique misery and painful insecurities of adolescence
I own nine of the most devastatingly embarrassing books ever written. They might be the only books that attempt to algebraically prove the existence of God 14 pages after the words, “I don’t want to touch his penis.” Actually, that’s probably not true – the books are my teenage diaries, and there’s nothing unique about the humiliating and exhilarating experience of being a 14-year-old girl.
If there’s one thing in the world I never want to be again and wish I never had to be in the first place, it’s a teenage swot with PE in the morning. When I ask female friends if they think being a teen was uniquely awful, one replies “uniquely awful and uniquely special” – but I don’t think that is something I ever felt. I love to see the magic of girlhood represented in coming-of-age movies such as Lady Bird and Booksmart, but I look back at my own adolescent self and see a floundering fish who hurt and was hurt with little meaning or beauty.
No one wants to see the cast naked any more, so this TV follow-up shuns stripping for comic capers and cost-of-living tragedy. Even better, it actually gives plotlines to the female characters
Television shows that remake films tend to be exercises in pointless nostalgia. Do you remember the movies Fatal Attraction, Dangerous Liaisons and American Gigolo? Yes. Would you like to watch a weird cosplay version of them that goes on for 10 hours and confusingly reshuffles the plot? Um, not really. The Full Monty (from 14 June, Disney+) is the latest entrant in an already tired genre, but it has one up on most of the competition: all the core cast are in that sweet spot where they’re successful enough to be worth rehiring but not so famous they’ve turned the reboot down. That means there’s no need to rejig the story of redundant Sheffield steelworkers who, in 1997, found solace in hard times by forming a Chippendales-style male striptease troupe. We simply return to Sheffield 26 years later, to find the same characters, played by the same actors, living the same lives.
The film had it easy, plot-wise, in that it built towards that heartwarming climactic moment when a sextet of men showed the local community their penises. Those six appendages were the pegs on which were hung serious subtexts about the misery of life in a Thatcher-ravaged, deindustrialised northern England. A quarter of a century on, however, the prospect of the old boys windmilling their hosepipes in housewives’ faces would horrify everyone. So the new Full Monty is fully clothes-on.
Animation has come a long way since 1900, when J. Stuart Blackton created The Enchanted Drawing, the earliest known animated film. The 90-second movie was created using stop-motion techniques, as flat characters, props, and backgrounds were drawn on an easel or made from paper.
Most modern animators rely on computer graphics and visualization techniques to create popular movies and TV shows like Finding Dory, Toy Story, and Paw Patrol. In the 1960s and ’70s, computer science pioneers David Evans and IEEE Life Member Ivan E. Sutherland led the development of many of the technologies animators now use. Their groundbreaking research, conducted at the University of Utah, in Salt Lake City, and at their company, Evans and Sutherland, helped jump-start the computer graphics industry.
A ceremony was held at the university on 24 March to recognize the computer graphics and visualization techniques with an IEEE Milestone. The IEEE Utah Section sponsored the nomination.
Computer graphics began in the 1950s with interactive games and visualization tools designed by the U.S. military to develop technologies for aviation, radar, and rocketry.
Evans and Sutherland, then computer science professors at the University of Utah, wanted to expand on the use of such tools by finding a way for computers to simulate objects and environments. In 1968 they founded Evans and Sutherland, locating the E&S headquarters in the university’s research park.
Many of today’s computer graphics luminaries—including Pixar cofounder Edwin Catmull, Adobe cofounder John Warnock, and Netscape founder Jim Clark, who also founded Silicon Graphics—got their start in the industry as E&S employees or as doctoral students working on research at the company’s facilities.
IEEE Milestone Dedication: Utah Computer Graphics [video]
While at E&S, the employees and students made fundamental contributions to computer graphics processes, says IEEE Fellow Christopher Johnson, a University of Utah computer science professor.
“David Evans, Ivan Sutherland, and their students and colleagues helped change the world,” Johnson says.
“The period from 1968 through 1978 was an extraordinary time for computer graphics,” adds Brian Berg, IEEE Region 6 history chair. “There was a rare confluence of faculty, students, staff, facilities, and resources to support research into computer vision algorithms and hardware that produced remarkable developments in computer graphics and visualization techniques. This research was responsible for the birth of much of continuous-tone computer graphics as we know it today.” Continuous-tone computer graphics have a virtually unlimited range of color and shades of gray.
Evans began his career in 1955 at Bendix—an aviation electronics company in Avon, Ohio—as manager of a project that aimed to develop an early personal computer. He left to join the University of California, Berkeley, as chair of its computer science department. He also headed Berkeley’s research for the Pentagon’s Advanced Research Project Agency (now known as the Defense Advanced Research Projects Agency).
In 1963 Evans became a principal investigator for ARPA’s Project Genie. He helped develop hardware techniques that enabled commercial use of time-shared computer systems.
In 1965 the University of Utah hired him to establish its computer science department after receiving an ARPA grant of US $5 million to investigate how the emerging field of computer graphics could play a role in the country’s technological competitiveness, according to Computer Graphics and Computer Animation.
In 1968 Evans asked Sutherland, a former colleague at Berkeley who was then an associate professor of electrical engineering at Harvard, to join him at the University of Utah, luring him with the promise of starting a company together. Sutherland was already famous in computer graphics circles, having created Sketchpad, the first computer-aided design program, for his Ph.D. thesis in 1963 at MIT.
The two founded E&S almost as soon as Sutherland arrived, and they began working on computer-based simulation systems.
The duo in 1969 developed the line-drawing system displays LDS-1 and LDS-2, the first graphics devices with a processing unit. They then built the E&S Picture System—the next generation of LDS displays.
Those workstations, as they were called, came to be used by most computer-generated-imagery production companies through the 1980s.
E&S also developed computer-based simulation systems for military and commercial training, including the CT5 and CT6 flight simulators.
In addition to hiring employees, E&S welcomed computer science doctoral students from the university to work on their research projects at the company.
“Almost every influential person in the modern computer-graphics community either passed through the University of Utah or came into contact with it in some way,” Robert Rivlin wrote in his book, The Algorithmic Image: Graphic Visions of the Computer Age.
One of the doctoral students was Henri Gouraud, who in 1971 developed an algorithm to simulate the differing effects of light and color across the surface of an object. The Gouraud shading method is still used by creators of video games and cartoons.
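The core of Gouraud’s method can be sketched in a few lines: lighting is evaluated only at a triangle’s vertices, and every interior point simply blends those vertex colors. A minimal illustration in Python — the 2-D triangle, coordinates and per-vertex RGB values are invented for demonstration, and this is not Gouraud’s original formulation:

```python
# Gouraud shading in miniature: compute color per vertex, then linearly
# interpolate across the triangle using barycentric coordinates.

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return u, v, 1.0 - u - v

def gouraud_shade(p, verts, vert_colors):
    """Blend the per-vertex colors at point p inside the triangle."""
    u, v, w = barycentric(p, *verts)
    return tuple(u * ca + v * cb + w * cc
                 for ca, cb, cc in zip(*vert_colors))

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]  # vertex RGB
print(gouraud_shade((0.25, 0.25), tri, colors))  # → (0.5, 0.25, 0.25)
```

Because only the vertices are lit, the interior is cheap to fill — the insight that made smooth-looking surfaces practical on 1970s hardware.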
In 1974 Edwin Catmull, then also a doctoral student at the university, developed the principle of texture mapping, a method for adding complexity to a computer-generated surface. Catmull went on to help found Pixar in 1986 with computer scientist Alvy Ray Smith, an IEEE member. For his work in the industry, Catmull received the 2006 IEEE John von Neumann Medal.
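In its simplest form, texture mapping assigns each surface point a coordinate into a 2-D image and colors the point with the texel found there. A toy nearest-neighbor sketch — the 2×2 checkerboard “image” is invented, and this is an illustration of the principle, not Catmull’s original code:

```python
# Texture mapping in miniature: (u, v) coordinates in [0, 1] index into
# a 2-D image; the surface point takes the image's color at that spot.

def sample_texture(texture, u, v):
    """Nearest-neighbor lookup: map (u, v) in [0, 1] to a texel."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

# A 2x2 checkerboard standing in for a real image.
checker = [["black", "white"],
           ["white", "black"]]
print(sample_texture(checker, 0.1, 0.1))  # → black
print(sample_texture(checker, 0.9, 0.1))  # → white
```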
Doctoral student Bui Tuong Phong in 1973 devised Phong shading, a modeling method that reflects light so computer-generated graphics can look shiny and plasticlike.
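That shiny look comes from the model’s specular term: the highlight’s intensity depends on how closely the mirror reflection of the light direction lines up with the viewer, raised to a “shininess” exponent. A minimal sketch with invented vectors and exponent:

```python
import math

# Specular term of the Phong reflection model: max(0, R·V)^shininess,
# where R is the light direction reflected about the surface normal.

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_specular(light_dir, view_dir, normal, shininess):
    l, v, n = normalize(light_dir), normalize(view_dir), normalize(normal)
    ln = dot(l, n)
    r = tuple(2 * ln * nc - lc for nc, lc in zip(n, l))  # reflect L about N
    return max(0.0, dot(r, v)) ** shininess

# Light and viewer both straight overhead: perfect mirror alignment.
print(phong_specular((0, 0, 1), (0, 0, 1), (0, 0, 1), 32))  # → 1.0
```

A larger shininess exponent tightens the highlight, which is what gives computer-generated surfaces that plastic sheen.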
“As a group, the University of Utah contributed more to the field of knowledge in computer graphics than any of its contemporaries,” Berg wrote in the Milestone proposal. “That fact is made most apparent both in the widespread use of the techniques developed and in the body of awards the innovations garnered.” The awards include several scientific and technical Oscars, an Emmy, and many IEEE medals.
Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.
The Milestone plaque displayed on a granite obelisk outside of the University of Utah’s Merrill engineering building reads:
In 1965 the University of Utah established a Center of Excellence for computer graphics research with Advanced Research Projects Agency (ARPA) funding. In 1968 two professors founded the pioneering graphics hardware company Evans & Sutherland; by 1978, fundamental rendering and visualization techniques disclosed in doctoral dissertations included the Warnock algorithm, Gouraud shading, the Catmull-Rom spline, and the Blinn-Phong reflection model. Alumni-founded companies include Atari, Silicon Graphics, Adobe, Pixar, and Netscape.
A group of researchers from NASA, MIT, and other institutions have achieved the fastest space-to-ground laser-communication link yet, doubling the record they set last year. With data rates of 200 gigabits per second, a satellite could transmit more than 2 terabytes of data—roughly as much as 1,000 high-definition movies—in a single 5-minute pass over a ground station.
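The arithmetic behind those figures is a quick back-of-the-envelope exercise. The sketch below computes raw link capacity only; real passes lose time to link acquisition, weather and retransmissions, which is presumably why the quoted two-terabyte figure is conservative:

```python
# Raw downlink budget for one 5-minute pass at 200 Gb/s.
rate_bps = 200e9          # 200 gigabits per second
pass_s = 5 * 60           # one 5-minute ground-station pass

bits = rate_bps * pass_s
terabytes = bits / 8 / 1e12
hd_movies = terabytes * 1e12 / 2e9   # assuming ~2 GB per HD movie

print(f"{terabytes:.1f} TB per pass, ~{hd_movies:.0f} movies")
# → 7.5 TB per pass, ~3750 movies (upper bound, before overheads)
```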
“The implications are far-reaching because, put simply, more data means more discoveries,” says Jason Mitchell, an aerospace engineer at NASA’s Space Communications and Navigation program.
The new communications link was made possible with the TeraByte InfraRed Delivery (TBIRD) system orbiting about 530 kilometers above Earth’s surface. Launched into space last May, TBIRD achieved downlink rates of up to 100 Gb/s with a ground-based receiver in California by last June. This was 100 times as fast as the quickest Internet speeds in most cities, and more than 1,000 times as fast as radio links traditionally used for communications with satellites.
The fastest data networks on Earth typically rely on laser communications over fiber optics. However, a high-speed laser-based Internet does not exist yet for satellites. Instead, space agencies and commercial satellite operators most commonly use radio to communicate with objects in space. The infrared light that laser communications can employ has a much higher frequency than radio waves, enabling much higher data rates.
“There are satellites currently in orbit limited by the amount of data they are able to downlink, and this trend will only increase as more capable satellites are launched,” says Kat Riesing, an aerospace engineer and a staff member at MIT Lincoln Laboratory on the TBIRD team. “Even a hyperspectral imager—HISUI on the International Space Station—has to send data back to Earth via storage drives on cargo ships due to limitations on downlink rates. TBIRD is a big enabler for missions that collect important data on Earth’s climate and resources, as well as astrophysics applications such as black hole imaging.”
MIT Lincoln Laboratory conceived TBIRD in 2014 as a low-cost, high-speed way to access data on spacecraft. A key way it reduced expenses was by using commercial, off-the-shelf components originally developed for terrestrial use. These include high-rate optical modems developed for fiber telecommunications and high-speed large-volume storage to hold data, Riesing says.
Located onboard NASA’s Pathfinder Technology Demonstrator 3 (PTD-3) satellite, TBIRD was carried into orbit on SpaceX’s Transporter-5 rideshare mission from Cape Canaveral Space Force Station in Florida on 25 May 2022. The PTD-3 satellite is a roughly 12-kilogram CubeSat about the size of two stacked cereal boxes, and its TBIRD payload is no larger than the average tissue box. “Industry’s drive to small, low-power, high-data-rate optical transceivers enabled us to achieve a compact form factor suitable even for small satellites,” Mitchell says.
The development of TBIRD faced a number of challenges. To start with, terrestrial components are not designed to survive the rigors of launching to and operating in space. For example, during a thermal test simulating the extreme temperatures the devices might face in space, the fibers in the optical signal amplifier melted.
The problem was that, when used as originally intended, the atmosphere could help cool the amplifier through convection. When tested in a vacuum, simulating space, the heat that the amplifier generated was trapped. To solve the issue, the researchers worked with the amplifier’s vendor to modify it so that it released heat through conduction instead.
In addition, laser beams from space to Earth can experience distortion from atmospheric effects and weather conditions. This can cause power loss, and in turn data loss, for the beams.
To compensate, the scientists developed their own version of automatic repeat request (ARQ), a protocol for controlling errors in data transmission over a communications link. In this arrangement, the ground terminal uses a low-rate uplink signal to let the satellite know that it has to retransmit any block of data, or frame, that has been lost or damaged. The new protocol lets the ground station tell the satellite which frames it received correctly, so the satellite knows which ones to retransmit and not waste time sending data it doesn’t have to.
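The satellite-side bookkeeping of that scheme can be sketched in a few lines — the frame numbers and loss pattern here are invented for illustration:

```python
# Selective-retransmission ARQ in miniature: the ground station reports
# which frames arrived intact, and the satellite resends only the rest
# rather than replaying the whole transmission.

def frames_to_resend(sent_frames, acked_frames):
    """Satellite side: retransmit only frames the ground did not confirm."""
    acked = set(acked_frames)
    return [f for f in sent_frames if f not in acked]

sent = list(range(10))         # frames 0..9 transmitted during a pass
acked = [0, 1, 2, 4, 5, 8, 9]  # ground reports these arrived intact
print(frames_to_resend(sent, acked))  # → [3, 6, 7]
```

Resending only the damaged frames is what keeps the high-rate downlink from wasting capacity on data the ground already holds.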
Another challenge the scientists faced stemmed from the fact that lasers form much narrower beams than radio transmissions do. For successful data transmission, these beams must be aimed precisely at their receivers. This is often accomplished by mounting the laser on a gimbal. Due to TBIRD’s small size, however, it instead maneuvers the CubeSat carrying it to point it at the ground, using any error signals it receives to correct the satellite’s orientation. This gimbal-less strategy also helped further shrink TBIRD, making it cheaper to launch.
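In cartoon form, that error-driven pointing strategy is a feedback loop: the received error signal nudges the whole spacecraft’s orientation toward the target. The gain and single-axis dynamics below are invented; the real attitude controller is far more involved:

```python
# Gimbal-less pointing sketch: a proportional controller that corrects
# the spacecraft's pointing angle using the error signal from the link.

def point_at_target(angle, target, gain=0.5, steps=20):
    """Iteratively reduce the pointing error by rotating the craft."""
    for _ in range(steps):
        error = target - angle   # error signal reported by the ground
        angle += gain * error    # nudge the whole CubeSat, not a gimbal
    return angle

# Starting 10 degrees off target, the loop converges toward 0.
print(point_at_target(angle=10.0, target=0.0))
```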
TBIRD’s architecture can support multiple channels through wavelength separation to enable higher data rates, Riesing says. This is how TBIRD accomplished a 200-Gb/s downlink on 28 April—by using two 100-Gb/s channels, she explains. “This can scale further on a future mission if the link is designed to support it,” Riesing notes.
The research team’s next step is to explore where to apply this technology in upcoming missions. “This technology is particularly useful for science missions where collecting a lot of data can provide significant benefits,” Riesing says. “One mission concept that is enabled by this is the Event Horizon Explorer mission, which will extend the exciting work of the Event Horizon Telescope in imaging black holes with even higher resolution.”
The scientists also want to explore how to extend this technology to different scenarios, such as geostationary orbit, Riesing says. Moreover, Mitchell says, they are looking at ways to push TBIRD’s capabilities as far away as the moon, in order to support future missions there. The rates under consideration are in the 1- to 5-Gb/s range, which “may not seem like much of an improvement, but remember the moon is roughly 400,000 km away from Earth, which is quite a long distance to cover,” Mitchell says.
The new technology may also find use in high-speed atmospheric data links on the ground. “For example, from building to building, or across inhospitable terrain, such as from mountaintop to mountaintop, where the cost of laying fiber systems could be exorbitant,” Riesing says.
On a gin-clear December day, I’m sitting under the plexiglass bubble of a radically new kind of aircraft. It’s a little past noon at the Byron Airport in northern California; in the distance, a jagged line of wind turbines atop rolling hills marks the Altamont Pass, blades spinning lazily. Above me, a cloudless blue sky beckons.
The aircraft, called BlackFly, is unlike anything else on the planet. Built by a Palo Alto, Calif., startup called Opener, it’s an electric vertical take-off and landing (eVTOL) aircraft with stubby wings fore and aft of the pilot, each with four motors and propellers. Visually, it’s as though an aerial speedster from a 1930s pulp sci-fi story has sprung from the page.
There are a couple of hundred startups designing or flying eVTOLs. But only a dozen or so are making tiny, technologically sophisticated machines whose primary purpose is to provide exhilarating but safe flying experiences to people after relatively minimal training. And in that group, Opener has jumped out to an early lead, having built dozens of aircraft at its facilities in Palo Alto and trained more than a score of people to fly them.
My own route to the cockpit of a BlackFly was relatively straightforward. I contacted the company’s CEO, Ken Karklin, in September 2022, pitched him on the idea of a story and video, and three months later I was flying one of his aircraft.
Well, sort of flying it. My brief flight was so highly automated that I was more passenger than pilot. Nevertheless, I spent about a day and a half before the flight being trained to fly the machine manually, so that I could take control if anything went wrong. For this training, I wore a virtual-reality headset and sat in a chair that tilted and gyrated to simulate flying maneuvers. To “fly” this simulation I manipulated a joystick that was identical to the one in the cockpit of a BlackFly. Opener’s chief operating officer, Kristina L. Menton, and engineer Wyatt Warner took turns patiently explaining the operations of the vehicle and giving me challenging tasks to complete, such as hovering and performing virtual landings in a vicious crosswind.
The BlackFly is entirely controlled by that joystick, which is equipped with a trigger and also topped by a thumb switch. To take off, I squeeze the trigger while simultaneously pushing forward on the switch. The machine leaps into the air with the sound of a million bees, and with a surge of giddy elation I am climbing skyward.
Much more so than an airplane or helicopter, the BlackFly taps into archetypal human yearnings for flight, the kind represented by magic carpets, the flying cars in “The Jetsons,” and even those Mountain Banshees in the movie “Avatar.” I’ve had several unusual experiences in aircraft, including flying on NASA’s zero-gravity-simulating “Vomit Comet,” and being whisked around in a BlackFly was definitely the most absorbing and delightful. Gazing out over the Altamont Pass from an altitude of about 60 meters, I had a feeling of joyous release—from Earth’s gravity and from earthly troubles.
The BlackFly is also a likely harbinger of things to come. Most of the startups developing eVTOLs are building vehicles meant to carry several passengers on commercial runs of less than 50 kilometers. Although the plan is for these to be flown by pilots initially, most of the companies anticipate a day when the flights will be completely automated. So specialized aircraft such as the BlackFly—designed to be registered and operated as “ultralight” aircraft under aviation regulations—could provide mountains of invaluable data on highly and fully automated flying and perhaps even help familiarize people with the idea of flying without a pilot. Indeed, during my flight, dozens of sensors gathered gigabytes of data, to add to the large reservoir Opener has already collected during many hundreds of test flights so far.
As of late February 2023, Opener hadn’t yet announced a retail price or an official commercial release date for the aircraft, which has been under development and testing for more than a decade. I’ll be keeping an eye out for further news of the company. Long after my flight was over I was still savoring the experience, and hoping for another one.
Special thanks to IEEE.tv for collaborating on production of this video.
Non-fungible tokens (NFTs) are among the most popular digital assets today, capturing the attention of cryptocurrency investors, whales and people from around the world. Many find it amazing that some users spend thousands or even millions of dollars on a single NFT-based image of a monkey or other token when anyone can simply take a screenshot for free. So here we share some frequently asked questions about NFTs.
NFT stands for non-fungible token: a cryptographic token on a blockchain with a unique identification code that distinguishes it from every other token. NFTs are not interchangeable, which means no two NFTs are the same. An NFT can represent a unique artwork, GIF, image, video, audio album, in-game item, collectible and so on.
A blockchain is a distributed digital ledger that allows for the secure storage of data. By recording any kind of information—such as bank account transactions, the ownership of Non-Fungible Tokens (NFTs), or Decentralized Finance (DeFi) smart contracts—in one place, and distributing it to many different computers, blockchains ensure that data can’t be manipulated without everyone in the system being aware.
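The tamper-evidence described above comes from hash-chaining: each block stores the hash of the previous one, so altering any record invalidates every link after it. A toy sketch of the idea — real chains add consensus, digital signatures and much more:

```python
import hashlib
import json

# A minimal hash-chained ledger: each block commits to its predecessor.

def make_block(data, prev_hash):
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def chain_is_valid(chain):
    """Every block's 'prev' must match the hash of the block before it."""
    return all(block["prev"] == prev["hash"]
               for prev, block in zip(chain, chain[1:]))

genesis = make_block("genesis", "0" * 64)
b1 = make_block("alice pays bob 1 coin", genesis["hash"])
b2 = make_block("bob mints an NFT", b1["hash"])
chain = [genesis, b1, b2]
print(chain_is_valid(chain))   # → True

# Forging an early record breaks every link that follows it.
genesis["hash"] = make_block("genesis (forged)", genesis["prev"])["hash"]
print(chain_is_valid(chain))   # → False
```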
The value of an NFT comes from its ability to be traded freely and securely on the blockchain, which is not possible with other current digital ownership solutions. The NFT points to its location on the blockchain but doesn’t necessarily contain the digital property itself. Fungible assets are interchangeable: if you replace one bitcoin with another, you still have the same thing. A non-fungible item, such as a movie ticket, cannot be replaced by just any other ticket, because each ticket is unique to a specific time and place.
One of the defining characteristics of non-fungible tokens (NFTs) is that a digital asset can be tokenised to create a certificate of ownership that can be bought, sold and traded on the blockchain.
As with cryptocurrency, records of who owns what are stored on a ledger maintained by thousands of computers around the world. These records are extremely hard to forge, because the whole system operates on an open, distributed network.
NFTs also contain smart contracts—small computer programs that run on the blockchain—that give the artist, for example, a cut of any future sale of the token.
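Mechanically, that “cut of any future sale” is just an arithmetic split the marketplace contract performs at each resale. A hypothetical sketch — the percentage and price here are invented:

```python
# Royalty split on a secondary NFT sale: the contract routes a fixed
# percentage of each resale price back to the original artist.

def settle_resale(price, royalty_pct):
    """Return (artist_royalty, seller_proceeds) for a secondary sale."""
    royalty = price * royalty_pct / 100
    return royalty, price - royalty

artist_cut, seller_cut = settle_resale(price=2000, royalty_pct=10)
print(artist_cut, seller_cut)  # → 200.0 1800.0
```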
Non-fungible tokens (NFTs) aren't cryptocurrencies, but they do use blockchain technology. Many NFTs are based on Ethereum, where the blockchain serves as a ledger for all the transactions related to a given NFT and the properties it represents.
How do you make an NFT?
Anyone can create an NFT. All you need is a digital wallet, some ether and a connection to an NFT marketplace where you’ll be able to upload and sell your creations.
When you purchase an NFT, that purchase is recorded on the blockchain—the distributed ledger of transactions—and that entry acts as your proof of ownership.
The value of an NFT varies a lot based on the digital asset up for grabs. People use NFTs to trade and sell digital art, so when creating an NFT, you should consider the popularity of your digital artwork along with historical statistics.
In the year 2021, a digital artist called Pak created an artwork called The Merge. It was sold on the Nifty Gateway NFT market for $91.8 million.
Non-fungible tokens can also be investment opportunities: one can purchase an NFT and resell it at a profit. Certain NFT marketplaces let the creators of NFTs keep a percentage of the proceeds from subsequent sales of the assets they create.
Many people want to buy NFTs because it lets them support the arts and own something cool from their favorite musicians, brands, and celebrities. NFTs also give artists an opportunity to program in continual royalties if someone buys their work. Galleries see this as a way to reach new buyers interested in art.
There are many places to buy digital assets, such as OpenSea, and their policies vary. On NBA Top Shot, for instance, you sign up for a waitlist that can be thousands of people long. When a digital asset goes on sale, you are occasionally chosen to purchase it.
To mint an NFT, you must pay a gas fee to process the transaction on the Ethereum blockchain, but you can mint your NFT on a different blockchain, Polygon, to avoid paying gas fees. This option is available on OpenSea and simply means that your NFT will only be tradable on Polygon’s blockchain, not Ethereum’s. Mintable also lets you mint NFTs without paying any gas fees.
The answer is no. Non-fungible tokens are minted on a blockchain using cryptocurrencies on networks such as Ethereum, Solana and Polygon. Once a non-fungible token is minted, the transaction is recorded on the blockchain and the contract or license is awarded to whoever has that non-fungible token in their wallet.
You can sell your work and creations by attaching a license to them on the blockchain, where their ownership can be transferred. This lets you get exposure without losing full ownership of your work. Some of the most successful projects include CryptoPunks, Bored Ape Yacht Club, The Sandbox, World of Women and so on. These NFT projects have gained popularity globally and are owned by celebrities and other successful entrepreneurs. Owning one of these NFTs can give you a ticket to exclusive business meetings and valuable connections.
That’s a wrap. I hope you found this article enlightening. I’ve answered some questions with my limited knowledge of NFTs. If you have any questions or suggestions, feel free to drop them in the comment section below. I also have a question for you: is bitcoin an NFT? Let me know in the comments.
Royal Opera House; Bold Tendencies; Royal Festival Hall, London
Spellbinding Jamie Barton and company excel in Adele Thomas’s clear-sighted take on volatile Verdi; to the final frontier via Peckham; and kazoos at the ready for a Beano Concerto
Jealousy, obsession and ancient rivalry, churning around like toxic vapours, make Il trovatore one of Verdi’s most dangerous and combustible works. It requires exceptional singers – chorus, as well as soloists – of rare stamina and virtuosity. The story is muddling but passions are stark and raw. In a new staging for the Royal Opera House, conducted by Antonio Pappano, the director Adele Thomas has deftly lassoed the work’s wildness into a new coherence and invited us into a parallel world of storytelling. Leave behind preconceptions and rationality and travel with her, or be frustrated.
With designs by Annemarie Woods and choreography by Emma Woods, the production – first seen in Zurich in 2021 – finds echoes in the hellish fantasies of Hieronymus Bosch, as well as in the dark materials and omnipresent daemons beloved of Philip Pullman. The approach is fresh, highly professional and nothing like any other Trovatore you might have encountered. Pappano encourages the richest, most melancholy sounds from the orchestra – that ominous bass drum and timpani roll at the start; the ever loquacious woodwind, notably clarinets. Equally, he allows the heaving oompahs of the Anvil Chorus, one of the work’s many familiar tunes, to surge and swing with gusto.
These getaways are geared towards giving men a reboot, and include both the spiritual – breathwork and mindfulness – and the physical, through activities such as mountain walking and coasteering
One definition of wellness is “the state of being healthy, especially when it is something that you actively try to achieve”. However, in today’s society, wellness seems to be a catch-all term for anything remotely related to health, fitness, diet or cosmetics. You name it: biometric facials, sleep syncing, orgasmic meditation. Perhaps that’s why wellness is a word that’s still taboo among many men, particularly those from more traditional, working-class backgrounds.
Aside from a dilution of the term by fads and crazes, such as the ones listed above and those floating around on TikTok and Instagram, another issue may be a lack of retreats for men. We’re not talking the Ant Middleton-style “man-up or go home” military fitness camps, nor Wim Hof woo-woo, just solid, safe spaces for men to talk, improve physical and mental wellbeing and escape the pressures of everyday life to develop meaningful connections. Thankfully, there are some pioneers leading the way in male wellness, ranging from hiking escapes in Snowdonia and breathwork getaways in southern Spain to a GBTQ pop-up retreat in Europe, promoting communication and emotional connection.
Dutiful German generosity revealed in analysis of gratuity habits in six EU countries, the UK and US
In Germany it seems to be pretty much automatic, pretty much all the time. In France and Spain it all depends – presumably on social subtleties that you have to be French or Spanish to understand. In Italy, why would you even bother?
When, and how much, to tip is a question that has been vexing visitors to Europe for as long as people have been travelling around the continent. Outside their own country, it seems even Europeans don’t know the answer.
Plane can accommodate up to 16 people and is aimed at super-rich who ‘want to enjoy the money while still alive’
Forget partying on the ground: the super-rich are being told to do it at 33,000ft by a multimillionaire Dubai hotelier launching a £10,000-an-hour “five-star party jet”.
Kabir Mulchandani, the founder and chair of the luxury hotel group FIVE, has bought an Airbus ACJ TwoTwenty and claims to have transformed it into a “boundary-breaking fusion of hospitality and private aviation”, complete with dancing area, king-size bed and shower.
The Indigenous children – one of whom was just 11 months old – are thought to have eaten food dropped by rescuers and used their own ancestral knowledge
Malnourished and covered in insect bites, four Indigenous children were rescued alive from the Colombian Amazon on Friday afternoon, 40 days after the plane they were travelling in crashed into the jungle.
In a remarkable feat of resilience, the children survived heavy storms in one of the most inhospitable parts of the country, home to predatory animals and armed groups.
Researchers used a model to predict how the smoke would move through the region and said it wouldn’t pose a health risk
Smoke from Canadian wildfires that has descended upon parts of the eastern US and Canada in a thick haze has drifted over Norway and is expected to hit southern Europe, Norwegian officials said on Friday.
Using a climate forecast model, atmosphere and climate scientists with the Norwegian climate and environmental research institute (NILU) predicted how the smoke would travel through the atmosphere, flowing over the Scandinavian country before moving further south. The smoke was not expected to pose a health risk there.
Shares of Delta Air Lines Inc. DAL approached the end of the regular session Friday poised to achieve their longest winning streak on record, up for an 11th trading day. The stock has gained more than 13% in the period, and a close at Friday’s current levels would be Delta stock’s highest since March 8, when it closed at $39.73. Delta and other major U.S. airlines were in the black on Friday, with the U.S. Global Jets ETF looking at weekly gains of nearly 4%. U.S. airlines are bracing for a busy summer travel season.
There are more than 400 fires burning across Canada, with many out of control, and as smoke travels south it is prompting air quality alerts in the US
There are more than 400 wildfires burning across Canada, with many out of control, according to officials. The fires are unusual in their timing, size and location. The “fire season”, when weather conditions are ripe for conflagrations, has only just begun. A third of the fires are in the boreal forest in the eastern province of Quebec, a place not used to dealing with large blazes.
The powerful lights mounted on the border wall threaten the dark skies that make southern Arizona a biodiversity hotspot.
From The Intercept: “The Feds Have Thousands of Stadium Lights on the Border. Switching Them On Would Devastate Desert Ecosystems.”
Mo-Shing Chen, a world-renowned power engineering educator and researcher, died on 1 May at the age of 91.
The IEEE Fellow was a professor at the University of Texas at Arlington for more than 40 years. He founded the university’s Energy Systems Research Center in 1968 and served as its director until he retired in 2003.
Chen created UTA’s first Ph.D. program in electrical engineering in 1969, and it quickly became one of the nation’s largest and top-rated graduate programs in power systems engineering.
Chen’s research included the modeling of electrical loads, the effect of voltage control in energy savings, real-time testing to improve power system efficiency, computer representation of cogeneration systems, reducing efficiency losses in transmission lines, and voltage stability.
Through his work, he solved complex problems engineers were facing with power networks, from small, rural electric cooperatives to ones that serve large metropolitan areas including New York City’s Consolidated Edison Co.
He taught his students not only how to solve such problems but also how to identify and understand what caused the troubles.
Born in the village of Wuxing in China, Chen and his family moved to Taiwan in 1949 when he was a teenager. After Chen earned a bachelor’s degree in electrical engineering in 1954 from National Taiwan University in Taipei, he joined the Taiwan Power Co. as a power engineer in Wulai. There he became fascinated by difficult, real-world problems of power systems, such as frequent blackouts and sudden spikes of electric loads.
Deciding he wanted to pursue master’s and doctoral degrees in electrical engineering, Chen moved to the United States to do so at the University of Texas at Austin under the mentorship of Edith Clarke, an EE professor there. She had invented an early graphical calculator and worked on the design and construction of hydroelectric power systems including the Hoover Dam, located on the Nevada-Arizona border.
Clarke and Chen had lively discussions about their work, and they had mutual respect for one another. He studied under Clarke until she retired in 1957.
Chen earned his master’s degree in 1958 and his Ph.D. in 1962.
He joined UTA—then known as Arlington State College—in 1962 as an assistant professor of electrical engineering.
As a professor, Chen observed that electrical engineering programs at universities around the country were not meeting the needs of industry, so he founded UTA’s Power Systems Research Center. It was later renamed the Energy Systems Research Center.
He gained global recognition in the power industry through his intensive, two-week continuing-education course, Modeling and Analysis of Modern Power Systems, which he began teaching in 1967. Attendees learned how to design, operate, and stabilize systems. The course became the power industry’s hub for continuing education, attended by 1,500 participants from academia and industry. The attendees came from more than 750 universities and companies worldwide. Chen also traveled to more than 40 companies and universities to teach the course.
He mentored UTA’s first Ph.D. graduate, Howard Daniels, who became an IEEE life member and vice president of a multinational power company based in Switzerland. Chen went on to mentor more than 300 graduate students.
Chen this year was awarded one of UTA’s first College of Engineering Legacy Awards. The honor is designed to recognize a faculty member’s career-long performance and dedication to the university.
In 1968 he founded the Transmission and Substation Design and Operation Symposium. The event, still held today, serves as a forum for utility companies, engineers, contractors, and consultants to present and discuss trends and challenges.
He also created a distinguished-lecturer series at UTA and invited students, faculty, and industry engineers to campus to listen to speeches by power systems engineers including IEEE Fellow Charles Concordia and IEEE Life Fellow W.F. Tinney.
Chen said he was always cognizant that the primary purpose of a university was education, so before making any decision, he asked himself, “How will my students benefit?”
By the mid-1970s, the U.S. National Science Foundation consistently ranked UTA as one of the top power engineering programs in the country.
Chen said he believed any faculty member could teach top students, who generally need little help. A professor’s real service to society, he said, was turning average students into top-quality graduates who could compete with anyone.
Part of that process was recruiting, motivating, and mentoring students. Chen insisted that his graduate students have an office near his so he could be readily available for discussions.
Chen’s contagious enthusiasm and thorough understanding of power systems—along with a knack for communicating difficult concepts clearly, simply, and humorously—made him a popular professor. In 1976 he received the first Edison Electric Institute Power Engineering Educator Award. More than 50 of Chen’s students and colleagues endorsed him for the honor.
Chen founded the university’s first international visiting-scholars program in 1968. Through the program, more than 50 power systems researchers have spent a year at UTA, teaching and conducting research. Participants have come from China, Israel, Japan, Korea, Latvia, Macedonia, Spain, and Russia.
Chen was the principal investigator for more than 40 research projects at the Energy Systems Research Center. Many of them were supported by Consolidated Edison (ConEd) of New York and the Electric Power Research Institute, in Washington, D.C.
One of his first research projects involved creating a computer representation of an operational power system with Daniels. Running a computer was expensive in the late 1960s, and Chen and Daniels’ research helped decrease data acquisition costs from between US $10,000 and $20,000 to only 1 cent.
With that project, Chen quickly demonstrated his research value to the power industry.
In the first project Chen led for ConEd, he and his team created a computer representation of New York City’s underground electric power system. It was one of Chen’s favorite projects, he said, and he enjoyed looking back at his experiences with it.
“Before this study, computers were used to represent balanced systems, not unbalanced underground systems,” he once told me. “New York City is fundamentally a distribution system, not a transmission system. ConEd had paid $2 million to a different, very famous university to do this study, but it couldn’t deliver the results after two years. We bid $250,000 and delivered the results in nine months.”
ConEd’s CEO at the time said, “We asked for a Ford, and you delivered a Cadillac.” It was the beginning of a nearly 30-year relationship between Chen and the utility company.
Chen and his colleagues designed and built a small supervisory control and data acquisition system in the mid-1980s for a group of power companies in Texas. Such systems gather and analyze real-time data from power systems to monitor and control their equipment. Chen’s invention proved valuable when he and his team were modeling electric loads for analyzing power system stability, resulting in the reduction of blackouts.
He published more than 100 peer-reviewed papers, most of them in IEEE Transactions on Power Systems.
His awards included the 1984 IEEE Centennial Medal, honorary professorships from eight universities in China and Taiwan, and an honorary EE doctorate in 1997 from the Universidad Autonoma de Nuevo Leon, in Mexico.
He was a member of the Texas Society of Professional Engineers, the American Society of Engineering Education, IEEE–Eta Kappa Nu, Tau Beta Pi, the New York Academy of Sciences, and Sigma Xi.
In 2013 and 2014, I wrote extensively about new revelations regarding NSA surveillance based on the documents provided by Edward Snowden. But I had a more personal involvement as well.
I wrote the essay below in September 2013. The New Yorker agreed to publish it, but the Guardian asked me not to. It was scared of UK law enforcement, and worried that this essay would reflect badly on it. And given that the UK police would raid its offices in July 2014, it had legitimate cause to be worried.
Now, ten years later, I offer this as a time capsule of what those early months of Snowden were like...
This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
Does your robot know where it is right now? Does it? Are you sure? And what about all of its robot friends—do they know where they are too? This is important. So important, in fact, that some would say that multirobot simultaneous localization and mapping (SLAM) is a crucial capability to obtain timely situational awareness over large areas. Those some would be a group of MIT roboticists who just won the IEEE Transactions on Robotics Best Paper Award for 2022, presented at this year’s IEEE International Conference on Robotics and Automation (ICRA 2023), in London. Congratulations!
Out of more than 200 papers published in Transactions on Robotics last year, reviewers and editors voted to present the 2022 IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award to Yulun Tian, Yun Chang, Fernando Herrera Arias, Carlos Nieto-Granda, Jonathan P. How, and Luca Carlone from MIT for their paper “Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems.”
“The editorial board, and the reviewers, were deeply impressed by the theoretical elegance and practical relevance of this paper and the open-source code that accompanies it. Kimera-Multi is now the gold standard for distributed multirobot SLAM.”
—Kevin Lynch, editor in chief, IEEE Transactions on Robotics
Robots rely on simultaneous localization and mapping to understand where they are in unknown environments. But unknown environments are a big place, and it takes more than one robot to explore all of them. If you send a whole team of robots, each of them can explore their own little bit, and then share what they’ve learned with one another to make a much bigger map that they can all take advantage of. Like most things robot, this is much easier said than done, which is why Kimera-Multi is so useful and important. The award-winning researchers say that Kimera-Multi is a distributed system that runs locally on a bunch of robots all at once. If one robot finds itself in communications range with another robot, they can share map data, and use those data to build and improve a globally consistent map that includes semantic annotations.
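The rendezvous-and-merge idea is easier to see in miniature. The sketch below is a toy illustration of the data flow only, not Kimera-Multi’s actual API or math: a real distributed SLAM system fuses maps by solving a joint pose-graph optimization, while this toy simply averages the landmark estimates two robots share. All names here are hypothetical.

```python
# Toy sketch of map sharing in multirobot SLAM. Illustrative only:
# real systems (like Kimera-Multi) solve a joint pose-graph optimization
# rather than naively averaging landmark positions.

class RobotMap:
    def __init__(self, robot_id):
        self.robot_id = robot_id
        # landmark_id -> (x, y) estimate, expressed in a shared frame
        self.landmarks = {}

    def observe(self, landmark_id, x, y):
        self.landmarks[landmark_id] = (x, y)

    def merge_from(self, other):
        """Fuse another robot's landmark estimates into our map.

        Landmarks seen by both robots are averaged; landmarks only the
        other robot has seen are copied in wholesale.
        """
        for lid, (ox, oy) in other.landmarks.items():
            if lid in self.landmarks:
                sx, sy = self.landmarks[lid]
                self.landmarks[lid] = ((sx + ox) / 2, (sy + oy) / 2)
            else:
                self.landmarks[lid] = (ox, oy)

a = RobotMap("alpha")
b = RobotMap("bravo")
a.observe("door_1", 1.0, 2.0)
b.observe("door_1", 1.2, 2.2)   # same landmark, slightly different estimate
b.observe("tree_7", 5.0, 0.0)   # only bravo has seen this one

a.merge_from(b)                  # simulate a rendezvous in comms range
print(a.landmarks["door_1"])     # averaged estimate: (1.1, 2.1)
print(a.landmarks["tree_7"])     # copied from bravo: (5.0, 0.0)
```

The point of the toy is the topology, not the math: each robot carries its own partial map, and any pairwise meeting is an opportunity to reconcile estimates into a more globally consistent whole.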
Since publishing the paper, the researchers have run real-world tests with Kimera-Multi: three robots traveled a combined distance of more than 2 kilometers, and the accuracy of the shared map improved significantly as the robots talked to each other.
More details and code are available on GitHub.
Transactions on Robotics also selected some excellent Honorable Mentions for 2022:
Stabilization of Complementarity Systems via Contact-Aware Controllers, by Alp Aydinoglu, Philip Sieg, Victor M. Preciado, and Michael Posa
Autonomous Cave Surveying With an Aerial Robot, by Wennie Tabib, Kshitij Goel, John Yao, Curtis Boirum, and Nathan Michael
Prehensile Manipulation Planning: Modeling, Algorithms and Implementation, by Florent Lamiraux and Joseph Mirabel
Rock-and-Walk Manipulation: Object Locomotion by Passive Rolling Dynamics and Periodic Active Control, by Abdullah Nazir, Pu Xu, and Jungwon Seo
Origami-Inspired Soft Actuators for Stimulus Perception and Crawling Robot Applications, by Tao Jin, Long Li, Tianhong Wang, Guopeng Wang, Jianguo Cai, Yingzhong Tian, and Quan Zhang
For more than a century, utility companies have used electromechanical relays to protect power systems against damage that might occur during severe weather, accidents, and other abnormal conditions. But the relays could neither locate the faults nor accurately record what happened.
Then, in 1977, Edmund O. Schweitzer III invented the digital microprocessor-based relay as part of his doctoral thesis. Schweitzer’s relay, which could locate a fault to within 1 kilometer, set new standards for utility reliability, safety, and efficiency.
Employer: Schweitzer Engineering Laboratories
Title: President and CTO
Education: Purdue University, West Lafayette, Ind.; Washington State University, Pullman
To develop and manufacture his relay, he launched Schweitzer Engineering Laboratories in 1982 from his basement in Pullman, Wash. Today SEL manufactures hundreds of products that protect, monitor, control, and automate electric power systems in more than 165 countries.
Schweitzer, an IEEE Life Fellow, is his company’s president and chief technology officer. He started SEL with seven workers; it now has more than 6,000.
The 40-year-old employee-owned company continues to grow. It has four manufacturing facilities in the United States. Its newest one, which opened in March in Moscow, Idaho, fabricates printed circuit boards.
Schweitzer has received many accolades for his work, including the 2012 IEEE Medal in Power Engineering. In 2019 he was inducted into the U.S. National Inventors Hall of Fame.
Power system faults can happen when a tree or vehicle hits a power line, a grid operator makes a mistake, or equipment fails. The fault shunts extra current to some parts of the circuit, shorting it out.
If no protective scheme or device is installed to safeguard the equipment and ensure continuity of the power supply, an outage or blackout could propagate throughout the grid.
Overcurrent is not the only damage that can occur, though. Faults also can change voltages, frequencies, and the direction of current.
A protection scheme should quickly isolate the fault from the rest of the grid, thus limiting damage on the spot and preventing the fault from spreading to the rest of the system. To do that, protection devices must be installed.
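The core decision such a protection device makes can be illustrated with a toy overcurrent check. The threshold and the consecutive-sample delay below are illustrative values, not settings from any real relay:

```python
# Toy overcurrent relay logic: trip only when current stays above the
# pickup threshold for several consecutive samples, so brief transients
# don't cause nuisance trips. Values are illustrative, not from any
# actual SEL product.

PICKUP_AMPS = 600.0     # hypothetical pickup threshold
TRIP_DELAY_SAMPLES = 3  # overcurrent must persist this many samples

def should_trip(current_samples):
    """Return True if current exceeds PICKUP_AMPS for
    TRIP_DELAY_SAMPLES consecutive samples."""
    consecutive = 0
    for amps in current_samples:
        consecutive = consecutive + 1 if amps > PICKUP_AMPS else 0
        if consecutive >= TRIP_DELAY_SAMPLES:
            return True
    return False

print(should_trip([400, 410, 405]))            # normal load -> False
print(should_trip([400, 2500, 2600, 2700]))    # sustained fault -> True
print(should_trip([400, 2500, 400, 2600, 400]))  # transient spikes -> False
```

The consecutive-sample requirement is a crude stand-in for the time-delay coordination real protection schemes use so that only the device nearest the fault trips.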
That’s where Schweitzer’s digital microprocessor-based relay comes in. He perfected it in 1982. It later was commercialized and sold as the SEL-21 digital distance relay/fault locator.
Schweitzer says his relay was, in part, inspired by an event that took place during his first year of college.
“Back in 1965, when I was a freshman at Purdue University, a major blackout left millions without power for hours in the U.S. Northeast and Ontario, Canada,” he recalls. “It was quite an event, and I remember it well. I learned many lessons from it. One was how difficult it was to restore power.”
He says he also was inspired by the book Protective Relays: Their Theory and Practice. He read it while an engineering graduate student at Washington State University, in Pullman.
“I bought the book on the Thursday before classes began and read it over the weekend,” he says. “I couldn’t put it down. I was hooked.
“I realized that these solid-state devices were special-purpose signal processors. They read the voltage and current from the power systems and decided whether the power systems’ apparatuses were operating correctly. I started thinking about how I could take what I knew about digital signal processing and put it to work inside a microprocessor to protect an electric power system.”
The 4-bit and 8-bit microprocessors were new at the time.
“I think this is how most inventions start: taking one technology and putting it together with another to make new things,” he says. “The inventors of the microprocessor had no idea about all the kinds of things people would use it for. It is amazing.”
He says he was introduced to signal processing, signal analysis, and how to use digital techniques in 1968 while at his first job, working for the U.S. Department of Defense at Fort Meade, in Maryland.
Faster ways to clear faults and improve cybersecurity
Schweitzer continues to invent ways of protecting and controlling electric power systems. In 2016 his company released the SEL-T400L, which samples a power system every microsecond to detect the time between traveling waves moving at the speed of light. The idea is to quickly detect and locate transmission line faults.
The relay decides whether to trip a circuit or take other actions in 1 to 2 milliseconds. Previously, it would take a protective relay on the order of 16 ms. A typical circuit breaker takes 30 to 40 ms in high-voltage AC circuits to trip.
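The double-ended traveling-wave method behind this kind of relay reduces to simple arithmetic: a fault launches waves toward both ends of the line, and the difference in arrival times pins down the fault location. A sketch with illustrative numbers (not SEL-T400L internals):

```python
# Double-ended traveling-wave fault location, in sketch form.
# A fault at distance x from terminal A launches waves both ways;
# measuring arrival times t_a and t_b at the two ends gives
#   x = (L + V * (t_a - t_b)) / 2.
# Numbers below are illustrative.

V = 2.95e8         # wave propagation speed, m/s (a bit below light speed)
LINE_LEN = 100e3   # line length, meters

def fault_distance(t_a, t_b):
    """Distance (m) of the fault from terminal A, given wave arrival
    times (s) at terminals A and B on a synchronized clock."""
    return (LINE_LEN + V * (t_a - t_b)) / 2

# A fault 30 km from terminal A arrives at A after 30e3/V seconds and
# at B after 70e3/V seconds:
print(fault_distance(30e3 / V, 70e3 / V))  # 30000.0 (meters)
```

The microsecond sampling the article mentions is what makes the arrival-time difference measurable at all: at roughly 300 meters of wave travel per microsecond, timestamp precision translates directly into location precision.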
“The inventors of the microprocessor had no idea about all the kinds of things people would use it for. It is amazing.”
“I like to talk about the need for speed,” Schweitzer says. “In this day and age, there’s no reason to wait to clear a fault. Faster tripping is a tremendous opportunity from a point of view of voltage and angle stability, safety, reducing fire risk, and damage to electrical equipment.
“We are also going to be able to get a lot more out of the existing infrastructure by tripping faster. For every millisecond in clearing time saved, the transmission system stability limits go up by 15 megawatts. That’s about one feeder per millisecond. So, if we save 12 ms, all of the sudden we are able to serve 12 more distribution feeders from one part of one transmission system.”
The time-domain technology also will find applications in transformer and distribution protection schemes, he says, as well as have a significant impact on DC transmission.
What excites Schweitzer today, he says, is the concept of energy packets, which he and SEL have been working on. The packets measure energy exchange for all signals including distorted AC systems or DC networks.
“Energy packets precisely measure energy transfer, independent of frequency or phase angle, and update at a fixed rate with a common time reference such as every millisecond,” he says. “Time-domain energy packets provide an opportunity to speed up control systems and accurately measure energy on distorted systems—which challenges traditional frequency-domain calculation methods.”
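As a rough illustration of the idea (the function and numbers below are hypothetical, not SEL’s implementation), an energy packet is just the integral of instantaneous power v(t)·i(t) over a fixed, commonly timed window, which is why it works the same for distorted AC and for DC:

```python
# Sketch of the "energy packet" concept: accumulate instantaneous power
# v(t)*i(t) over fixed-length windows on a common time base. Hypothetical
# code, not SEL's implementation.

def energy_packets(voltages, currents, dt, packet_len):
    """Split synchronized voltage/current samples into windows of
    `packet_len` samples and return the energy (joules) transferred in
    each window. A trailing partial window yields a shorter packet."""
    packets = []
    for start in range(0, len(voltages), packet_len):
        energy = sum(v * i * dt
                     for v, i in zip(voltages[start:start + packet_len],
                                     currents[start:start + packet_len]))
        packets.append(energy)
    return packets

# DC example: 100 V at 2 A, sampled every 0.1 ms, 10 samples per packet
# (a 1 ms packet) -> 200 W * 1 ms = 0.2 J per packet.
v = [100.0] * 20
i = [2.0] * 20
print(energy_packets(v, i, 1e-4, 10))  # two packets of ~0.2 J each
```

Because the packet is a plain time-domain integral, no assumption about frequency or phase angle is needed, which is the contrast with frequency-domain energy metering that the quote draws.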
He also is focusing on improving the reliability of critical infrastructure networks by improving cybersecurity, situational awareness, and performance. Plug-and-play and best-effort networking aren’t safe enough for critical infrastructure, he says.
“SEL OT SDN technology solves some significant cybersecurity problems,” he says, “and frankly, it makes me feel comfortable for the first time with using Ethernet in a substation.”
Schweitzer didn’t start off planning to launch his own company. He began a successful career in academia in 1977 after joining the electrical engineering faculty at Ohio University, in Athens. Two years later, he moved to Pullman, Wash., where he taught at Washington State’s Voiland College of Engineering and Architecture for the next six years. It was only after sales of the SEL-21 took off that he decided to devote himself to his startup full time.
It’s little surprise that Schweitzer became an inventor and started his own company, as his father and grandfather were inventors and entrepreneurs.
His grandfather, Edmund O. Schweitzer, who held 87 patents, invented the first reliable high-voltage fuse in collaboration with Nicholas J. Conrad in 1911, the year the two founded Schweitzer and Conrad—today known as S&C Electric Co.—in Chicago.
Schweitzer’s father, Edmund O. Schweitzer Jr., had 208 patents. He invented several line-powered fault-indicating devices, and he founded the E.O. Schweitzer Manufacturing Co. in 1949. It is now part of SEL.
Schweitzer says a friend gave him the best financial advice he ever got about starting a business: Save your money.
“I am so proud that our 6,000-plus-person company is 100 percent employee-owned,” Schweitzer says. “We want to invest in the future, so we reinvest our savings into growth.”
He advises those who are planning to start a business to focus on their customers and create value for them.
“Unleash your creativity,” he says, “and get engaged with customers. Also, figure out how to contribute to society and make the world a better place.”
Stephen Cass: Welcome to Fixing the Future, an IEEE Spectrum podcast. This episode is brought to you by IEEE Xplore, the digital library with over 6 million technical documents and free search. I’m senior editor Stephen Cass, and today I’m talking with a former Spectrum editor, Sally Adee, about her new book, We Are Electric: The New Science of Our Body’s Electrome. Sally, welcome to the show.
Sally Adee: Hi, Stephen. Thank you so much for having me.
Cass: It’s great to see you again, but before we get into exactly what you mean by the body’s electrome and so on, I see that in researching this book, you actually got yourself zapped quite a bit in a number of different ways. So I guess my first question is: are you okay?
Adee: I mean, as okay as I can imagine being. Unfortunately, there’s no experimental sort of condition and control condition. I can’t see the self I would have been in the multiverse version of myself that didn’t zap themselves. So I think I’m saying yes.
Cass: The first question I have then is what is an electrome?
Adee: So the electrome is this word, I think, that’s been burbling around the bioelectricity community for a number of years. The first time it was committed to print is a 2016 paper by this guy called Arnold De Loof, a researcher out in Europe. But before that, a number of the researchers I spoke to for this book told me that they had started to see it in papers that they were reviewing. And I think it wasn’t sort of defined consistently always because there’s this idea that seems to be sort of bubbling to the top, bubbling to the surface, that there are these electrical properties that the body has, and they’re not just epiphenomena, and they’re not just in the nervous system. They’re not just action potentials, but that there are electrical properties in every one of our cells, but also at the organ level, potentially at the sort of entire system level, that people are trying to figure out what they actually do.
And just as action potentials aren’t just epiphenomena, but actually our control mechanisms, they’re looking at how these electrical properties work in the rest of the body, like in the cells, membrane voltages and skin cells, for example, are involved in wound healing. And there’s this idea that maybe these are an epigenetic variable that we haven’t been able to conscript yet. And there’s such promise in it, but a lot of the research, the problem is that a lot of the research is being done across really far-flung scientific communities, some in developmental biology, some of it in oncology, a lot of it in neuroscience, obviously. But what this whole idea of the electrome is— I was trying to pull this all together because the idea behind the book is I really want people to just develop this umbrella of bioelectricity, call it the electrome, call it bioelectricity, but I kind of want the word electrome to do for bioelectricity research what the word genome did for molecular biology. So that’s basically the spiel.
Cass: So I want to surf back to a couple points you raised there, but first off, just for people who might not know, what is an action potential?
Adee: So the action potential is the electrical mechanism by which the nervous signal travels, either to actuate motion at the behest of your intent or to gain sensation and sort of perceive the world around you. And that’s the electrical part of the electrochemical nervous impulse. So everybody knows about neurotransmitters at the synapse and— well, not everybody, but probably Spectrum listeners. They know about the serotonin that’s released and all these other little guys. But the thing is you wouldn’t be able to have that release without the movement of charged particles called ions in and out of the nerve cell that actually send this impulse down and allow it to travel at a rate of speed that’s fast enough to let you yank your hand away from a hot stove when you’ve touched it, before you even sort of perceive that you did so.
Cass: So that actually brings me to my next question. So you may remember in some of Spectrum’s editorial meetings, when we were deciding if a tech story was for us or not, that literally, we would often ask, “Where is the moving electron?” But bioelectricity is not really based on moving electrons. It’s based on these ions.
Adee: Yeah. So let’s take the neuron as an example. So what you’ve got is— let me do like a— imagine a spherical cow for a neuron, okay? So you’ve got a blob and it’s a membrane, and that separates the inside of your cell from the outside of your cell. And this membrane is studded with tens of thousands of little pores, I think, called ion channels. And the pores are not just sieve pores. They’re not inert. They’re really smart. And they decide which ions they like. Now, let’s go to the ions. Ions are suffusing your extracellular fluid, all the stuff that bathes you. It’s basically the reason they say you’re 66 percent water or whatever. This is like sea water. It’s got sodium, potassium, calcium, etc., and these ions are charged particles.
So when you’ve got a cell— the neuron— it likes potassium, it lets it in. It doesn’t really like sodium so much. It’s got very strong preferences. So in its resting state, which is its happy place, those channels allow potassium ions to enter. And those are probably where the electrons are, actually, because an ion, it’s got a plus-one charge or a minus-one charge based on— but let’s not go too far into it. But basically, the cell allows the potassium to come inside, and in its resting state, which is its happy place, the separation of the potassium from the sodium causes, for all sorts of complicated reasons, a charge inside the cell that is minus 70 degree— sorry, minus 70 millivolts with respect to the extracellular fluid.
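The minus-70-millivolt figure Adee mentions has a textbook basis: the Nernst equation gives the equilibrium potential at which an ion’s concentration gradient and the membrane voltage balance. A short sketch, using standard physiology values rather than numbers from the conversation:

```python
import math

# Nernst equilibrium potential: the textbook formula for the membrane
# voltage at which an ion's diffusion gradient and electrical gradient
# cancel out. Concentration values below are typical mammalian figures,
# not numbers from this conversation.

R = 8.314     # gas constant, J/(mol*K)
T = 310.0     # body temperature, K
F = 96485.0   # Faraday constant, C/mol

def nernst_mv(conc_out, conc_in, z=1):
    """Equilibrium potential in millivolts for an ion of valence z,
    given its concentrations outside and inside the cell."""
    return 1000 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# Potassium: ~5 mM outside, ~140 mM inside
print(round(nernst_mv(5, 140)))    # -89
# Sodium: ~145 mM outside, ~12 mM inside
print(round(nernst_mv(145, 12)))   # +67
```

Potassium alone would pull the cell to about minus 89 millivolts; the actual resting value of roughly minus 70 sits between the potassium and sodium equilibria because the membrane is slightly permeable to sodium too.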
Cass: Before I read your book, I kind of had the idea that how neurons use electricity was, essentially, settled science, very well understood, all kind of squared away, and this was how the body used electricity. But even when it came to neurons, there’s a lot of fundamentals, kind of basic things about how neurons use electricity that we really only established relatively recently. Some of the research you’re talking about is definitely not a century-old kind of basic science about how these things work.
Adee: No, not at all. In fact, there was a paper released in 2018 that I didn’t include, which I’m really annoyed by. I just found it recently. Obviously, you can’t find all the papers. But it’s super interesting because it blends that whole sort of ionic basis of the action potential with another thing in my book that’s about how cell development is a little bit like a battery getting charged. Cells assume an electrical identity that may actually be in charge of the fate they meet. The book goes into more detail, but it’s like when a cell is a stem cell or a fertilized egg, it’s depolarized. It’s at zero. And then when it becomes a nerve cell, it goes to that minus 70 that I was talking about before. If it becomes a fat cell, it’s at minus 50. If it’s musculoskeletal tissue, it goes to minus 90. Liver cells are at around minus 40. And so you’ve got real identitarian diversity, electrical diversity in your tissues, which has something to do with what they end up doing in the society of cells. So this paper that I was talking about, the 2018 paper, they actually looked at neurons. This was work from Denis Jabaudon at the University of Geneva, and they were looking at how neurons actually differentiate. Because when baby neurons are born— your brain is made of all kinds of cells. It’s not just cortical cells. There’s a staggering variety of classes of neurons. And as cells actually differentiate, you can watch their voltage change, just like you can do in the rest of the body with these electrosensitive dyes. So that’s an aspect of the brain that we hadn’t even realized until 2018.
Cass: And that all leads me to my next point, which is that when we think bioelectricity, we think, okay, nerves zapping around. But neurons are not the only bioelectric network in the body. So talk about some of the other sorts of electrical networks we have that are completely, or largely, separate from our neural networks.
Adee: Well, so Michael Levin is a professor at Tufts University. He does all kinds of other stuff, but mainly, I guess, he’s like the Paul Erdos of bioelectricity, I like to call him, because he’s sort of the central node. He’s networked into everybody, and I think he’s really trying to, again, also assemble this umbrella of bioelectricity to study this all in the aggregate. So his idea is that we are really committed to this idea of bioelectricity being in charge of our sort of central communications network, the way that we understand the environment around us and the way that we understand our ability to move and feel within it. But he thinks that bioelectricity is also how— that the nervous system kind of hijacked this mechanism, which is way older than any nervous system. And he thinks that we have another underlying network that is about our shape, and that this is bioelectrically mediated in really important ways, which impacts development, of course, but also wound healing. Because if you think about the idea that your body understands its own shape, what happens when you get a cut? How does it heal it? It has to go back to some sort of memory of what its shape is in order to heal it over. In animals that regenerate, they have a completely different electrical profile after they’ve been—so after they’ve had an arm chopped off.
So it’s a very different electrical— yeah, it’s a different electrical process that allows a starfish to regrow a limb than the one that allows us to scar over. So you’ve got this thing called a wound current. Your skin cells are arranged in this real tight wall, like little soldiers, basically. And what’s important is that they’re polarized in such a way that if you cut your skin, all the sort of ions flow out in a certain way, which creates this wound current, which then generates an electric field, and the electric field acts like a beacon. It’s like a bat signal, right? And it guides in these little helper cells, the macrophages that come and gobble up the mess and the keratinocytes and the guys who build it back up again and scar you over. And it starts out strong, and as you scar over, as the wound heals, it very slowly goes away. By the time the wound is healed, there’s no more field. And what was super interesting is this guy, Richard Nuccitelli, invented this thing called the Dermacorder that’s able to sense and evaluate the electric field. And he found that in people over the age of 65, the wound field is less than half of what it is in people under 25. And that actually goes in line with another weird thing about us, which is that our bioelectricity— or sorry, our regeneration capabilities are time-dependent and tissue-dependent.
So you probably know that intestinal tissue regenerates all the time. You’re going to digest next week’s food with totally different cells than this morning’s food. But we’re also time-dependent, because when we’re just two cells, if you cleave that embryo in half, you get identical twins. Later on during fetal development, healing is totally scarless, something we found out when we started being able to do fetal surgery in the womb and it was determined that fetuses heal basically scarlessly. Then we’re born, and until we’re somewhere between the ages of 7 and 11, if you chop off a fingertip, it regenerates perfectly, including the nail. But after that, we lose the ability. So it seems like the older we get, the less we regenerate. And so various programs are now trying to figure out how to take control of various aspects of our bioelectrical systems to do things like radically accelerate healing, for example, or possibly re-engage the body’s developmental processes in order to regenerate preposterous things like a limb. I mean, it sounds preposterous now. Maybe in 20 years, it won’t.
Cass: I want to get into some of the technologies that people are thinking of building on this new science. But part of the story is that the history of this field, both scientifically and technologically, has been plagued by the shadow of quackery. Can you talk a little bit about that? On the one hand, there are some things, some very bad ideas, that we’re very glad we stopped doing; on the other, that history has cast a shadow on current research and on trying to get real therapies to patients.
Adee: Yeah, absolutely. That was actually one of my favorite chapters to write, was the spectacular pseudoscience one, because, I mean, that is so much fun. So it can be boiled down to the fact that we were trigger happy because we see this electricity, we’re super excited about it. We start developing early tools to start manipulating it in the 1700s. And straight away, it’s like, this is an amazing new tool, and there’s all these sort of folk cures out there that we then decide that we’re going to take— not into the clinic. I don’t know what you’d call it, but people just start dispensing this stuff. This is separate from the discovery of endogenous electrical activity, which is what Luigi Galvani famously discovered in the late 1700s. He starts doing this. He’s an anatomist. He’s not an electrician. Electrician, by the way, is what they used to call the sort of literati who were in charge of discovery around electricity. And it had a really different connotation at the time, that they were kind of like the rocket scientists of their day.
But Galvani’s just an anatomist, and he starts doing all of these experiments using these new tools to zap frogs in various ways and permutations. And he decides that he has answered a whole different, older question, which is: how does man’s will animate his hands and let him feel the world around him? And he says, “This is electrical in nature.” This was a long-standing mystery; people had been bashing their heads against it for 100, 200 years. But he says that this is electrical, and there’s a big, long fight, which I won’t get into too much, between Volta, the guy who invented the battery, and Galvani. Volta says, “No, this is not electrical.” Galvani says, “Yes, it is.” But owing to events, when Volta invents the battery, he basically wins the argument, not because Galvani was wrong, but because Volta had created something useful. He had created a tool that people could use to advance the study of all kinds of things. Galvani’s idea that we have an endogenous electrical impulse didn’t lead to anything that anybody could use, because we didn’t have tools sensitive enough to really measure it. We only had indirect measurements of it.
And after Galvani dies in ignominy, his nephew decides to take it upon himself to single-handedly rescue his uncle’s reputation. The problem is, the way he does it is with a series of grotesque, spectacular experiments. He very famously reanimated— well, zapped until they shivered, the corpses of dead criminals, and he was doing really intense things like sticking electrodes connected to huge voltaic piles (proto-batteries) into the rectums of dead prisoners, which would make them sit up halfway and point at the people who were assembled. Very titillating stuff. Many celebrities of the time would crowd around these demonstrations.
Anyway, so Galvani—or sorry, Aldini, the nephew—basically just opens the door for everyone to say, “Look what we can do with electricity.” Then in short order, there’s a guy who creates something called the Celestial Bed. They’ve got rings, they’ve got electric belts for stimulating the nethers. The Celestial Bed is supposed to help infertile couples. That’s how wild electricity was in those days. You know how everybody went crazy for crypto scams last year? Electricity was like the crypto of the 1820s or 1830s. And people would pay to spend a night in the Celestial Bed—£9,000 in today’s money. And it didn’t even use electricity. It used the idea of electricity. It was homeopathy, but with electricity. You don’t even know where to start. So this is the caliber of pseudoscience, and it has really echoed down through the years. That was in the 1800s. But when people submit papers or grant applications today—I heard more than one researcher say this to me—people would look at this electric stuff, and they’d be like, “Does anyone still believe this shit?” And it’s like, this is rigorous science, but it’s been tarnished by the association.
Cass: So you mentioned wound care, and the book talks about some of the ways [inaudible] wound care. But researchers are also looking at other really ambitious ideas, like regenerating limbs, as an extension of wound care. And you also make the point that bioelectricity could be used for diagnostics, and then possibly treatments, for things like cancer—thinking about cancer in a very different way than the very tightly focused genetic view we have now, and thinking about it, kind of literally, in a wider context. So can you talk about that a little bit?
Adee: Sure. And I want to start by saying that I went to a lot of trouble to be really careful in the book. I’ve had cancer in my family, and it’s tough to talk about, because you don’t want to give people the idea that there’s a cure for cancer around the corner when this is basic research and intriguing findings. It’s not fair. And I struggled. I thought for a while, “Do I even bring this up?” But the ideas behind it are so intriguing, and if there were more research dollars thrown at it, or pounds, or Swiss francs, you might really start moving the needle on some of this stuff. The idea is, there are two electrical— oh God, I don’t want to say avenues, but it is unfortunately what I have to say—two electrical avenues to pursue in cancer. The first is something that a researcher called Mustafa Djamgoz at Imperial College here in the UK has been studying since the ‘90s. He used to be a neurobiologist, looking at vision, and he was talking to some of his oncologist friends, and they gave him some cancer cell lines, and he started looking at the electrical behavior of cancer cells, and he started finding some really weird behaviors.
Cancer cells that should not have had anything to do with action potentials, like cells from prostate cancer lines, when he looked at them, they were oscillating like crazy, as if they were nerves. And then he started looking at other kinds of cancer cells, and they were all showing this oscillating behavior. So he spent something like seven years bashing his head against the wall. Nobody wanted to listen to him. But now way more people are investigating this. There’s going to be an ion-channels-in-cancer symposium, I think later this month, actually, in Italy. And he and a lot of other researchers, like Annarosa Arcangeli, have found that the reason cancer cells may have these oscillating properties is that this is how they communicate with each other that it’s time to leave the nest of the tumor and start invading and metastasizing. Separately, there have been very intriguing findings. This is really early days—it’s only a couple of years that they’ve started noticing this, but there have been a couple of papers now. People who are on certain kinds of ion channel blockers for neurological conditions like epilepsy have cancer profiles that are slightly different from normal: if they do get cancer, they are slightly less likely to die of it, in the aggregate. Nobody should be starting to eat ion channel blockers.
But they’re starting to zero in on which particular ion channels might be responsible, and it’s not just the ones that you and I have. In these cancers, it’s an expression of something that normally only exists when we’re developing in the womb. It’s part of the reason that we can grow ourselves so quickly, which, of course, makes sense, because that’s what cancer does when it metastasizes: it grows really quickly. So there’s a lot of work right now trying to identify how exactly to target these. And it wouldn’t be a cure for cancer. It would be a way to keep a tumor in check. This is part of a strategy that has been proposed in the UK for some kinds of cancer, like the triple-negative kind that just keeps coming back. Instead of subjecting someone to radiation and chemo, especially when they’re older, really screwing up their quality of life while possibly not even giving them that much more time, what if you treated cancer more like a chronic disease and kept it managed? Maybe that gives a person 10 or 20 years. That’s a huge amount of time. And all without wrecking their quality of life.
This is a whole conversation that’s being had, but that’s one avenue. And there’s a lot of research going on in this right now that may yield fruit sort of soon. The much more sci-fi version of this, the studies have mainly been done in tadpoles, but they’re so interesting. So Michael Levin, again, and his postdoc at the time, I think, Brook Chernet, they were looking at what happens— so it’s uncontroversial that as a cancer cell-- so let’s go back to that society of cells thing that I was talking about. You get fertilized egg, it’s depolarized, zero, but then its membrane voltage charges, and it becomes a nerve cell or skin cell or a fat cell. What’s super interesting is that when those responsible members of your body’s society decide to abscond and say, “Screw this. I’m not participating in society anymore. I’m just going to eat and grow and become cancer,” their membrane voltage also changes. It goes much closer to zero again, almost like it’s having a midlife crisis or whatever.
So what they found, what Levin and Chernet found is that you can manipulate those cellular electrics to make the cell stop behaving cancerously. And so they did this in tadpoles. They had genetically engineered the tadpoles to express tumors, but when they made sure that the cells could not depolarize, most of those tadpoles did not express the tumors. And when they later took tadpoles that already had the tumors and they repolarized the voltage, those tumors, that tissue started acting like normal tissue, not like cancer tissue. But again, this is the sci-fi stuff, but the fact that it was done at all is so fascinating, again, from that epigenetic sort of body pattern perspective, right?
Cass: So staying with that sci-fi stuff, except this one is even closer to reality. And this goes back to some of the experiments in which you zapped yourself. Can you talk a little bit about some of these devices that you can wear, which appear to really enhance certain mental abilities? And some of these you [inaudible].
Adee: So the kit that I wore, I actually found out about it while I was at Spectrum, when I was at DARPATech. And this program manager told me about it, and I was really stunned to find out that just by running two milliamps of current through your brain, you would be able to improve your-- well, it’s not that your ability is improved. It was that you could go from novice to expert in half the time it would take you normally, according to the papers. And so I really wanted to try it. I was trying to actually get a feature written for IEEE Spectrum, but they kept ghosting me, and by the time I got to New Scientist, I was like, fine, I’m just going to do it myself. So they let me come over, and they put this kit on me, and it had these very custom electrodes that look like big daisies. And this guy had brewed his own electrolyte solution and sort of smashed it onto my head, and it was all very slimy.
So I was doing this video game called DARWARS Ambush!, which is just like a training— it’s a shooter simulation to help you with shooting. So it was a Gonzo stunt. It was not an experiment. But he was trying to replicate the conditions of me not knowing whether the electricity was on as much as he could. So he had it sort of behind my back, and he came in a couple of times and would either pretend to turn it on or whatever. And I was practicing and I was really bad at it. That is not my game. Let’s just put it that way. I prefer driving games. But it was really frustrating as well because I never knew when the electricity was on. So I was just like, “There’s no difference. This sucks. I’m terrible.” And that sort of inner sort of buzz kept getting stronger and stronger because I’d also made bad choices. I’d taken a red-eye flight the night before. And I was like, “Why would I do that? Why wouldn’t I just give myself one extra day to recover before I go in and do this really complicated feature where I have to learn about flow state and electrical stimulation?” And I was just getting really tense and just angrier and angrier. And then at one point, he came in after my, I don’t know, 5th or 6th, I don’t know, 400th horrible attempt where I just got blown up every time. And then he turned on the electricity, and I could totally feel that something had happened because I have a little retainer in my mouth just at the bottom. And I was like, “Whoa.” But then I was just like, “Okay. Well, now this is going to suck extra much because I know the electricity is on, so it’s not even a freaking sham condition.” So I was mad.
But then the thing started again, and all of a sudden, all the sort of buzzing little angry voices just stopped, and it was so profound. And I’ve talked about it quite a bit, but every time I remember it, I get a little chill because it was the first time I’d ever realized, number one, how pissy my inner voices are and just how distracting they are and how abusive they are. And I was like, “You guys suck, all of you.” But somebody had just put a bell jar between me and them, and that feeling of being free from them was profound. At first, I didn’t even notice because I was just busy doing stuff. And all of a sudden, I was amazing at this game and I dispatched all of the enemies and whatnot, and then afterwards, when they came in, I was actually pissed because I was just like, “Oh, now I get it right and you come in after three minutes. But the last times when I was screwing it up, you left me in there to cook for 20 minutes.” And they were like, “No, 20 minutes has gone by,” which I could not believe. But yeah, it was just a really fairly profound experience, which is what led me down this giant rabbit hole in the first place. Because when I wrote the feature afterwards, all of a sudden I started paying attention to the whole TDCS thing, which I hadn’t yet. I had just sort of been focusing [crosstalk].
Cass: And that’s transcranial—?
Adee: Oh sorry, transcranial direct current stimulation.
Cass: There you go. Thank you. Sorry.
Adee: No. Yeah, it’s a mouthful. But that’s when I started to notice that quackery we were talking about before. All that history was really informing the discussion around it, because people were just like, “Oh, sure. Why don’t you zap your brain with some electricity and become super smart.” And I was like, “Oh, did I fall for the placebo effect? What happened here?” And there was this big study from Australia where the researcher was just like, “When we average out all of the effects of TDCS, we find that it does absolutely nothing.” Other researchers stimulated a cadaver to see if the current would even reach the brain tissue and concluded it wouldn’t. But that’s basically what started me researching the book, and I was able to find answers to all those questions. But of course, TDCS is finicky, just like the electrome. Your living bone is conductive, so when you’re trying to put an electric field on your head, you have to account for things like how thick that person’s skull is in the place you want to stimulate. They’re still working out the parameters.
There have been some really good studies that show under which particular conditions they’ve been able to make it work. It does not work for all conditions for which it is claimed to work. There is some snake oil. There’s a lot left to be done, but a better understanding of how this affects the different layers of the, I guess, call it, electrome would probably make it something you could use with replicability. Is that a word? But that also applies to things like deep brain stimulation, which, for Parkinson’s, is fantastic. They’re trying to use it for depression too, and in some cases, it works so—I want to use a bad word—amazingly. Helen Mayberg, who runs these trials, said that for some people, this is an option of last resort, and then they get the stimulation, and they just get back on the bus. That’s her quote. And it’s like a switch that you flip. And for other people, it doesn’t work at all.
Cass: Well the book is packed with even more fantastic stuff, and I’m sorry we don’t have time to go through it, because literally, I could sit here and talk to you all day about this.
Adee: I didn’t even get into the frog battery, but okay, that’s fine. Fine, fine skip the frog. Sorry, I’m just kidding. I’m kidding, I’m kidding.
Cass: And thank you so much, Sally, for chatting with us today.
Adee: Oh, thank you so much. I really love talking about it, especially with you.
Cass: Today on Fixing the Future, we’re talking with Sally Adee about her new book on the body’s electrome. For IEEE Spectrum I’m Stephen Cass.
Inside today’s computers, phones, and other mobile devices, more and more sensors, processors, and other electronics are fighting for space. Taking up a big part of this valuable real estate are the cameras—just about every gadget needs a camera, or two, three, or more. And the most space-consuming part of the camera is the lens.
The lenses in our mobile devices typically collect and direct incoming light by refraction, using a curve in a transparent material, usually plastic, to bend the rays. So these lenses can’t shrink much more than they already have: To make a camera small, the lens must have a short focal length; but the shorter the focal length, the greater the curvature and therefore the thickness at the center. These highly curved lenses also suffer from all sorts of aberrations, so camera-module manufacturers use multiple lenses to compensate, adding to the camera’s bulk.
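To see why a short focal length forces a thick center, here is a rough back-of-the-envelope sketch using the thin-lens lensmaker’s equation for a symmetric biconvex element. The refractive index, aperture, and focal lengths are illustrative values, not figures from any real camera module:

```python
import math

def biconvex_geometry(f_mm, n=1.5, aperture_mm=3.0):
    """Thin-lens sketch: for a symmetric biconvex lens,
    1/f = (n - 1) * (1/R - 1/(-R)) = 2*(n - 1)/R, so R = 2*(n - 1)*f.
    Center thickness is roughly twice the sagitta of each surface."""
    R = 2 * (n - 1) * f_mm                  # required radius of curvature
    half_ap = aperture_mm / 2
    sag = R - math.sqrt(R**2 - half_ap**2)  # sagitta (bulge) of one surface
    return R, 2 * sag                       # radius, approx. center thickness

for f in (10.0, 3.0):
    R, t = biconvex_geometry(f)
    print(f"f = {f:4.1f} mm -> R = {R:5.2f} mm, center thickness ~ {t:.2f} mm")
```

Halving the focal length halves the radius of curvature, and the center thickness grows faster than linearly, which is exactly the miniaturization wall the article describes.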
With today’s lenses, the size of the camera and image quality are pulling in different directions. The only way to make lenses smaller and better is to replace refractive lenses with a different technology.
That technology exists. It’s the metalens, a device developed at Harvard and commercialized at Metalenz, where I am an applications engineer. We create these devices using traditional semiconductor-processing techniques to build nanostructures onto a flat surface. These nanostructures use a phenomenon called metasurface optics to direct and focus light. These lenses can be extremely thin—a few hundred micrometers thick, about twice the thickness of a human hair. And we can combine the functionality of multiple curved lenses into just one of our devices, further addressing the space crunch and opening up the possibility of new uses for cameras in mobile devices.
Before I tell you how the metalens evolved and how it works, consider a few previous efforts to replace the traditional curved lens.
Conceptually, any device that manipulates light does so by altering its three fundamental properties: phase, polarization, and intensity. The idea that any wave or wave field can be decomposed into these properties was proposed by Christiaan Huygens in 1678 and is a guiding principle in all of optics.
In this single metalens [between tweezers], the pillars are less than 500 nanometers in diameter. The black box at the bottom left of the enlargement represents 2.5 micrometers. Metalenz
In the early 19th century, the world’s most powerful economies placed great importance on the construction of lighthouses with larger and more powerful projection lenses to help protect their shipping interests. However, as these projection lenses grew larger, so did their weight. As a result, the physical size of a lens that could be raised to the top of a lighthouse and structurally supported placed limitations on the power of the beam the lighthouse could produce.
French physicist Augustin-Jean Fresnel realized that if he cut a lens into facets, much of the central thickness of the lens could be removed but still retain the same optical power. The Fresnel lens represented a major improvement in optical technology and is now used in a host of applications, including automotive headlights and brake lights, overhead projectors, and—still—for lighthouse projection lenses. However, the Fresnel lens has limitations. For one, the flat edges of facets become sources of stray light. For another, faceted surfaces are more difficult to manufacture and polish precisely than continuously curved ones are. It’s a no-go for camera lenses, due to the surface accuracy requirements needed to produce good images.
Another approach, now widely used in 3D sensing and machine vision, traces its roots to one of the most famous experiments in modern physics: Thomas Young’s 1802 demonstration of diffraction. This experiment showed that light behaves like a wave, and when the waves meet, they can amplify or cancel one another depending on how far the waves have traveled. The so-called diffractive optical element (DOE) based on this phenomenon uses the wavelike properties of light to create an interference pattern—that is, alternating regions of dark and light, in the form of an array of dots, a grid, or any number of shapes. Today, many mobile devices use DOEs to convert a laser beam into “structured light.” This light pattern is projected, captured by an image sensor, then used by algorithms to create a 3D map of the scene. These tiny DOEs fit nicely into small gadgets, yet they can’t be used to create detailed images. So, again, applications are limited.
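Young’s observation can be captured in a couple of lines: the intensity of two equal interfering waves depends only on their path difference relative to the wavelength. This is a generic textbook sketch (the 650-nm wavelength is just an example), not a model of any particular DOE:

```python
import math

WAVELENGTH_NM = 650.0  # an illustrative red laser wavelength

def fringe_intensity(path_diff_nm, i0=1.0):
    """Two equal waves with path difference d interfere with
    intensity I = 4 * I0 * cos^2(pi * d / wavelength): bright where
    the paths differ by a whole number of wavelengths, dark where
    they differ by half a wavelength."""
    return 4 * i0 * math.cos(math.pi * path_diff_nm / WAVELENGTH_NM) ** 2

print(round(fringe_intensity(0.0), 6))                # -> 4.0 (constructive)
print(round(fringe_intensity(WAVELENGTH_NM / 2), 6))  # -> 0.0 (destructive)
print(round(fringe_intensity(WAVELENGTH_NM), 6))      # -> 4.0 (constructive)
```

A DOE is, in essence, a surface engineered so that these constructive spots land exactly where the designer wants the dots of the projected pattern.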
Enter the metalens. Developed at Harvard by a team led by professor Federico Capasso, then-graduate student Rob Devlin, research associates Reza Khorasaninejad, Wei Ting Chen, and others, metalenses work in a way that’s fundamentally different from any of these other approaches.
A metalens is a flat glass surface with a semiconductor layer on top. Etched in the semiconductor is an array of pillars several hundred nanometers high. These nanopillars can manipulate light waves with a degree of control not possible with traditional refractive lenses.
Imagine a shallow marsh filled with seagrass standing in water. An incoming wave causes the seagrass to sway back and forth, sending pollen flying off into the air. If you think of that incoming wave as light energy, and the nanopillars as the stalks of seagrass, you can picture how the properties of a nanopillar, including its height, thickness, and position next to other nanopillars, might change the distribution of light emerging from the lens.
A 12-inch wafer can hold up to 10,000 metalenses, made using a single semiconductor layer. Metalenz
We can use the ability of a metalens to redirect and change light in a number of ways. We can scatter and project light as a field of infrared dots. Invisible to the eye, these dots are used in many smart devices to measure distance, mapping a room or a face. We can sort light by its polarization (more on that in a moment). But probably the best way to explain how we are using these metasurfaces as a lens is by looking at the most familiar lens application—capturing an image.
The process starts by illuminating a scene with a monochromatic light source—a laser. (While using a metalens to capture a full-color image is conceptually possible, that is still a lab experiment and far from commercialization.) The objects in the scene bounce the light all over the place. Some of this light comes back toward the metalens, which is pointed, pillars out, toward the scene. These returning photons hit the tops of the pillars and transfer their energy into vibrations. The vibrations—called plasmons—travel down the pillars. When that energy reaches the bottom of a pillar, it exits as photons, which can be then captured by an image sensor. Those photons don’t need to have the same properties as those that entered the pillars; we can change these properties by the way we design and distribute the pillars.
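One common way to describe what the pillar array must accomplish for focusing is the textbook hyperbolic phase profile, in which each pillar at radius r delays the wave just enough that every path from the flat surface arrives at the focal point in phase. The sketch below uses illustrative numbers (940-nm near-infrared light, a 2-mm focal length) and is not Metalenz’s actual design procedure:

```python
import math

def target_phase(r_um, focal_um=2000.0, wavelength_um=0.94):
    """Textbook focusing-metasurface phase profile (hyperbolic):
    phi(r) = (2*pi / wavelength) * (f - sqrt(r^2 + f^2)),
    wrapped into [0, 2*pi). A pillar at radius r is sized to impart
    this phase so all paths reach the focus in step."""
    phi = (2 * math.pi / wavelength_um) * (focal_um - math.hypot(r_um, focal_um))
    return phi % (2 * math.pi)

# phase each pillar must impart at a few radial positions
for r in (0.0, 100.0, 200.0):
    print(f"r = {r:5.1f} um -> phase = {target_phase(r):.3f} rad")
```

A design tool evaluates this target phase at every pillar site, then picks a pillar geometry from a precomputed library that imparts that phase, which is the kind of calculation the article’s design-software discussion refers to.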
Researchers around the world have been exploring the concept of metalenses for decades.
In a paper published in 1968 in Soviet Physics Uspekhi, Russian physicist Victor Veselago put the idea of metamaterials on the map, hypothesizing that nothing precluded the existence of a material that exhibits a negative index of refraction. Such a material would interact with light very differently than a normal material would. Where light ordinarily bounces off a material in the form of reflection, it would pass around this type of metamaterial like water going around a boulder in a stream.
It took until 2000 before the theory of metamaterials was implemented in the lab. That year, Richard A. Shelby and colleagues at the University of California, San Diego, demonstrated a negative refractive index metamaterial in the microwave region. They published the discovery in 2001 in Science, causing a stir as people imagined invisibility cloaks. (While intriguing to ponder, creating such a device would require precisely manufacturing and assembling thousands of metasurfaces.)
The first metalens to create high-quality images with visible light came out of Federico Capasso’s lab at Harvard. Demonstrated in 2016, with a description of the research published in Science, the technology immediately drew interest from smartphone manufacturers. Harvard then licensed the foundational intellectual property exclusively to Metalenz, where it has now been commercialized.
A single metalens [right] can replace a stack of traditional lenses [left], simplifying manufacturing and dramatically reducing the size of a lens package. Metalenz
Since then, researchers at Columbia University, Caltech, and the University of Washington, working with Tsinghua University, in Beijing, have also demonstrated the technology.
Much of the development work Metalenz does involves fine-tuning the way the devices are designed. In order to translate image features like resolution into nanoscale patterns, we developed tools to help calculate the way light waves interact with materials. We then convert those calculations into design files that can be used with standard semiconductor processing equipment.
The first wave of optical metasurfaces to make their way into mobile imaging systems have on the order of 10 million silicon pillars on a single flat surface only a few millimeters square, with each pillar precisely tuned to accept the correct phase of light, a painstaking process even with the help of advanced software. Future generations of the metalens won’t necessarily have more pillars, but they’ll likely have more sophisticated geometries, like sloped edges or asymmetric shapes.
Metalenz came out of stealth mode in 2021, announcing that it was getting ready to scale up production of its devices. Manufacturing was not as big a challenge as design, because the company makes metasurfaces using the same materials, lithography, and etching processes that are used to make integrated circuits.
In fact, metalenses are less demanding to manufacture than even a very simple microchip, because they require only a single lithography mask, as opposed to the dozens required by a microprocessor. That makes them less prone to defects and less expensive. Moreover, the size of the features on an optical metasurface is measured in hundreds of nanometers, whereas foundries are accustomed to making chips with features smaller than 10 nanometers.
And, unlike plastic lenses, metalenses can be made in the same foundries that produce the other chips destined for smartphones. This means they could be directly integrated with the CMOS camera chips on site rather than having to be shipped to another location, which reduces their costs still further.
A single meta-optic, in combination with an array of laser emitters, can be used to create the type of high-contrast, near-infrared dot or line pattern used in 3D sensing. Metalenz
In 2022, STMicroelectronics announced the integration of Metalenz’s metasurface technology into its FlightSense modules. Previous generations of FlightSense have been used in more than 150 models of smartphones, drones, robots, and vehicles to detect distance. Such products with Metalenz technology inside are already in consumer hands, though STMicroelectronics isn’t releasing specifics.
Indeed, distance sensing is a sweet spot for the current generation of metalens technology, which operates at near-infrared wavelengths. For this application, many consumer electronics companies use a time-of-flight system, which has two optical components: one that transmits light and one that receives it. The transmitting optics are more complicated. These involve multiple lenses that collect light from a laser and transform it to parallel light waves—or, as optical engineers call it, a collimated beam. These also require a diffraction grating that turns the collimated beam into a field of dots. A single metalens can replace all of those transmitting and receiving optics, saving real estate within the device as well as reducing cost.
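The distance math behind time-of-flight sensing is simple: time the light pulse’s round trip and halve the path. A minimal sketch, with an illustrative pulse timing:

```python
C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s):
    """A time-of-flight sensor times a light pulse's round trip;
    the target distance is half the round-trip path."""
    return C_M_PER_S * round_trip_s / 2

# a pulse returning after ~6.67 nanoseconds came from roughly 1 meter away
print(tof_distance_m(6.671e-9))  # ~ 1.0 (meters)
```

The engineering challenge is not this arithmetic but timing nanosecond-scale returns and projecting enough well-shaped light, which is where the metalens earns its keep.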
And a metalens does the field-of-dots job better in difficult lighting conditions because it can illuminate a broader area using less power than a traditional lens, directing more of the light to where you want it.
Conventional imaging systems, at best, gather information only about the spatial position of objects and their color and brightness. But light carries another type of information: the orientation of the light waves as they travel through space—that is, the polarization. Future metalens applications will take advantage of the technology’s ability to detect polarized light.
The polarization of light reflecting off an object conveys all sorts of information about that object, including surface texture, type of surface material, and how deeply light penetrates the material before bouncing back to the sensor. Prior to the development of the metalens, a machine vision system would require complex optomechanical subsystems to gather polarization information. These typically rotate a polarizer—structured like a fence to allow only waves oriented at a certain angle to pass through—in front of a sensor. They then monitor how the angle of rotation impacts the amount of light hitting the sensor.
Metasurface optics are capable of capturing polarization information from light, revealing a material’s characteristics and providing depth information. Metalenz
A metalens, by contrast, doesn’t need a fence; all the incoming light comes through. Then it can be redirected to specific regions of the image sensor based on its polarization state, using a single optical element. If, for example, light is polarized along the X axis, the nanostructures of the metasurface will direct the light to one section of the image sensor. However, if it is polarized at 45 degrees to the X axis, the light will be directed to a different section. Then software can reconstruct the image with information about all its polarization states.
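As a rough sketch of that software-reconstruction step: if the metasurface routes light from four linear-polarization channels (0°, 45°, 90°, 135°) to separate regions of the sensor, the degree and angle of linear polarization at each pixel follow from the linear Stokes parameters. The function names and channel layout here are illustrative, not Metalenz’s actual pipeline:

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarization-channel images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical
    s2 = i45 - i135                      # +45° vs. -45°
    return s0, s1, s2

def polarization_map(i0, i45, i90, i135, eps=1e-12):
    """Per-pixel degree (0..1) and angle (radians) of linear polarization."""
    s0, s1, s2 = linear_stokes(i0, i45, i90, i135)
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp
```

A high degree of linear polarization is one cue, for instance, that distinguishes specular glare off a smooth surface from diffuse reflection off skin.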
Using this technology, we can replace previously large and expensive laboratory equipment with tiny polarization-analysis devices incorporated into smartphones, cars, and even augmented-reality glasses. A smartphone-based polarimeter could let you determine whether a stone in a ring is diamond or glass, whether concrete is cured or needs more time, or whether an expensive hockey stick is worth buying or contains micro cracks. Miniaturized polarimeters could be used to determine whether a bridge’s support beam is at risk of failure, whether a patch on the road is black ice or just wet, or if a patch of green is really a bush or a painted surface being used to hide a tank. These devices could also help enable spoof-proof facial identification, since light reflects off a 2D photo of a person at different angles than a 3D face and from a silicone mask differently than it does from skin. Handheld polarizers could improve remote medical diagnostics—for example, polarization is used in oncology to examine tissue changes.
But as with the smartphone itself, it’s hard to predict where metalenses will take us. When Apple introduced the iPhone in 2007, no one could have predicted that it would spawn companies like Uber. In the same way, perhaps the most exciting applications of metalenses are ones we can’t even imagine yet.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
LATTICE is an undergrad project from Caltech that’s developing a modular robotic transportation system for the lunar surface that uses autonomous rovers to set up a sort of cable car system to haul things like ice out of deep craters to someplace more useful. The prototype is fully functional, and pretty cool to watch in action.
We’re told that the team will be targeting a full system demonstration deploying across a “crater” on Earth this time next year. As to what those quotes around “crater” mean, your guess is as good as mine.
[ Caltech ]
Happy World Cocktail Day from Flexiv!
[ Flexiv ]
Here’s what Optimus has been up to lately.
As per usual, the robot is moderately interesting, but it’s probably best to mostly just ignore Musk.
[ Tesla ]
The INSECT tarsus-inspired compliant robotic grippER with soft adhesive pads (INSECTER) uses only one single electric actuator with a cable-driven mechanism. It can be easily controlled to perform a gripping motion akin to an insect tarsus (i.e., wrapping around the object) for handling various objects.
[ Paper ]
Congratulations to ANYbotics on their $50 million Series B!
And from 10 years ago (!) at ICRA 2013, here is a video I took of StarlETH, one of ANYmal’s ancestors.
[ ANYbotics ]
In this video we present results from the recent field-testing campaign of the DigiForest project at Evo, Finland. The DigiForest project started in September 2022 and runs up to February 2026. It brings together diverse partners working on aerial robots, walking robots, autonomous lightweight harvesters, as well as forestry decision makers and commercial companies with the goal to create a full data pipeline for digitized forestry.
[ DigiForest ]
The Robotics and Perception Group at UZH will be presenting some new work on agile autonomous high-speed flight through cluttered environments at ICRA 2023.
[ Paper ]
Robots who lift together, stay together.
[ Sanctuary AI ]
The next CYBATHLON competition, which will take place again in 2024, breaks down barriers between the public, people with disabilities, researchers and technology developers. The initiative promotes the inclusion and participation of people with disabilities and improves assistance systems for use in everyday life by the end users.
[ Cybathlon ]
Russia’s invasion of Ukraine in 2022 put Ukrainian communications in a literal jam: Just before the invasion, Russian hackers knocked out Viasat satellite ground receivers across Europe. Then entrepreneur Elon Musk swept in to offer access to Starlink, SpaceX’s growing network of low Earth orbit (LEO) communications satellites. Musk soon reported that Starlink was suffering from jamming attacks and software countermeasures.
In March, the U.S. Department of Defense (DOD) concluded that Russia was still trying to jam Starlink, according to documents leaked by U.S. National Guard airman Jack Teixeira and seen by the Washington Post. Ukrainian troops have likewise blamed problems with Starlink on Russian jamming, the website Defense One reports. If Russia is jamming a LEO constellation, it would be a new layer in the silent war in space-ground communications.
“There is really not a lot of information out there on this,” says Brian Weeden, the director of program planning for the Secure World Foundation, a nongovernmental organization that studies space governance. But, Weeden adds, “my sense is that it’s much harder to jam or interfere with Starlink [than with GPS satellites].”
Regardless of their altitude or size, communications satellites transmit more power and therefore require more power to jam than navigational satellites. However, compared with large geostationary satellites, LEO satellites—which orbit Earth at an altitude of 2,000 kilometers or lower—have frequent handovers that “introduce delays and opens up more surface for interference,” says Mark Manulis, a professor of privacy and applied cryptography at the University of the Federal Armed Forces’ Cyber Defense Research Institute (CODE) in Munich, Germany.
Security and communications researchers are working on defenses and countermeasures, mostly behind closed doors, but it is possible to infer from a few publications and open-source research how unprepared many LEO satellites are for direct attacks and some of the defenses that future LEO satellites may need.
For years, both private companies and government agencies have been planning LEO constellations, each numbering thousands of satellites. The DOD, for example, has been designing its own LEO satellite network to supplement its more traditional geostationary constellations for more than a decade and has already begun issuing contracts for the constellation’s construction. University research groups are also launching tiny, standardized cube satellites (CubeSats) into LEO for research and demonstration purposes. This proliferation of satellite constellations coincides with the emergence of off-the-shelf components and software-defined radio—both of which make the satellites more affordable, but perhaps less secure.
Russia’s defense agencies commissioned a system called Tobol that’s designed to counter jammers that might interfere with their own satellites, reported journalist and author Bart Hendrickx. That implies that Russia either can transmit jamming signals up to satellites, or suspects that adversaries can.
Many of the agencies and organizations launching the latest generation of low-cost satellites haven’t addressed the biggest security issues they face, researchers wrote in one review of LEO security in 2022. That may be because one of the temptations of LEO is the ability of relatively cheap new hardware to do smaller jobs.
“Satellites are becoming smaller. They are very purpose-specific,” says Ijaz Ahmad, a telecoms security researcher at the VTT Technical Research Centre in Espoo, Finland. “They have less resources for computing, processing, and also memory.” Less computing power means fewer encryption capabilities, as well as less ability to detect and respond to jamming or other active interference.
The rise of software-defined radio (SDR) has also made it easier to get hardware to accomplish new things, including allowing small satellites to cover many frequency bands. “When you make it programmable, you provide that hardware with some sort of remote connectivity so you can program it. But if the security side is overlooked, it will have severe consequences,” Ahmad says.
“At the moment there are no good standards focused on communications for LEO satellites.”
—Mark Manulis, professor of privacy and applied cryptography, University of the Federal Armed Forces
Among those consequences are organized criminal groups hacking and extorting satellite operators or selling information they have captured.
One response to the risks of software-defined radio and the fact that modern low-cost satellites require firmware updates is to include some simple physical security. Starlink did not respond to requests for comments on its security, but multiple independent researchers said they doubt today’s commercial satellites match military-grade satellite security countermeasures, or even meet the same standards as terrestrial communications networks. Of course, physical security can be defeated with a physical attack, and state actors have satellites capable of changing their orbits and grappling with, and thus perhaps physically hacking, communications satellites, the Secure World Foundation stated in an April report.
Despite that vulnerability, LEO satellites do bring certain advantages in a conflict: There are more of them, and they cost less per satellite. Attacking or destroying a satellite “might have been useful against an adversary who only has a few high-value satellites, but if the adversary has hundreds or thousands, then it’s a lot less of an impact,” Weeden says. LEO also offers a new option: sending a message to multiple satellites for later confirmation. That wasn’t possible when only a handful of GEO satellites covered Earth, but it is a way for cooperating transmitters and receivers to ensure that a message gets through intact. According to a 2021 talk by Vijitha Weerackody, a communications engineer at Johns Hopkins University, as few as three LEO satellites may be enough for such cooperation.
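As an illustration of that multi-satellite confirmation idea (not any operator’s actual protocol), a receiver that gets copies of the same message relayed through several independent satellites can accept it only when a strict majority of the copies agree:

```python
from collections import Counter

def confirm_message(copies):
    """Majority vote over message copies relayed by independent satellites.

    Returns the agreed message if a strict majority of relays delivered
    identical bytes; otherwise returns None, flagging possible jamming
    or corruption on some links.
    """
    if not copies:
        return None
    msg, count = Counter(copies).most_common(1)[0]
    return msg if count > len(copies) / 2 else None
```

With three cooperating satellites, a jammer would have to corrupt at least two independent links to block or forge a message.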
Even with cooperative transmission, future LEO constellation designers may need to respond with improved antennas, radio strategies that include spread-spectrum modulation, and both temporal and transform-domain adaptive filtering. These strategies come at a cost in data throughput and complexity. But such measures may still be defeated by a strong enough signal that covers the satellite’s entire bandwidth and saturates its electronics.
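To see why overpowering a spread-spectrum link takes so much jammer energy, here is a minimal direct-sequence sketch: each data bit is multiplied by a long pseudorandom chip sequence, and the receiver correlates against the same sequence, so narrowband interference is averaged down by the spreading gain. The 64-chip code and signal model are illustrative, not any real satellite waveform:

```python
import numpy as np

# 64-chip pseudorandom spreading code, shared by transmitter and receiver.
rng = np.random.default_rng(0)
chips = rng.choice([-1, 1], size=64)

def spread(bits):
    """Spread each data bit across the full chip sequence."""
    symbols = 2 * np.asarray(bits) - 1      # map {0, 1} -> {-1, +1}
    return np.concatenate([s * chips for s in symbols])

def despread(signal):
    """Correlate each chip-length block with the code to recover bits."""
    m = len(chips)
    return [1 if signal[i*m:(i+1)*m] @ chips > 0 else 0
            for i in range(len(signal) // m)]
```

A constant narrowband interferer adds at most a bounded offset to each correlation, while the wanted signal contributes the full spreading gain of 64, so the bits survive; only a signal strong across the whole band overwhelms the correlator.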
“There’s a need to introduce a strong cryptographic layer,” says Manulis. “At the moment there are no good standards focused on communications for LEO satellites. Governments should push for standards in that area relying on cryptography.” The U.S. National Institute of Standards and Technology does have draft guidelines for commercial satellite cybersecurity that satellite operator OneWeb took into account when designing its LEO constellation, says OneWeb principal cloud-security architect Wendy Ng: “Hats off to them, they do a lot of work speaking to different vendors and organizations to make sure they’re doing the right thing.”
OneWeb uses encryption in its control channels, something a surprising number of satellite operators fail to do, says Johannes Willbold, a doctoral student at Ruhr University, in Bochum, Germany. Willbold is presenting his analysis of three research satellites’ security on 22 May 2023 at the IEEE Symposium on Security and Privacy. “A lot of satellites had straight-up no security measures to protect access in the first place,” he says.
Securing the growing constellations of LEO satellites matters to troops in trenches, investors in any space endeavor, anyone traveling into Earth orbit or beyond, and everyone on Earth who uses satellites to navigate or communicate. “I’m hoping there will be more initiatives where we can come together and share best practices and resources,” says OneWeb’s Ng. Willbold, who cofounded an academic workshop on satellite security, is optimistic that there will be: “It’s surprising to me how many people are now in the field, and how many papers they submitted.”
The Jet Propulsion Laboratory’s Ingenuity helicopter is preparing for the 50th flight of its five-flight mission to Mars. Flight 49, which took place last weekend, was its fastest and highest yet—the little helicopter flew 282 meters at an altitude of 16 meters, reaching a top speed of 6.50 meters per second. Not a bad performance for a tech demo that was supposed to be terminated two years ago.
From here, things are only going to get more difficult for Ingenuity. As the Perseverance rover continues its climb up the Jezero crater’s ancient river delta, Ingenuity is trying its best to scout ahead. But the winding hills and valleys make it difficult for the helicopter to communicate with the rover, and through the rover to its team back on Earth. And there isn’t a lot of time or room to spare, because Ingenuity isn’t allowed to fly too close to Perseverance, meaning that if the rover ever catches up to the helicopter, the helicopter may have to be left behind for the rover’s own safety. This high-stakes race between the helicopter scout and the science rover will continue for kilometers.
“Two years in, 10 kilometers flown, and we’re well over an hour now in the skies of Mars.”
—Teddy Tzanetos, NASA
For the Ingenuity team, this new mode of operation was both a challenge and an opportunity, though hardly unfamiliar territory for folks who have managed to keep this 30-day technology demo alive, healthy, and productive for years, all from a couple hundred million kilometers away. IEEE Spectrum spoke with Ingenuity team lead Teddy Tzanetos at JPL last week about whether flying on Mars is ever routine, how they upgraded Ingenuity for its extended mission, and what the helicopter’s success means for the future of airborne exploration and science on Mars.
IEEE Spectrum: Is 50 flights on Mars a milestone for you folks, or are things routine enough now that you’re looking at it as just another flight?
Teddy Tzanetos: It’s hugely meaningful. We’ll come back to the routine question in a second, but it’s very meaningful for all of us. When we hit 10 and then 25 it was big, but 50 is a pretty serious number now that we’re 10 times our initial flight count. Two years in, 10 kilometers flown, and we’re well over an hour now in the skies of Mars. So hitting flight 50, it’s a big thing—we’re probably going to set up a happy hour and have a big party for the team.
Can you talk about some of the new challenges that Ingenuity has been facing as it makes its way up Jezero crater’s river delta along with the Perseverance rover?
Tzanetos: The core of the challenge here is that the paradigm has changed. When you look at the first year of Ingenuity’s extended operations, we were still in the Three Forks area, where the ground was flat. We could get line of sight from the helicopter to the rover from hundreds and hundreds of meters away. Our longest link that we established was 1.2 kilometers—a massive distance.
And then we started to realize that the rover was going to enter the river delta in like six months. It’s going to start climbing up through dozens and dozens of meters of elevation change and passing through ravines, and that’s going to start presenting a telecom issue for us. We knew that it couldn’t be business as usual anymore—if we still wanted to keep this helicopter mission going, not only did we need to change the way we were operating, we also had to change the helicopter itself.
“We owe it to everyone who worked on Ingenuity and everyone who will continue to work on rotorcraft on Mars to try and get everything out of this little spacecraft that we can.”
—Teddy Tzanetos, NASA
This realization culminated in the most challenging flight software upgrade we’ve ever done with Ingenuity, which happened last December. We went into the guts of our algorithms and added two new features. One was the ability to detect and react to landing hazards from the air, which involved handing over a little bit of autonomy back to Ingenuity, with the ability to tell it, “Fly to your terminal waypoint and try and land where we think is good, based off of orbital imagery. But if you have better information from your images than what we humans had here on Earth, and you see a hazard, pick a safer site and land there instead.” So that’s one huge change in what’s happening now. And we need that at the river delta because we’re no longer flying in a parking lot—besides the challenge of the elevation change, the terrain is different as well, with more and larger rocks that Ingenuity needs to avoid.
The second feature that we added was to include information about the terrain to Ingenuity’s navigation filter. When we designed Ingenuity, we assumed we were only going to be deployed on the flat terrain of Three Forks. Therefore, any change in the laser altimeter measurement we could trust to be a real change in the motion of the helicopter, or we could at least filter that into our altitude data. But that’s no longer the case. Now, as Ingenuity flies, if the altimeter sees a big decrease in elevation, that could be because the ground is rising to meet us rather than because we’re moving down. So since December, we’ve been telling Ingenuity about the elevation profile across its intended flight so that it knows what the ground is doing underneath it.
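A minimal sketch of that idea, assuming a hypothetical along-track elevation profile (this is not Ingenuity’s actual navigation-filter interface): adding the terrain’s known elevation under the helicopter to the altimeter’s height-above-ground reading gives height relative to the takeoff point, so a rising floor is no longer mistaken for descent.

```python
import bisect

def height_above_takeoff(along_track_m, altimeter_agl_m, terrain_profile):
    """Combine a laser-altimeter height-above-ground reading with an
    along-track terrain profile to estimate height above the takeoff point.

    terrain_profile: sorted (along_track_distance_m, terrain_elevation_m)
    pairs, with elevations relative to the takeoff point (illustrative
    format, not the flight software's actual data structure).
    """
    xs = [x for x, _ in terrain_profile]
    i = max(bisect.bisect_right(xs, along_track_m) - 1, 0)  # nearest preceding sample
    return altimeter_agl_m + terrain_profile[i][1]
```

If the ground has risen 3 meters at the helicopter’s current position, a 10-meter altimeter reading corresponds to 13 meters above the takeoff elevation rather than an apparent descent.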
Now that both the rover and the helicopter have begun the river delta climb, we’re also paying very close attention to our telecom-link budget maps. You can imagine every hill or rise that could occlude the line of sight between the helicopter antenna and the rover antenna will have a big impact on your telecom link, and we have wonderful maps from orbit where we can pick a potential landing point and propagate our radio-link budget calculation across that point.
We’re trying to plan these flights as aggressively as we can to make sure that we stay ahead of Perseverance. We don’t want to run the risk of having a situation where the rover may need to wait for Ingenuity—that’s not a good thing for anybody. But we also want to provide value for the rover by scouting ahead, and what we hope to do on flight 50 is to get some imagery of the Belva crater, which is this beautiful massive crater to the north of where Ingenuity currently is. We’re going to get perspectives that the rover team would not be able to provide for the science team, and it’s really exciting for us when there are these moments that are uniquely driven by Ingenuity’s capability. We want to go after those, because we want to provide that value while she’s still healthy. While we still can. We owe it to everyone who worked on Ingenuity and everyone who will continue to work on rotorcraft on Mars to try and get everything out of this little spacecraft that we can.
“One of the best hallmarks of technology success is when you don’t realize it, or when it becomes boring. That means the technology is working, and that’s a wonderful feeling.”
—Teddy Tzanetos, NASA
At one point, NASA was very clear that Ingenuity’s mission would come to an end so that Perseverance could move on to focus on its primary mission. But obviously, Ingenuity is still flying, and still keeping up with the rover. Not only that, but we’ve heard from a rover driver how valuable it is to have Ingenuity scouting ahead. With that in mind, as Ingenuity navigates this challenging terrain, will there be any flexibility if something doesn’t go quite right, or will Perseverance just leave the helicopter behind?
Tzanetos: We have to look at the big picture. The most important thing at this point is for Perseverance to collect samples and do science. If you look at everything that needs to be done across all of the rover’s science payloads, every sol [Martian day] is precious. And the helicopter team understands that.
We’re doing our best to become more efficient, and I think that’s a big win that we don’t celebrate enough on the Ingenuity team internally—how much more efficient we are today compared to where we were two years ago. Earlier, you mentioned flying becoming routine. I think the team has succeeded in doing that, and I’m extremely proud of that accomplishment. One of the best hallmarks of technology success is when you don’t realize it, or when it becomes boring. That means the technology is working, and that’s a wonderful feeling.
There’s what’s called a tactical window that we have between the downlink of the last sol’s activity and when we need to uplink activity for the next sol, which is anywhere from five to 10 hours. A certain cadence of activities have to take place during that window, and we need to pass certain checkpoints to get our data uploaded and radiated through the Deep Space Network in time. We’ve worked very, very hard to minimize our footprint on that timeline, while also being reactive so that we can move quickly on any last-minute changes that the rover team needs us to accommodate. We have to get in, fly, and get out.
Anomalies will happen. That’s just the nature of Mars. But when those moments occur, the helicopter and rover teams back each other up. To be clear, no one on the helicopter team wants to cause a delay for the rover. We all want the rover to fulfill its mission, get its samples, and get the science done. If we have a serious anomaly, we’ll have to take that one sol at a time. We’re going to try as hard as we can to make sure we can keep pushing this little baby as far as we can while still accomplishing the core science mission.
NASA’s Ingenuity Mars Helicopter takes off and lands in this video captured on 19 April 2021 by Mastcam-Z, an imager aboard NASA’s Perseverance Mars rover. This video features only the moments of takeoff and the landing—and not footage of the helicopter hovering for about 30 seconds. NASA/JPL-Caltech/ASU/MSSS
How do you balance risk to the helicopter against exploration and science goals, or trying new things like pushing Ingenuity’s flight envelope?
Tzanetos: That’s the fun part! There’s no instruction manual. The way we do it is we have a phone call with the core people on the team, and everyone just shares their opinions. The highest priority for us is getting some good scouting imagery for the scientists and rover drivers—we jump at those opportunities. If we’re flying through a piece of terrain that isn’t particularly interesting, that’s when we start looking at the flight envelope developments, right? With flight 49, we’re going higher than we ever had before and flying faster than we ever have before. That’s not a request from the science community or the rover planners; that’s coming from our own internal team where we’re trying to release capability piece by piece as the flights go on, because every time we get that win, it’s a win for the sample recovery helicopters. So there’s that ever-present pressure to push harder, push faster, push higher. And let’s also get some wonderful scouting data along the way when we can.
What have you learned about flying helicopters on Mars from 50 flights that you would have no idea about if you’d been able to do just five flights?
Tzanetos: Tons of things, since I just talked about flying faster and flying higher, and we’ve now legitimately expanded Ingenuity’s flight envelope. There’s the lifetime argument, which is obvious—this design has lasted much longer than anyone could have expected, even just in terms of parts and workmanship. Each one of Ingenuity’s nearly 1,000 solder joints was soldered by technicians at JPL who have the most blessed, precise hands. We’d designed Ingenuity to fly in springtime on Mars, but during the Martian winter, for more than 200 sols the temperature cycled between 20 °C and –90 °C and back again. Eventually, it got so cold that Ingenuity’s battery would die every night, the heater would stop running, and everything would freeze. That was a massive curveball that we had to contend with, but because of the workmanship of those people, Ingenuity was able to survive.
“We now have a stake in the ground to say, ‘Off-the-shelf works, we can trust these things.’”
—Teddy Tzanetos, NASA
Also, dust. We knew that dust would settle on Ingenuity’s solar panel, but we’ve shown that through the process of flying, there’s some sort of effect that’s helping us to keep our panel clean. It’s difficult to put a finger on exactly what it is—maybe the vibration of flight, or the downwash of air passing over the solar panel and into the rotors, or the oncoming air as we move forward. And it wasn’t just the dust on the panels; we also got dust in our actuators. Last year, Ingenuity weathered a big dust storm, and afterwards when we tried checking our control surfaces, things did not look good. The motor currents were way too high, and we were left scratching our heads, trying to figure out what to do. We didn’t have dust boots around the rotor system simply because we had thought, “We’re only going to be operating for 30 days, we don’t need them.”
Our partners at AeroVironment [who worked with JPL on the Mars helicopter design] had one of the swash plate mechanisms lying around, so they spoke to our geologists to figure out what kinds of dust particles might have gotten blown into the swash plate on Mars. We sent them some simulated Mars dust, and they threw it at the swash plate, and then did an experiment to figure out how many times they needed to cycle it before it started to operate properly. Seven cycles got most of the dust out, so we tried that on Mars, and it worked. So now we have a new tool in our tool belt: We know how to clean ourselves. That’s huge. And we wouldn’t have figured out any of these things had we not gone past five flights.
Looking at the Mars sample return helicopters, how much of their design has been made possible by the fact that Ingenuity has been able to fly this long and answer these questions that you might not have even thought to ask?
Tzanetos: The entire design. I don’t think we’d be talking about sample recovery helicopters if Ingenuity didn’t fly, period, and if it hadn’t survived for as long as it has. You have to keep in mind, Ingenuity is a tech demo. These sample recovery helicopters are a real part of the mission now. If Perseverance has an anomaly in the next decade, these helicopters are the backup—they have to work. And I’m sure that Ingenuity’s two years of extended operations provided the evidence necessary to even start talking about the sample recovery helicopters. Otherwise, it would be crazy to think, “Let’s go from tech demo to part of a class B mission within a year.”
That’s amazing. It must feel really good for you folks to have completely changed what the sample return mission looks like because of how successful Ingenuity has been.
Tzanetos: Absolutely. I personally thought to myself, “Hey, this is great, Ingenuity has been doing a great job, and this will be wonderful data for the next time we send a rotorcraft to Mars.” Which I thought was going to be like 10 years later—I thought that the Mars sample return would happen with a rover, and then maybe after that, we could throw some helicopters on Mars, maybe a hexacopter with some science payloads on it. Never in my wildest dreams did I ever think, while we’re still flying Ingenuity, that we’d be designing the next helicopter mission based on Ingenuity to go to Mars.
More broadly, how has Ingenuity influenced NASA’s approach to robotics?
Tzanetos: From a robotics perspective, I hope one of the long-lasting impacts of Ingenuity is the adoption of commercial off-the-shelf technology into more NASA missions, and other non-NASA missions into space. This was the first time we flew a cellphone processor, not because we loved the idea about using a part that wasn’t radiation hardened but because we were forced to. We needed a high-throughput processor, and the only way to do that and be lightweight enough was to use a cellphone chip. There was a lot of concern about that—we did some initial testing, but given that we were a tech demo, which means high-risk, high reward, we could only do so much. And here we are, two years later, with this Snapdragon Qualcomm processor that’s been running for two years on the surface of Mars, not to mention all the other components like the IMU [inertial measurement unit], the camera, the battery, the solar panels. I think that’s one of the unsung victories of Ingenuity. We now have a stake in the ground to say, “Off-the-shelf works, we can trust these things.” And we can make a stronger argument for the next mission to really enable your engineers and your scientists to have much more technology on board than anything else we’ve sent into space.
Ingenuity will attempt Flight 50 anytime now, with the goal of traveling 300 meters to the other side of a ridge. The landing site may make it difficult to know whether the flight was successful until Perseverance catches up a bit, but we hope to hear the good news within the next few days.
A year has passed since the launch of the ESA’s Rosalind Franklin rover mission was put on hold, but the work has not stopped for the ExoMars teams in Europe.
In this programme, the ESA Web TV crew travel back to Turin, Italy to talk to the teams and watch as new tests are being conducted with the rover’s Earth twin Amalia while the real rover remains carefully stored in an ultra-clean room.
The 15-minute special programme gives an update on what happened since the mission was suspended in 2022 because of the Russian invasion of Ukraine, the plan ahead, the new challenges, the latest deep drilling test and the stringent planetary protection measures in place.
ESA’s Rosalind Franklin rover has unique drilling capabilities and an on-board science laboratory unrivalled by any other mission in development. Its twin rover Amalia was back on its wheels and drilled down 1.7 metres into Martian-like ground in Italy – about 25 times deeper than any other rover has ever attempted on Mars. The rover also collected samples for analysis under the watchful eye of European science teams.
ESA, together with international and industrial partners, is reshaping the ExoMars Rosalind Franklin Mission with new European elements, including a lander, and a target date of 2028 for the trip to Mars.
The newly shaped Rosalind Franklin Mission will recover one of the original objectives of ExoMars – to create an independent European capability to access the surface of Mars with a sophisticated robotic payload.
More information: https://www.esa.int/ExoMars
A powerful trillion-watt laser shot at the sky can generate lightning rods in the air that can guide lightning strikes to keep them from causing havoc, a new study finds.
To date, the most common and effective form of protection against lightning is the lightning rod invented by Benjamin Franklin in 1752. These pointed electrically conductive metal rods intercept lightning strikes and guide their electric current safely to the ground.
However, a key drawback of a conventional lightning rod is that the radius of its area of protection is roughly equal to its height. Since there are practical limits to how tall one can build a lightning rod, this means they may not be useful for protecting large areas, including sensitive infrastructure such as airports, rocket launchpads and nuclear power plants, says study senior author Jean-Pierre Wolf, a physicist at the University of Geneva.
“This is the first demonstration that lightning can be controlled by a laser.”
—Jean-Pierre Wolf, University of Geneva
Scientists first suggested using lasers to generate lightning rods in the air nearly 50 years ago. “The idea is to create a very long lightning rod with the laser,” Wolf says.
In the new study, researchers conducted experiments during the summer of 2021 at the top of Mount Säntis, which, at 2,502 meters above sea level, is the highest mountain in the Alpstein massif of northeastern Switzerland. The laser was activated every time storms were forecast between June and September, with air traffic closed over the area during these tests.
Wolf and his colleagues sought to protect a 124-meter transmitter tower equipped with a traditional lightning rod at the summit belonging to telecommunications provider Swisscom. This tower is struck by lightning about 100 times a year, and scientists had previously equipped it with multiple sensors to analyze these strikes.
Near the tower, the researchers installed a near-infrared laser the size of a large car. It fired pulses each packing about a half-joule of energy and a picosecond (trillionth of a second) long roughly a thousand times a second, with a peak power of a terawatt (trillion watts). (It also shot a visible green beam to help show the laser’s path.)
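As a rough sanity check on the figures quoted above: a half-joule pulse delivered in one picosecond implies a peak power of about half a terawatt (terawatt-class), while the roughly 1,000-pulse-per-second repetition rate puts the average power at only about 500 watts. A minimal back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the laser figures quoted in the article:
# ~0.5 J per pulse, ~1 ps pulse duration, ~1,000 pulses per second.
energy_per_pulse = 0.5    # joules
pulse_duration = 1e-12    # seconds (one picosecond)
repetition_rate = 1_000   # pulses per second

# Peak power: pulse energy spread over the (tiny) pulse duration.
peak_power = energy_per_pulse / pulse_duration      # ~5e11 W, half a terawatt

# Average power: energy delivered per second of operation.
average_power = energy_per_pulse * repetition_rate  # ~500 W

print(f"Peak power:    {peak_power:.1e} W")
print(f"Average power: {average_power:.0f} W")
```

The striking gap between the two numbers is the point of ultrashort pulses: a modest average power, concentrated into picosecond bursts, yields the enormous peak intensities needed to ionize air.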
“Imagine transporting a 10-ton laser to 2,500-meter altitude on a mountain with helicopters, making it run in very harsh conditions, tracking lightning in extreme weather like winds up to 200 kilometers per hour, heavy rain, hail, temperatures varying from -10 degrees to 20 degrees Celsius in the same day, and then, when it works, you get a massive lightning bolt some tens of meters next to you—and you’re so happy,” Wolf says.
The laser pulses can alter the refractive index of the air—the quality of a material that controls how quickly light travels within it. This can make the air behave like a series of lenses.
After crossing this lensing air, the intense, short laser pulses can rapidly ionize and heat air molecules, expelling them from the path of the beam at supersonic speeds. This leaves behind a channel of low-density air for roughly a millisecond. These “filaments” possess high electrical conductivity and can thus serve as lightning rods; they can be up to 100 meters long. The researchers could adjust the laser to create filaments that appear up to a kilometer from the machine.
In experiments, the scientists created filaments above, but near, the tip of the tower’s lightning rod. This essentially boosted the rod’s height by at least 30 meters, extending its area of protection so that lightning would not strike parts of the tower otherwise outside the rod’s shelter, says study lead author Aurélien Houard, a research scientist at ENSTA Paris (École Nationale Supérieure de Techniques Avancées).
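Using the rule of thumb quoted earlier in the article (a rod’s radius of protection is roughly equal to its height), a 30-meter extension of a 124-meter tower noticeably enlarges the protected footprint. Treating the filament as a simple vertical extension of the rod is a simplification for illustration only:

```python
import math

# Rule of thumb from the article: protection radius ~ rod height.
# Treating the laser filament as a straight 30 m extension of the
# tower's rod is an illustrative assumption, not the study's model.
tower_height = 124           # metres, conventional rod alone
extended_height = 124 + 30   # metres, with filament above the tip

area_before = math.pi * tower_height ** 2    # protected ground area, m^2
area_after = math.pi * extended_height ** 2

gain = area_after / area_before
print(f"Protected area gain: {gain:.2f}x")   # roughly a 1.5-fold increase
```

Because the protected area grows with the square of the effective height, even a modest 30-meter extension buys roughly 50 percent more coverage, which is why the team’s longer-term 500-meter goal would be so significant.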
The laser operated for more than six hours during thunderstorms occurring within three kilometers of the tower. The tower was hit by at least 16 lightning flashes, all of which streaked upward.
Four of these flashes occurred while the laser was operating. High-speed camera footage and radio and X-ray detectors showed the laser helped guide the course of these discharges. One of these guided strikes was recorded on camera and revealed it followed the laser path for nearly 60 meters.
During tests carried out on the summit of Mt. Säntis by Jean-Pierre Wolf and Aurélien Houard’s team, the scientists noted that lightning discharges followed laser beams for several dozen meters before reaching the Swisscom telecommunications tower (in red and white). Photo: Xavier Ravinet/UNIGE
“This is the first demonstration that lightning can be controlled by a laser,” Wolf says.
Although lab experiments had suggested that lasers could help guide lightning strikes, field attempts over the past 20 or so years had failed. Wolf, Houard and their colleagues suggest their new work succeeded because the pulse rate of their laser was hundreds of times greater than in prior attempts. The more pulses used, the greater the chance that one intercepts the activity leading up to a lightning flash. In addition, higher pulse rates are likely better at keeping filaments electrically conductive, they added.
Wolf noted their work is not geoengineering research. “We are not modifying the climate,” he says. “We deflect lightning to protect areas.”
In the long term, the scientists would like to use lasers to extend lightning rods by 500 meters. They would also like to run experiments at sites such as airports and rocket launchpads, Wolf notes.
The researchers detailed their findings 16 January in the journal Nature Photonics.
From biking adventures to city breaks, get inspiration for your next break – whether in the UK or further afield – with twice-weekly emails from the Guardian’s travel editors. You’ll also receive handpicked offers from Guardian Holidays.