Did you know that ESA is researching human hibernation for long-distance spaceflight to Mars or beyond?
Hibernating astronauts could be the best way to save mission costs, reduce the size of spacecraft by a third and keep crew healthy on their way to Mars. An ESA-led investigation suggests that human hibernation goes beyond the realm of science fiction and may become a game-changing technique for space travel.
When packing for a return flight to the Red Planet, space engineers account for around two years’ worth of food and water for the crew.
Torpor, the state induced during hibernation, reduces the metabolic rate of an organism. This ‘suspended animation’ is a common mechanism in animals that need to conserve energy.
Reducing the metabolic rate of a crew en route to Mars down to 25% of the normal state would dramatically cut down the amount of supplies and habitat size, making long-duration exploration more feasible.
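To see why the 25% figure matters, here is a back-of-the-envelope estimate of consumables mass. The per-day figure and the assumption that consumables scale linearly with metabolic rate are illustrative simplifications, not ESA numbers:

```python
# Illustrative estimate only; per-day mass and linear scaling are assumptions.
CREW = 4
MISSION_DAYS = 2 * 365          # roughly two years for a round trip, per the article
KG_PER_CREW_DAY = 5.0           # assumed food + water per astronaut per day

baseline = CREW * MISSION_DAYS * KG_PER_CREW_DAY
torpor = baseline * 0.25        # metabolism reduced to 25% of normal

print(f"baseline consumables: {baseline:,.0f} kg")   # 14,600 kg
print(f"with torpor:          {torpor:,.0f} kg")     # 3,650 kg
```

Even with these rough inputs, the saving is on the order of ten tonnes of launch mass before accounting for the smaller habitat the article mentions.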
Therapeutic torpor, the idea of putting humans into a state of hibernation, has been around in hospitals since the 1980s – doctors can induce hypothermia to reduce metabolism during long and complex surgeries. However, induced hypothermia is not an active reduction of energy use and misses most of the advantages of torpor. Studies on hibernation to visit other planets could offer new potential applications for patient care on Earth.
Animals hibernate to survive periods of cold and food or water scarcity, reducing their heart rate, breathing and other vital functions to a fraction of their normal rates, while body temperature lowers close to ambient temperature. Tardigrades, frogs and reptiles are very good at it.
Lower testosterone levels seem to aid long hibernation in mammals; in humans, estrogens strongly regulate energy metabolism.
With the crew at rest for long periods, artificial intelligence will come into play during anomalies and emergencies.
The possibilities of hibernation for medical use are of particular interest to the European research community and could transform how we approach many severe illnesses.
Induced torpor is already used in some medical environments, such as surgical theatres, to replace anesthesia in patients allergic to anesthetic drugs.
Match ID: 0 Score: 85.00 source: www.esa.int age: 49 days qualifiers: 71.43 space travel, 10.71 space travel, 2.86 planets
Spotted a UFO? There’s an App for That Tue, 31 Jan 2023 20:19:41 +0000 Enigma Labs launches a project to crowdsource and quantify data about “unidentified aerial phenomena.” Match ID: 1 Score: 80.00 source: www.wired.com age: 2 days qualifiers: 65.00 nasa, 15.00 aliens
NASA’s Joe Acabá to Serve as the Agency’s Chief Astronaut Thu, 02 Feb 2023 09:58 EST NASA has named veteran astronaut Joe Acabá chief of the Astronaut Office at the agency’s Johnson Space Center. A decorated veteran of multiple spaceflights, as well as a former U.S. Marine and former educator, Acabá is the first person of Hispanic heritage selected to lead the office. Match ID: 2 Score: 65.00 source: www.nasa.gov age: 0 days qualifiers: 65.00 nasa
NASA’s Joe Acaba to Serve as Agency’s Chief Astronaut Thu, 02 Feb 2023 09:27 EST NASA has appointed veteran astronaut Joe Acaba as chief of the Astronaut Office at the agency’s Johnson Space Center in Houston. A decorated veteran of multiple spaceflights, as well as a former U.S. Marine and former educator, Acaba is the first person of Hispanic heritage selected to lead the office. Match ID: 3 Score: 65.00 source: www.nasa.gov age: 0 days qualifiers: 65.00 nasa
ISS Daily Summary Report – 2/01/2023 Wed, 01 Feb 2023 16:00:32 +0000 Payloads: Fluid Dynamics in Space (FLUIDICS): The crew disconnected and stowed the Fluidics HDD and its cable from the CMAU Z-book laptop. The measurement of liquid displacement within a sphere in microgravity relates to a given kinematic representation of a spacecraft’s fuel tank. The FLUIDICS investigation evaluates the Center of Mass (CoM) position regarding a … Match ID: 4 Score: 65.00 source: blogs.nasa.gov age: 1 day qualifiers: 65.00 nasa
ESA’s geology training course PANGAEA has come of age with the publication of a paper that describes the quest for designing the best possible geology training for the next astronauts to walk on the surface of the Moon.
Match ID: 5 Score: 65.00 source: www.esa.int age: 1 day qualifiers: 65.00 nasa
VP Awards Former NASA Astronauts Congressional Space Medal of Honor Tue, 31 Jan 2023 16:03 EST On behalf of President Joe Biden, Vice President Kamala Harris awarded former NASA astronauts Douglas Hurley and Robert Behnken the Congressional Space Medal of Honor Tuesday for their bravery in NASA’s SpaceX Demonstration Mission-2 (Demo-2) to the International Space Station in 2020. Match ID: 6 Score: 65.00 source: www.nasa.gov age: 2 days qualifiers: 65.00 nasa
NASA Spinoffs Bolster Climate Resilience, Improve Medical Care, More Tue, 31 Jan 2023 11:57 EST When it comes to NASA, most people look to the skies as rockets, rovers, and astronauts push the boundaries of space exploration. But the benefits of going above and beyond can be found here on Earth through products and services born from NASA innovation. Match ID: 7 Score: 65.00 source: www.nasa.gov age: 2 days qualifiers: 65.00 nasa
NASA Extends Goddard Logistics, Technical Services Contract Tue, 31 Jan 2023 11:27 EST NASA has awarded a modification to extend the period of performance of the Goddard Logistics and Technical Information II (GLTI II) Services Contract with TRAX International Corporation of Las Vegas. Match ID: 8 Score: 65.00 source: www.nasa.gov age: 2 days qualifiers: 65.00 nasa
ISS Daily Summary Report – 1/31/2023 Tue, 31 Jan 2023 16:00:46 +0000 Payloads: ADvanced Space Experiment Processor (ADSEP): The ADSEP hardware was installed and activated. A Tissue Cassette was inserted for equipment check out. ADSEP is a thermally controlled single-middeck-locker equivalent that accommodates up to three cassette-based experiments that can be independently operated. Its companion hardware consists of a collection of several experiment cassettes, each doubly or … Match ID: 9 Score: 65.00 source: blogs.nasa.gov age: 2 days qualifiers: 65.00 nasa
Former NASA Astronauts to Receive Congressional Space Medal of Honor Mon, 30 Jan 2023 15:13 EST Vice President Kamala Harris will award former NASA astronauts Douglas Hurley and Robert Behnken the Congressional Space Medal of Honor at 4:25 p.m. EST on Tuesday, Jan. 31. Hurley and Behnken will receive the award for bravery in NASA’s SpaceX Demonstration Mission-2 (Demo-2) to the International Space Station in 2020. Match ID: 10 Score: 55.71 source: www.nasa.gov age: 3 days qualifiers: 55.71 nasa
NASA to Air Live Coverage of Spacewalk for Power System Upgrades Mon, 30 Jan 2023 12:31 EST Two astronauts aboard the International Space Station will conduct a spacewalk Thursday, Feb. 2, to continue the installation of hardware for future power system upgrades. The spacewalk is scheduled to begin at 8:15 a.m. EST and last about six and a half hours. Match ID: 11 Score: 55.71 source: www.nasa.gov age: 3 days qualifiers: 55.71 nasa
ISS Daily Summary Report – 1/30/2023 Mon, 30 Jan 2023 16:00:10 +0000 Payloads: Actiwatch-Plus (AWP): The crew connected the AWP devices to a Human Research Facility (HRF) rack Universal Serial Bus (USB) hub to charge them and transfer data for subsequent downlink. The Actiwatch-Plus is a waterproof, non-intrusive, sleep-wake activity monitor worn on the wrist of a crewmember and contains a miniature uniaxial accelerometer that produces a … Match ID: 12 Score: 55.71 source: blogs.nasa.gov age: 3 days qualifiers: 55.71 nasa
Apptronik, a Texas-based robotics company with its roots in the Human Centered Robotics Lab at the University of Texas at Austin, has spent the last few years working toward a practical, general-purpose humanoid robot. By designing its robot (called Apollo) completely from the ground up, including electronics and actuators, Apptronik is hoping that it’ll be able to deliver something affordable, reliable, and broadly useful. But at the moment, the most successful robots are not generalized systems—they’re uni-taskers, robots that can do one specific task very well but more or less nothing else. A general-purpose robot, especially one in a human form factor, would have enormous potential. But the challenge is enormous, too.
So why does Apptronik believe that it has the answer to general-purpose humanoid robots with Apollo? To find out, we spoke with Apptronik’s founders, CEO Jeff Cardenas and CTO Nick Paine.
IEEE Spectrum: Why are you developing a general-purpose robot when the most successful robots in the supply chain focus on specific tasks?
Nick Paine: It’s about our level of ambition. A specialized tool is always going to beat a general tool at one task, but if you’re trying to solve 10 tasks, or 100 tasks, or 1,000 tasks, it’s more logical to put your effort into a single versatile hardware platform with specialized software that solves a myriad of different problems.
How do you know that you’ve reached an inflection point where building a general-purpose commercial humanoid is now realistic, when it wasn’t before?
Paine: There are a number of different things. For one, Moore’s Law has slowed down, but computers are evolving in a way that has helped advance the complexity of algorithms that can be deployed on mobile systems. Also, there are new algorithms that have been developed recently that have enabled advancements in legged locomotion, machine vision, and manipulation. And along with algorithmic improvements, there have been sensing improvements. All of this has influenced the ability to design these types of legged systems for unstructured environments.
Jeff Cardenas: I think it’s taken decades for it to be the right time. After many, many iterations as a company, we’ve gotten to the point where we’ve said, “Okay, we see all the pieces to where we believe we can build a robust, capable, affordable system that can really go out and do work.” It’s still the beginning, but we’re now at an inflection point where there’s demand from the market, and we can get these out into the world.
The reason that I got into robotics is that I was sick of seeing robots just dancing all the time. I really wanted to make robots that could be useful in the world. —Nick Paine, CTO of Apptronik
Why did you need to develop and test 30 different actuators for Apollo, and how did you know that the 30th actuator was the right one?
Paine: The reason for the variety was that we take a first-principles approach to designing robotic systems. The way you control the system really impacts how you design the system, and that goes all the way down to the actuators. A certain type of actuator is not always the silver bullet: Every actuator has its strengths and weaknesses, and we’ve explored that space to understand the limitations of physics to guide us toward the right solutions.
With your focus on making a system that’s affordable, how much are you relying on software to help you minimize hardware costs?
Paine: Some groups have tried masking the deficiencies of cheap, low-quality hardware with software. That’s not at all the approach we’re taking. We are leaning on our experience building these kinds of systems over the years from a first-principles approach. Building from the core requirements for this type of system, we’ve found a solution that hits our performance targets while also being far more mass producible compared to anything we’ve seen in this space previously. We’re really excited about the solution that we’ve found.
How much effort are you putting into software at this stage? How will you teach Apollo to do useful things?
Paine: There are some basic applications that we need to solve for Apollo to be fundamentally useful. It needs to be able to walk around, to use its upper body and its arms to interact with the environment. Those are the core capabilities that we’re working on, and once those are at a certain level of maturity, that’s where we can open up the platform for third-party application developers to build on top of that.
Cardenas: If you look at Willow Garage with the PR2, they had a similar approach, which was to build a solid hardware platform, create a powerful API, and then let others build applications on it. But then you’re really putting your destiny in the hands of other developers. One of the things that we learned from that is if you want to enable that future, you have to prove that initial utility. So what we’re doing is handling the full-stack development on the initial applications, which will be targeting supply chain and logistics.
NASA officials have expressed their interest in Apptronik developing “technology and talent that will sustain us through the Artemis program and looking forward to Mars.”
“In robotics, seeing is believing. You can say whatever you want, but you really have to prove what you can do, and that’s been our focus. We want to show versus tell.” —Jeff Cardenas, CEO of Apptronik
Apptronik plans for the alpha version of Apollo to be ready in March, in time for a sneak peek for a small audience at SXSW. From there, the alpha Apollos will go through pilots as Apptronik collects feedback to develop a beta version that will begin larger deployments. The company expects these programs to lead to a gamma version and full production runs by the end of 2024.
Match ID: 13 Score: 51.43 source: spectrum.ieee.org age: 5 days qualifiers: 37.14 nasa, 14.29 mit
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Simulation-based reinforcement learning approaches are leading the next innovations in legged robot control. However, the resulting control policies are still not applicable on soft and deformable terrains, especially at high speed. To this end, we introduce a versatile and computationally efficient granular media model for reinforcement learning. We applied our techniques to the Raibo robot, a dynamic quadrupedal robot developed in-house. The trained networks demonstrated high-speed locomotion capabilities on deformable terrains.
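For a rough sense of what a “computationally efficient granular media model” might look like structurally, here is a toy resistive-force-style ground-reaction function: a stiffness term on foot penetration depth plus a speed-dependent drag term. The coefficients and the function itself are generic assumptions for illustration, not the model from the work described above:

```python
def granular_reaction(foot_depth: float, foot_speed: float) -> float:
    """Toy granular ground reaction: stiffness on penetration depth plus
    speed-dependent drag, clamped so the ground never pulls the foot down.
    K and D are assumed coefficients, not values from the cited work."""
    K, D = 800.0, 40.0  # N/m-style stiffness and damping, illustrative only
    return max(0.0, K * foot_depth + D * foot_speed)
```

A cheap closed-form reaction like this is the kind of term that can be evaluated millions of times per second inside a reinforcement-learning simulator, which is what makes training on deformable terrain tractable.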
Arm prostheses are becoming smarter, more customized, and more versatile. We’re closer to replicating everyday movements than ever before, but we’re not there yet. Can you do better? Join teams to revolutionize prosthetics and build a world without barriers.
RB-VOGUI is the robot developed for this success story and is mainly responsible for the navigation and collection of high-quality data, which is transferred in real time to the relevant personnel. After the implementation of the fleet of autonomous mobile robots, only one operator is needed to monitor the fleet from a control centre.
This GRASP on Robotics talk is by Frank Dellaert at Georgia Tech: “Factor Graphs for Perception and Action.”
Factor graphs have been very successful in providing a lingua franca in which to phrase robotics perception and navigation problems. In this talk I will revisit some of those successes, also discussed in depth in a recent review article. However, I will focus on our more recent work in the talk, centered on using factor graphs for action. I will discuss our efforts in motion planning, trajectory optimization, optimal control, and model-predictive control, highlighting SCATE, our recent work on collision avoidance for autonomous spacecraft.
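As a concrete illustration of the least-squares view of factor graphs mentioned in the abstract, here is a toy linear-Gaussian pose chain: one prior factor and two odometry factors over three scalar poses, solved with NumPy. The numbers are made up for illustration and are unrelated to the talk’s examples:

```python
import numpy as np

# Each row is one factor in a 1-D pose chain:
#   prior:     x0      = 0.0
#   odometry:  x1 - x0 = 1.0
#   odometry:  x2 - x1 = 1.1
A = np.array([
    [ 1.0,  0.0, 0.0],
    [-1.0,  1.0, 0.0],
    [ 0.0, -1.0, 1.0],
])
b = np.array([0.0, 1.0, 1.1])

# In the linear-Gaussian case, MAP inference on the factor graph is
# exactly linear least squares on the stacked factors.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # estimated poses x0, x1, x2
```

Nonlinear problems (real perception and planning) repeat this solve inside an iterative linearization loop, which is what libraries built around factor graphs automate.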
There’s a handful of robotics companies currently working on what could be called general-purpose humanoid robots. That is, human-size, human-shaped robots with legs for mobility and arms for manipulation that can (or, may one day be able to) perform useful tasks in environments designed primarily for humans. The value proposition is obvious—drop-in replacement of humans for dull, dirty, or dangerous tasks. This sounds a little ominous, but the fact is that people don’t want to be doing the jobs that these robots are intended to do in the short term, and there just aren’t enough people to do these jobs as it is.
We tend to look at claims of commercializable general-purpose humanoid robots with some skepticism, because humanoids are really, really hard. They’re still really hard in a research context, which is usually where things have to get easier before anyone starts thinking about commercialization. There are certainly companies out there doing some amazing work toward practical legged systems, but at this point, “practical” is more about not falling over than it is about performance or cost effectiveness. The overall approach toward solving humanoids in this way tends to be to build something complex and expensive that does what you want, with the goal of cost reduction over time to get it to a point where it’s affordable enough to be a practical solution to a real problem.
Apptronik, based in Austin, Texas, is the latest company to attempt to figure out how to make a practical general-purpose robot. Its approach is to focus on things like cost and reliability from the start, developing (for example) its own actuators from scratch in a way that it can be sure will be cost effective and supply-chain friendly. Apptronik’s goal is to develop a platform that costs well under US $100,000, of which it hopes to deliver a million units by 2030, although the plan is to demonstrate a prototype early this year. Based on what we’ve seen of commercial humanoid robots recently, this seems like a huge challenge. And in part two of this story (to be posted tomorrow), we will be talking in depth to Apptronik’s cofounders to learn more about how they’re going to make general-purpose humanoids happen.
But Apptronik has by no means abandoned its NASA roots. In 2019, NASA had plans for what was essentially going to be a Valkyrie 2, which was to be a ground-up redesign of the Valkyrie platform. As with many of the coolest NASA projects, the potential new humanoid didn’t survive budget prioritization for very long, but even at the time it wasn’t clear to us why NASA wanted to build its own humanoid rather than asking someone else to build one for it considering how much progress we’ve seen with humanoid robots over the last decade. Ultimately, NASA decided to move forward with more of a partnership model, which is where Apptronik fits in—a partnership between Apptronik and NASA will help accelerate commercialization of Apollo.
“We recognize that Apptronik is building a production robot that’s designed for terrestrial use,” says NASA’s Shaun Azimi, who leads the Dexterous Robotics Team at NASA’s Johnson Space Center. “From NASA’s perspective, what we’re aiming to do with this partnership is to encourage the development of technology and talent that will sustain us through the Artemis program and looking forward to Mars.”
Apptronik is positioning Apollo as a high-performance, easy-to-use, and versatile system. It is imagining an “iPhone of robots.”
“Apollo is the robot that we always wanted to build,” says Jeff Cardenas, Apptronik cofounder and CEO. This new humanoid is the culmination of an astonishing amount of R&D, all the way down to the actuator level. “As a company, we’ve built more than 30 unique electric actuators,” Cardenas explains. “You name it, we’ve tried it. Liquid cooling, cable driven, series elastic, parallel elastic, quasi-direct drive…. And we’ve now honed our approach and are applying it to commercial humanoids.”
Apptronik’s emphasis on commercialization gives it a much different perspective on robotics development than you get when focusing on pure research the way that NASA does. To build a commercial product rather than a handful of totally cool but extremely complex bespoke humanoids, you need to consider things like minimizing part count, maximizing maintainability and robustness, and keeping the overall cost manageable. “Our starting point was figuring out what the minimum viable humanoid robot looked like,” explains Apptronik CTO Nick Paine. “Iteration is then necessary to add complexity as needed to solve particular problems.”
This robot is called Astra. Apptronik’s first product, it is an upper body only and (having no legs) is designed for manipulation rather than dynamic locomotion. Astra is force controlled, with series-elastic torque-controlled actuators, giving it the compliance necessary to work in dynamic environments (and particularly around humans). “Astra is pretty unique,” says Paine. “What we were trying to do with the system is to approach and achieve human-level capability in terms of manipulation workspace and payload. This robot taught us a lot about manipulation and actually doing useful work in the world, so that’s why it’s where we wanted to start.”
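The compliance of a series-elastic actuator comes from measuring torque through a spring and closing a control loop on that measurement. A minimal sketch of the idea follows; the spring stiffness and gains are illustrative assumptions, not Apptronik’s parameters:

```python
# Series-elastic actuation sketch: output torque is inferred from spring
# deflection, and a PI loop drives the motor so measured torque tracks a
# desired value. All constants are illustrative assumptions.
K_SPRING = 100.0    # N*m/rad, stiffness of the series spring
KP, KI = 5.0, 1.0   # torque-loop gains

def sea_torque(theta_motor: float, theta_load: float) -> float:
    """Torque transmitted through the series spring, from its deflection."""
    return K_SPRING * (theta_motor - theta_load)

def torque_controller(tau_des: float, tau_meas: float,
                      integral: float, dt: float):
    """One PI step; returns a motor velocity command and the updated integral."""
    err = tau_des - tau_meas
    integral += err * dt
    return KP * err + KI * integral, integral
```

Because the controller regulates torque rather than position, the joint yields when something (or someone) pushes on it, which is the property that makes these actuators safe around humans.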
While Astra is currently out in the world doing pilot projects with clients (mostly in the logistics space), internally Apptronik has moved on to robots with legs. The following video, which Apptronik is sharing publicly for the first time, shows a robot that the company is calling its Quick Development Humanoid, or QDH:
QDH builds on Astra by adding legs, along with a few extra degrees of freedom in the upper body to help with mobility and balance while simplifying the upper body for more basic manipulation capability. It uses only three different types of actuators, and everything (from structure to actuators to electronics to software) has been designed and built by Apptronik. “With QDH, we’re approaching minimum viable product from a usefulness standpoint,” says Paine, “and this is really what’s driving our development, both in software and hardware.”
“What people have done in humanoid robotics is to basically take the same sort of architectures that have been used in industrial robotics and apply those to building what is in essence a multi-degree-of-freedom industrial robot,” adds Cardenas. “We’re thinking of new ways to build these systems, leveraging mass manufacturing techniques to allow us to develop a high-degree-of-freedom robot that’s as affordable as many industrial robots that are out there today.”
Cardenas explains that a major driver for the cost of humanoid robots is the number of different parts, the precision machining of some specific parts, and the resulting time and effort it then takes to put these robots together. As an internal-controls test bed, QDH has helped Apptronik to explore how it can switch to less complex parts and lower the total part count. The plan for Apollo is to not use any high-precision or proprietary components at all, which mitigates many supply-chain issues and will help Apptronik reach its target price point for the robot.
Apollo will be a completely new robot, based around the lessons Apptronik has learned from QDH. It’ll be average human size: about 1.75 meters tall, weighing around 75 kilograms, with the ability to lift 25 kg. It’s designed to operate untethered, either indoors or outdoors. Broadly, Apptronik is positioning Apollo as a high-performance, easy-to-use, and versatile robot that can do a bunch of different things. It is imagining an “iPhone of robots,” where apps can be created for the robot to perform specific tasks. To extend the iPhone metaphor, Apptronik itself will make sure that Apollo can do all of the basics (such as locomotion and manipulation) so that it has fundamental value, but the company sees versatility as the way to get to large-scale deployments and the cost savings that come with them.
“I see the Apollo robot as a spiritual successor to Valkyrie. It’s not Valkyrie 2—Apollo is its own platform, but we’re working with Apptronik to adapt it as much as we can to space use cases.” —Shaun Azimi, NASA Johnson Space Center
The challenge with this app approach is that there’s a critical mass that’s required to get it to work—after all, the primary motivation to develop an iPhone app is that there are a bajillion iPhones out there already. Apptronik is hoping that there are enough basic manipulation tasks in the supply-chain space that Apollo can leverage to scale to that critical-mass point. “This is a huge opportunity where the tasks that you need a robot to do are pretty straightforward,” Cardenas tells us. “Picking single items, moving things with two hands, and other manipulation tasks where industrial automation only gets you to a certain point. These companies have a huge labor challenge—they’re missing labor across every part of their business.”
While Apptronik’s goal is for Apollo to be autonomous, in the short to medium term, its approach will be hybrid autonomy, with a human overseeing first a few and eventually a lot of Apollos with the ability to step in and provide direct guidance through teleoperation when necessary. “That’s really where there’s a lot of business opportunity,” says Paine. Cardenas agrees. “I came into this thinking that we’d need to make Rosie the robot before we could have a successful commercial product. But I think the bar is much lower than that. There are fairly simple tasks that we can enter the market with, and then as we mature our controls and software, we can graduate to more complicated tasks.”
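The hybrid-autonomy pattern described above, autonomous by default with a human stepping in via teleoperation, can be sketched as a simple supervisory policy. The mode names, confidence threshold, and takeover signal below are hypothetical illustrations, not Apptronik’s software:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    TELEOP = auto()

def supervise(mode: Mode, confidence: float, operator_takeover: bool) -> Mode:
    """Hypothetical supervisory policy for hybrid autonomy."""
    if operator_takeover:
        return Mode.TELEOP                 # human request always wins
    if mode is Mode.AUTONOMOUS and confidence < 0.5:
        return Mode.TELEOP                 # robot is unsure: hand off
    if mode is Mode.TELEOP and confidence > 0.7:
        return Mode.AUTONOMOUS             # hysteresis: resume only when confident
    return mode
```

The hysteresis gap (0.5 vs 0.7 here) is a common design choice to keep the system from oscillating between modes near the threshold; the actual thresholds would be tuned per task.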
Apptronik is still keeping details about Apollo’s design under wraps, for now. We were shown renderings of the robot, but Apptronik is understandably hesitant to make those public, since the design of the robot may change. It does have a firm date for unveiling Apollo for the first time: SXSW, which takes place in Austin in March.
Match ID: 15 Score: 38.57 source: spectrum.ieee.org age: 6 days qualifiers: 27.86 nasa, 10.71 mit
Oklahoma Students to Hear from NASA Astronaut Aboard Space Station Fri, 27 Jan 2023 12:33 EST Students from Choctaw Nation Head Start, Jones Academy Elementary, and seven area public schools in Durant, Oklahoma, will have an opportunity this week to hear from a NASA astronaut aboard the International Space Station. Match ID: 16 Score: 27.86 source: www.nasa.gov age: 6 days qualifiers: 27.86 nasa
ISS Daily Summary Report – 1/27/2023 Fri, 27 Jan 2023 16:00:59 +0000 Payloads: Muscle Tone in Space (Myotones): Crewmember and operator performed the skin marking and Myotones Device measurements. The Myotones investigation observes the biochemical properties of muscles (e.g. muscle tone, stiffness, elasticity) during long-term exposure to the spaceflight environment. Results from this investigation can provide a better understanding of the principles of human resting muscle tone. This could … Match ID: 17 Score: 27.86 source: blogs.nasa.gov age: 6 days qualifiers: 27.86 nasa
NASA Launches Spanish-Language Aeronautics Webpages Fri, 27 Jan 2023 10:55 EST As part of its effort to provide more resources and information to new audiences, NASA has launched new webpages featuring aeronautics information in Spanish. The pages aim to make aeronautics content more accessible to the Spanish-speaking community. Match ID: 18 Score: 27.86 source: www.nasa.gov age: 6 days qualifiers: 27.86 nasa
NASA Launches Aeronautics Spanish-Language Webpages Fri, 27 Jan 2023 09:47 EST As part of its effort to provide more resources and information to new audiences, NASA has launched new webpages featuring aeronautics information in Spanish. The webpages aim to make aeronautics content more accessible to the Spanish-language community. Match ID: 19 Score: 27.86 source: www.nasa.gov age: 6 days qualifiers: 27.86 nasa
The British high commissioner to Australia, Vicki Treadell, says Britain is “relaxed” about the prospect of not having King Charles III on the $5 note.
The Reserve Bank announced on Thursday that it will not replace the image of Queen Elizabeth on the $5 note with King Charles, but with an image that honours the culture and history of First Australians.
John Barilaro has been given 24 hours to explain his office’s involvement in a $100m bushfire recovery grants scheme before the New South Wales opposition leader, Chris Minns, refers the matter to the corruption watchdog.
There will be new faces aplenty while the title contenders face fascinating tests
As the eighth permanent manager of the calamitous Farhad Moshiri era takes his bow at Everton, accompanied by further protests against a board who may or may not be present at Goodison Park, the sight of Mikel Arteta patrolling the opposition technical area and leading a stylish Arsenal team with designs on the title will bring fresh torment for the home crowd. There really is an endless supply at present. The former Everton midfielder was hired by Arsenal when Moshiri opted for the Hollywood appointment of Carlo Ancelotti in December 2019. Some at Goodison felt the then Manchester City assistant coach would be a more suitable fit but, had they got their way, there is little chance Arteta would have been allowed to transform Everton. Arsenal possess the patience, recruitment strategy and organisational expertise needed for a manager to flourish. All are absent at Everton, as Sean Dyche may have already discovered. Andy Hunter
Everton v Arsenal, Saturday 12.30pm (all times GMT)
Tottenham v Manchester City, Sunday 4.30pm
Newcastle v West Ham, Saturday 5.30pm
Wolves v Liverpool, Saturday 3pm
Manchester United v Crystal Palace, Saturday 3pm
Continue reading... Match ID: 27 Score: 25.00 source: www.theguardian.com age: 0 days qualifiers: 25.00 mit
New head coach has done away with ‘finishers’ term yet he still needs to square the Marcus Smith and Owen Farrell circle
If this is a new chapter of English rugby – as Maro Itoje told us more than once – then the first page of it was less notable for what it said, more so for what it did not. Steve Borthwick’s first team announcement as England head coach passed without a single mention of the World Cup, now just seven months away. That “finishers” has been done away with – Borthwick preferring the more prosaic “replacements” – only underlined how the Eddie Jones era has been consigned to history.
Borthwick’s team selection is eye-catching in so far as Manu Tuilagi has been dropped – Joe Marchant has been preferred – Ben Curry and Ollie Hassell-Collins have been handed first starts and first caps respectively and most significantly, Marcus Smith and Owen Farrell continue as the 10-12 axis. Borthwick may be wielding a new broom but just like his predecessor, that is a circle he has to square.
Continue reading... Match ID: 32 Score: 25.00 source: www.theguardian.com age: 0 days qualifiers: 25.00 mit
Match ID: 33 Score: 25.00 source: www.reddit.com age: 0 days qualifiers: 25.00 mit
Trump declines to say if he’ll support eventual 2024 GOP presidential nominee Thu, 2 Feb 2023 16:39:59 EST Former President Donald Trump is refusing to say whether he’ll commit to backing the 2024 GOP presidential candidate if it’s not him, injecting uncertainty into Republican hopes of reclaiming the White House next year. Match ID: 34 Score: 25.00 source: www.washingtonpost.com age: 0 days qualifiers: 25.00 mit
The House voted along party lines as it ousted Democratic representative Ilhan Omar from the Foreign Affairs Committee while Democrats defended her.
The vote was divided 218 to 211, CBS reports. One GOP member voted “present.”
“This debate today, it’s about who gets to be an American? What opinions do we get to have, do we have to have to be counted as American?… That is what this debate is about, Madam Speaker. There is this idea that you are suspect if you are an immigrant. Or if you are from a certain part of the world, of a certain skin tone or a Muslim.
Well, I am Muslim. I am an immigrant, and interestingly, from Africa. Is anyone surprised that I’m being targeted? Is anyone surprised that I am somehow deemed unworthy to speak about American foreign policy?” she said.
“A blatant double standard is being applied here. Something just doesn’t add up. And what is the difference between Rep. Omar and these members? Could it be the way that she looks? Could it be her religious practices?” he said.
Netanyahu’s new far-right government has made changing the legal system a centrepiece of its legislative agenda and, despite mounting public criticism, has charged ahead with steps to weaken the supreme court and reduce judicial oversight of politicians’ policymaking.
Minnesota Democrat accuses Republicans of trying to silence her because she is Muslim and vows to ‘advocate for a better world’
Republicans voted to expel Minnesota Democrat Ilhan Omar from the House foreign affairs committee on Thursday as punishment for her past remarks on Israel. Democrats objected, saying the move was about revenge after Democrats removed far-right extremists in the last Congress.
A majority of 218 GOP lawmakers supported Omar’s expulsion from the committee, which is tasked with handling legislation and holding hearings affecting America’s diplomatic relations. One Republican lawmaker voted “present”.
Some observers say the metaverse is an expanded set of digital worlds that will grow out of the online environments that people are already familiar with, such as enhancing the extended-reality (XR) experience used in online gaming. The world they imagine is expected to offer new features and capabilities that accelerate society’s digital transformation and enhance sustainability by reducing the need for people to travel to meetings and perform resource-intensive activities.
Others say the metaverse will usher in a decentralized ecosystem that empowers users to create digital assets of their own choosing and engage in digital commerce. Because the architecture would be open, decentralized, and without gatekeepers, this version is expected to democratize the Internet by making it transparent, accessible, and interoperable to everyone.
However the metaverse evolves, one thing is certain: It has tremendous potential to fundamentally transform the ways we work, learn, play, and live. But there will be issues to deal with along the way.
That is why the IEEE Standards Association (IEEE SA) is working to help define, develop, and deploy the technologies, applications, and governance practices needed to help turn metaverse concepts into practical realities, and to drive new markets.
Technical and societal challenges
The technical and societal challenges that come with designing and building metaverse environments include:
Better user interfaces.
Lower system latency.
More tightly integrated, interoperable XR technologies.
Better 3D modeling and volumetric video rendering.
Improved ways to acquire, render, store, and protect geospatial data.
Lower power consumption.
Interacting with the Internet.
Consensus is needed to address the wide variety of views held on technosocial issues such as user identity, credentialing, privacy, openness, ethics, accessibility, and user safety.
New technical standards
IEEE SA recently formed its metaverse standards committee, the first committee of a major worldwide standards development organization designed to advance metaverse-related technologies and applications. It will do so by developing and maintaining technical standards, creating recommended practices, and writing guides.
In addition, technical standards and activities are incubating new ideas on topics that are expected to be of great interest to industry.
The IEEE P7016 Standard for Ethically Aligned Design and Operation of Metaverse Systems will provide a high-level overview of the technosocial aspects of metaverse systems and specify an ethical assessment methodology for use in their design and operation. The standard will include guidance to developers on how to adapt their processes to prioritize ethically aligned design. In addition, IEEE P7016 will help define ethical system content on accessibility and functional safety. Also included will be guidance on how to promote ethically aligned values and robust public engagement in the research, implementation, and proliferation of metaverse systems to increase human well-being and environmental sustainability.
Two industry-focused initiatives
IEEE SA also recently launched two Industry Connections activities specifically for the metaverse. The IC program facilitates collaboration and consensus-building among participants. It also provides IEEE resources to help produce standards proposals, white papers and other reports, events, software tools, and Web services.
The Decentralized Metaverse Initiative aims to develop and provide guidelines for implementing decentralized metaverses, which not only could capitalize on intellectual property and virtual assets in decentralized ways but also could benefit from other potential features of decentralized architectures.
The Persistent Computing for Metaverse Initiative will focus on the technologies needed to build, operate, and upgrade metaverse experiences. It includes computation, storage, communications, data structures, and artificial intelligence. This group will facilitate discussions and collaborations on persistent computing, steer and give advice on research and development, and provide technical guidelines and references.
Webinars with experts
The IEEE Metaverse Congress offers a series of webinars that provide a comprehensive, global view from experts who are involved with the technology’s development, design, and governance.
Rep. Ilhan Omar kicked off committee after party-line vote in House Thu, 2 Feb 2023 13:00:19 EST House Republicans remove the Minnesota Democrat after she made what House Speaker Kevin McCarthy recently described as “repeated antisemitic and anti-American remarks.”
Six-Word Sci-Fi: Stories Written by You Thu, 02 Feb 2023 17:30:00 +0000 Here's this month's prompt, how to submit, and an illustrated archive of past favorites.
Smith’s family have called the new band ‘extremely offensive’. Martin Bramah, a founding former member of the original Fall lineup, defends his new outfit
Last week, it was announced that five former members of revered Manchester post-punk group the Fall would be releasing an album under the name House of All – without the original band’s late frontman and only constant member Mark E Smith, who died in January 2018 aged 60. Almost immediately, they incurred the wrath of the famously irascible singer’s family, who strongly disavowed the project.
“The family and estate of Mark E Smith in no way endorse or wish to be associated with House of All,” they wrote in a statement. “Furthermore, we do not like or permit the use of Mark E Smith’s name, images and/or band name to be used in any kind of exploiting way. Not only do we find this extremely offensive and very misleading to the wider audience of Mark E Smith and the Fall, but it also causes us much distress and discomfort.”
The Bank of England raised interest rates for a tenth consecutive time on Thursday from 3.5% to 4%, but said inflation may have peaked and a recession in the UK would be shorter and shallower than previously feared.
Piling more pressure on mortgage payers and businesses struggling to pay off their loans, the Bank’s monetary policy committee (MPC) said the 0.5-percentage point rise was needed after private sector wages had risen more than the central bank’s previous forecasts.
Enter the hunter satellites preparing for space war Thu, 02 Feb 2023 15:17:46 +0000 Startup plans to launch prototype pursuit satellites on a SpaceX flight later this year.
Your kindness has been a wonderful gift, writes advice columnist Eleanor Gordon-Smith. Recognise this is not an ordinary friendship and they will show their gratitude in time
The daughter of our friends has leukaemia. It was initially diagnosed 18 months ago but she recently had a relapse. When she was first diagnosed my husband and I did everything we could to help where we could – by cooking meals and running errands, because naturally they were afraid of catching Covid while shopping.
When their daughter’s chemotherapy came to an end last year, we were so relieved and happy for them, but it was as if they disappeared. They would only get in touch when they needed something and seemed to spend most of their free time with the families of their children’s friends. We understood that they were trying to make up for the time lost while their daughter was in treatment but we couldn’t help feeling used. Then the relapse happened.
Powerhouses, levelling up, taking back control: the Labour leader says he is done with such ‘sticking plaster’ solutions
Alex Niven is a Newcastle University lecturer and author
When the post-punk hero Mark E Smith intoned, more than 40 years ago, “the north will rise again”, he probably didn’t have in mind a constitutional commission chaired by Gordon Brown. But a lot has changed since 1980. Now even Britain’s political class in Westminster seems to have realised that the gaping socioeconomic divide between England’s north and south can be tackled only with root and branch reform.
Labour’s Report of the Commission on the UK’s Future is a genuinely radical set of proposals for combating regional inequality, and contrasts sharply with the Tories’ rather pitiful strategy for levelling up. Replacing the House of Lords with a democratic alternative, devolving control of transport, infrastructure and housing to local government, plans for moving large numbers of civil servants outside London – all these ideas suggest that Keir Starmer might just be serious about thoroughgoing reform of the most regionally unbalanced advanced economy in the world.
Hacker “Capture the Flag” has been a mainstay at hacker gatherings since the mid-1990s. It’s like the outdoor game, but played on computer networks. Teams of hackers defend their own computers while attacking other teams’. It’s a controlled setting for what computer hackers do in real life: finding and fixing vulnerabilities in their own systems and exploiting them in others’. It’s the software vulnerability lifecycle.
These days, dozens of teams from around the world compete in weekend-long marathon events. People train for months. Winning is a big deal. If you’re into this sort of thing, it’s pretty much the most fun you can possibly have on the Internet without committing multiple felonies...
The energy industry is turning waste from dairy farms into renewable natural gas – but will it actually reduce emissions?
On an early August afternoon at Pinnacle Dairy, a farm located near the middle of California’s long Central Valley, 1,300 Jersey cows idle in the shade of open-air barns. Above them whir fans the size of satellites, circulating a breeze as the temperature pushes 100F (38C). Underfoot, a wet layer of feces emits a thick stench that hangs in the air. Just a tad unpleasant, the smell represents a potential goldmine.
The energy industry is transforming mounds of manure into a lucrative “carbon negative fuel” capable of powering everything from municipal buses to cargo trucks. To do so, it’s turning to dairy farms, which offer a reliable, long-term supply of the material. Pinnacle is just one of hundreds across the state that have recently sold the rights to their manure to energy producers.
A year ago, most teachers had never heard of the ex-kickboxer and social media influencer. Now, his toxic machismo is the talk of the playground – and the staffroom
Daniel is 10. He likes football, Fifa, the gaming website Poki, coding and basketball. Last year, he asked his dad if he had ever heard of Andrew Tate. “I hadn’t,” admits his father, Nick, who went away, did some research and was horrified at what he found.
Today, it seems as if virtually every parent in Britain has heard of the ex-kickboxer, social media influencer and self-professed misogynist, whose videos have been watched millions of times and whose recent arrest in Romania on suspicion of human trafficking, rape and forming an organised crime group to exploit women has kept him in the headlines.
In 2021, a security guard in Spain stormed into his workplace and shot four people. He was caught, badly injured, and a trial was set – but his victims would never get to see him punished
At 11.09am on 14 December 2021, a man wearing a black baseball cap and a long auburn wig rang the bell at the Securitas offices in the Spanish city of Tarragona. It was a poor disguise, and when he entered the reception area on the first floor, staff quickly recognised Marin Eugen Sabau, a burly 45-year-old security guard who had been on sick leave for the previous six months.
Securitas is one of the world’s biggest security companies, with 345,000 employees worldwide, but this local office was nothing fancy – grey floor tiles, white laminated furniture, corporate advertising on the walls. “We help make your world a safer place,” read one slogan. In the cluttered main office, Luisa Rico, a 58-year-old junior manager with cropped silver hair and green eyes, was printing out documents. She recognised Sabau’s voice but was not alarmed that he had dropped by unexpectedly. He sounded calm as he talked to a colleague in the reception area. She did not know he was carrying a pistol, or that he planned to shoot her.
Disabled woman offered £10k for energy bills but refuses Thu, 02 Feb 2023 03:06:27 GMT Anne Vivian-Smith is disabled and struggles to pay her bills - her story prompted a generous donation from one reader.
Ukraine launches corruption probes ahead of E.U. summit Wed, 1 Feb 2023 17:17:35 EST The premises of a billionaire oligarch and a former interior minister were searched, while several officials in the Defense Ministry were put under investigation.
Layoffs Broke Big Tech’s Elite College Hiring Pipeline Wed, 01 Feb 2023 12:00:00 +0000 Students from top schools used to waltz from Silicon Valley internships into lucrative jobs. Now, some are reconsidering their options.
Leaked tax records suggest subsidiaries of international gas field contractors continued to make millions after the coup
In the two years since a murderous junta launched a coup in Myanmar, some of the world’s biggest oil and gas service companies continued to make millions of dollars from operations that have helped prop up the military regime, tax documents seen by the Guardian suggest.
US oil services giant Halliburton’s Singapore-based subsidiary Myanmar Energy Services reported pre-tax profits of $6.3m in Myanmar in the year to September 2021, which includes eight months while the junta was in power.
The Yangon branch of Houston-headquartered oil services company Baker Hughes reported pre-tax profits of $2.64m in the country in the six months to March 2022.
US firm Diamond Offshore Drilling reported $37m in fees to the Myanmar tax authority during the year to September 2021 and another $24.2m from then until March 2022.
Schlumberger Logelco (Yangon Branch), the Panama-based subsidiary of the US-listed world’s largest oilfield services company, earned revenues of $51.7m in the year to September 2021 in Myanmar and as late as September 2022 was owed $200,000 in service fees from the junta’s energy ministry.
IEEE Life Fellow Vinton “Vint” Cerf, widely known as the “Father of the Internet,” is the recipient of the 2023 IEEE Medal of Honor. He is being recognized “for co-creating the Internet architecture and providing sustained leadership in its phenomenal growth in becoming society’s critical infrastructure.”
While working as a program manager at the U.S. Defense Advanced Research Projects Agency (DARPA) Information Processing Techniques Office in 1974, Cerf and IEEE Life Fellow Robert Kahn designed the Transmission Control Protocol and the Internet Protocol. TCP manages data packets sent over the Internet, making sure they don’t get lost, are received in the proper order, and are reassembled at their destination correctly. IP manages the addressing and forwarding of data to and from its proper destinations. Together they make up the Internet’s core architecture and enable computers to connect and exchange traffic.
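The ordering and reassembly guarantees TCP provides are visible from any socket API. A minimal Python sketch (purely illustrative, unrelated to Cerf and Kahn’s original implementation): the sender writes several small chunks over a loopback connection, the receiver reads in even tinier pieces, and the byte stream still arrives intact and in order.

```python
import socket
import threading

def tcp_roundtrip(host="127.0.0.1"):
    """Send several chunks over a loopback TCP connection and return what
    the receiver reassembles. TCP delivers the bytes in order regardless
    of how sends and receives are sized."""
    srv = socket.create_server((host, 0))      # port 0: OS picks a free port
    port = srv.getsockname()[1]
    received = []

    def server():
        conn, _ = srv.accept()
        buf = b""
        while True:
            data = conn.recv(3)                # deliberately tiny reads
            if not data:                       # empty read: peer closed
                break
            buf += data
        conn.close()
        received.append(buf)

    t = threading.Thread(target=server)
    t.start()

    cli = socket.create_connection((host, port))
    for chunk in (b"packets ", b"arrive ", b"in order"):
        cli.sendall(chunk)                     # send sizes are invisible to the app
    cli.close()                                # signals end-of-stream
    t.join()
    srv.close()
    return received[0]
```

Calling `tcp_roundtrip()` returns `b"packets arrive in order"`: the application sees a continuous byte stream, while the packetization, loss recovery, and reordering happen underneath.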
“Cerf’s tireless commitment to the Internet’s evolution, improvement, oversight, and evangelism throughout its history has made an indelible impact on the world,” said one of the endorsers of the award. “It is largely due to his efforts that we even have the Internet, which has changed the way society lives.
“The Internet has enabled a large part of the world to receive instant access to news, brought us closer to friends and loved ones, and made it easier to purchase products online,” the endorser said. “It’s improved access to education and scientific discourse, made smartphones useful, and opened the door for social media, cloud computing, video conferencing, and streaming. Cerf also saw early on the importance of decentralized control, with no one company or government completely in charge.”
Since 2005, Cerf has been vice president and chief Internet evangelist at Google in Reston, Va., spreading the word about adopting the Internet in service to public good. He is responsible for identifying new technologies and enabling policies that support the development of advanced, Internet-based products and services.
Enhancing the World Wide Web
Cerf left DARPA in 1982 to join Microwave Communications Inc. (now part of WorldCom), headquartered in Washington, D.C., as vice president of its digital information services division. A year later, he led the development of MCI Mail, the first commercial email service on the Internet.
In 1986 he left the company to become vice president of the newly formed Corporation for National Research Initiatives, also in Reston. He worked alongside Kahn at the not-for-profit organization, developing digital libraries, gigabit speed networks, and knowledge robots (mobile software agents used in computer networks).
He returned to MCI in 1994 and served as a senior vice president for 11 years before joining Google.
“It is largely due to Cerf’s efforts that we even have the Internet.”
Together with Kahn, Cerf founded the nonprofit Internet Society in 1992. The organization helps set technical standards, develops Internet infrastructure, and helps lawmakers set policy.
Cerf served as its president from 1992 to 1995 and was chairman of the board of the Internet Corp. for Assigned Names and Numbers from 2000 to 2007. ICANN works to ensure a stable, secure, and interoperable Internet by managing the assignment of unique IP addresses and domain names. It also maintains tables of registered parameters needed for the protocol standards developed by the Internet Engineering Task Force.
Exclusive: Rami Ranger criticised over comments about Pakistani journalists
A Conservative peer has apologised and withdrawn comments that were criticised for being “racially charged”, as a second referral about his conduct was made to the House of Lords standards watchdog.
Rami Ranger, a major Conservative party donor, admitted that remarks unearthed by the Guardian that he made in a letter regarding Pakistani journalists and a later TV interview about grooming and drug dealing had “caused offence”.
Two weeks on from the death of government critic John Williams Ntwali, police have failed to answer questions over the alleged road accident in which they say he was killed
Calls are growing for an investigation into the apparent accidental death two weeks ago of a prominent Rwandan journalist and government critic.
John Williams Ntwali, a regular critic of the authorities, was found dead on 18 January. According to reported police accounts, he was killed when a speeding vehicle rammed a motorcycle on which he was riding pillion in the capital, Kigali. A US senate committee said he had been “silenced”. Human rights organisations have joined other activists in raising doubts about the cause of death of the 44-year-old editor of The Chronicles newspaper.
Patricia Highsmith’s New York Years Mon, 30 Jan 2023 20:35:56 +0000 Disdain for conformity and a complicated hunger for money are insistent motifs across Highsmith’s early notebooks and diaries.
Patti Smith Remembers Tom Verlaine Mon, 30 Jan 2023 18:45:05 +0000 Patti Smith remembers her friend, who possessed the child’s gift of transforming a drop of water into a poem that somehow begat music.
On 20 April 1939, David Sarnoff, president of the Radio Corporation of America, addressed a small crowd outside the RCA pavilion at the New York World’s Fair. “Today we are on the eve of launching a new industry, based on imagination, on scientific research and accomplishment,” he proclaimed. That industry was television.
RCA president David Sarnoff’s speech at the 1939 World’s Fair was broadcast live.
Sarnoff’s speech was unusual for the United States at the time simply because it was the first news event there to be broadcast live on television. Although television technology had been in development for decades, and the BBC had been airing live programs in the United Kingdom since 1929, competing technologies and licensing disputes kept the U.S. television market from taking off. With the World’s Fair and its theme of the World of Tomorrow, Sarnoff aimed to change that. Ten days after Sarnoff’s speech, the National Broadcasting Company (NBC), a fully owned subsidiary of RCA, began a regular slate of television programming, beginning with President Franklin Delano Roosevelt’s speech officially opening the fair.
RCA’s Phantom Teleceiver was the TV of tomorrow
The architecture of RCA’s pavilion at the fair was a nod to the company’s history. Designed by Skidmore & Owings, it was shaped like a radio vacuum tube. But the inside held a vision of the future.
Entering the pavilion, fairgoers encountered the Phantom Teleceiver, RCA’s latest technological wonder. This special model of the TRK-12 television receiver, which today we would call a television set or simply a TV, was housed in a cabinet constructed from DuPont’s new clear plastic, Lucite. The transparent case allowed visitors to inspect the inner workings from all sides.
An unusual aspect of the TRK-12 was its vertically positioned cathode-ray tube, which projected the image upward onto a 30.5-centimeter (12-inch) mirror on the underside of the cabinet lid. Industrial designer John Vassos, who was responsible for creating the shape of RCA’s televisions, found the size of that era’s tubes to be a unique challenge. Had the CRT been positioned horizontally, the television cabinet would have pushed out almost a meter into the room. As it was, the set was a heavyweight, standing 102 cm tall and weighing more than 91 kilograms. The image in the mirror was the reverse of that projected by the CRT, but Vassos must have decided it wasn’t a deal breaker.
According to art historian Danielle Shapiro, the author of John Vassos: Industrial Designer for Modern Life, Vassos drew on the modernist principles of streamlining to design the cabinetry for the TRK-12. In addition to contending with the size of the tube, he had to find a way to dissipate its extreme heat. He chose to integrate vents throughout the cabinet, creating a louver as a design motif. Production sets (meaning all the ones not made out of Lucite for the fair) were crafted from different shades and patterns of walnut with stripes of walnut veneer, so the overall look was of an elegant wooden box.
The Lucite-encased TRK-12 was introduced at the 1939 World’s Fair.RCA
(If you want to see the original World’s Fair TV, it now resides at the MZTV Museum of Television, in Toronto. A clever replica, built by the Early Television Museum with an LCD screen instead of a vintage cathode-ray tube, is at the ACMI in Melbourne, Australia.)
The TRK-12 wasn’t just a TV. It was the first multimedia center. The cabinet housed the television as well as a three-band, all-wave radio and a Victrola switch to attach an optional phonograph, the sound from which would play through the radio speaker. A fidelity selector knob allowed users to switch easily among the different entertainment options, and a single knob controlled the power and volume for all settings. On the left-hand side of the console were two radio knobs (range selector and tuning control), and on the right were three dual-control knobs for the television (vertical and horizontal hold; station selection and fine tuning; and contrast and brightness).
In 1939, TV was still so novel that the owner’s manual for the TRK-12 devoted a section to explaining “How You Receive Television Pictures.”
Although the home user could select any of five different television stations and fiddle with the picture quality, a bold-faced warning in the owner’s manual cautioned that only a competent television technician should install the receiver because it had the ability to produce high voltages and electrical shocks. TV was then so novel that the manual devoted a section to explaining “How You Receive Television Pictures”: “Television reception follows the laws governing high frequency wave transmission and reception. Television waves act in many respects like light waves.” So long as you knew how light waves behaved, you were good.
In addition to designing the television sets for the fair, Vassos created two exhibits to help new users envision how these machines could fit into their homes. When David Sarnoff gave his dedication speech, for example, only a few hundred people were able to watch it live simply because so few people owned TV sets. Shapiro argues that Vassos was one of the earliest modern designers to focus on the user experience and try to alleviate the anxiety and frenzy caused by the urban environment. His design for the Radio Living Room of Today blended the latest RCA technology, including a facsimile machine, with contemporary furnishings.
In 1940, Vassos added the Radio Living Room of Tomorrow. This exhibit, dubbed the Musicorner, included dimmable fluorescent lights to allow for ideal television-watching conditions. Foreshadowing cassette recorders and CD burners was a device for recording and producing phonographs. Tasteful modular cabinets concealed the television and radio receivers, not unlike some style trends today.
RCA designer John Vassos’s stylish Musicorner room incorporated cutting-edge technology for watching TV and recording phonographs. Archives of American Art
Each day, thousands of visitors to the RCA pavilion encountered television, often for the first time, and watched programming on 13 TRK-12 receivers. But if television really was going to be the future, RCA had to convince consumers to buy sets. Throughout the fair’s 18-month run, the company arranged to have four models of television receivers, all designed by Vassos, available for sale at various department stores in the New York metropolitan region.
The smallest of these was the TT-5 tabletop television, which only provided a picture. It plugged into an existing radio to receive sound. The TT-5 was considered the “everyman’s version” and had a starting price of $199 ($4,300 today). Next biggest was the TRK-5, then the TRK-9, and finally the TRK-12, which sold for $600 (nearly $13,000 today). Considering that the list price of a modest new automobile in 1939 was $700 and the average annual income was $1,368, even the everyman’s television remained beyond the reach of most families.
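The “today” figures quoted above are consumer-price-index adjustments. A quick sketch of the arithmetic (the roughly 21.6× CPI ratio from 1939 to the early 2020s is our assumption, and we round to the nearest $100, which the article’s figures appear to do):

```python
def adjust_1939_price(price_1939, cpi_ratio=21.6):
    """Convert a 1939 dollar price to today's dollars using an assumed
    CPI ratio (~21.6x, 1939 to the early 2020s), rounded to the
    nearest $100."""
    return round(price_1939 * cpi_ratio / 100) * 100

# The TT-5 at $199 and the TRK-12 at $600:
tt5 = adjust_1939_price(199)    # about $4,300, matching the article
trk12 = adjust_1939_price(600)  # about $13,000, matching the article
```

The same ratio puts the $700 automobile and the $1,368 average annual income at roughly $15,000 and $30,000, which is why even the “everyman’s” set was out of reach for most families.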
Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.
An abridged version of this article appears in the February 2023 print issue as “Yesterday’s TV of Tomorrow.”
ISS Daily Summary Report – 1/26/2023 Thu, 26 Jan 2023 16:00:15 +0000 Payloads: BioNutrients-1: BioNutrients-1 Production Packs were hydrated, incubated, and agitated. BioNutrients demonstrates a technology that enables on-demand production of human nutrients during long-duration space missions. The process uses engineered microbes, like yeast, to generate carotenoids from an edible media to supplement potential vitamin losses from food that is stored for very long periods. Specially designed …
The mission to return martian samples to Earth will see a European 2.5-metre-long robotic arm pick up tubes filled with precious soil from Mars and transfer them to a rocket for a historic interplanetary delivery.
The Sample Transfer Arm is conceived to be autonomous, highly reliable and robust. The robot can perform a large range of movements with seven degrees of freedom, assisted by two cameras and a myriad of sensors. It features a gripper – akin to a hand – that can capture and handle the sample tubes at different angles.
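Seven degrees of freedom is one more than the six needed to place a gripper at an arbitrary position and orientation, and that extra joint is what gives the arm its dexterity: the same end-effector pose can be reached through many different joint configurations. A toy planar sketch (purely illustrative, not ESA’s actual kinematics) shows two distinct seven-joint configurations producing an identical end-effector pose:

```python
import math

def planar_fk(joint_angles, link_lengths):
    """Forward kinematics of a planar serial arm of revolute joints:
    accumulate each joint angle into the running heading, then step
    along each link. Returns the end-effector pose (x, y, heading)."""
    x = y = heading = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y, heading

links = [1.0] * 7  # seven unit-length links, one per joint

# Two different "elbow" configurations (the last four joints held straight)...
pose_a = planar_fk([0.5, -1.0, 0.5, 0, 0, 0, 0], links)
pose_b = planar_fk([-0.5, 1.0, -0.5, 0, 0, 0, 0], links)
# ...that reach the same end-effector pose: kinematic redundancy at work.
```

This redundancy is what lets a real seven-degree-of-freedom arm keep its gripper on target while steering its elbow around obstacles.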
The robotic arm will land on Mars to retrieve the sample tubes NASA’s Perseverance rover is currently collecting from the surface. Able to “see”, “feel” and make autonomous decisions, the arm has the dexterity to extract the tubes from the rover, pick them up from the martian ground, insert them into a container and close the lid before lifting off from Mars.
ESA’s Earth Return Orbiter (ERO) will rendezvous with the container filled with martian samples and bring the material back to Earth.
The joint endeavour between NASA and ESA aims to bring martian samples back to the best labs on our planet by 2033.
What could you do with an extra limb? Consider a surgeon performing a delicate operation, one that needs her expertise and steady hands—all three of them. As her two biological hands manipulate surgical instruments, a third robotic limb that’s attached to her torso plays a supporting role. Or picture a construction worker who is thankful for his extra robotic hand as it braces the heavy beam he’s fastening into place with his other two hands. Imagine wearing an exoskeleton that would let you handle multiple objects simultaneously, like Spider-Man’s Doctor Octopus. Or contemplate the out-there music a composer could write for a pianist who has 12 fingers to spread across the keyboard.
We think that extra robotic limbs could be a new form of human augmentation, improving people’s abilities on tasks they can already perform as well as expanding their ability to do things they simply cannot do with their natural human bodies. If humans could easily add and control a third arm, or a third leg, or a few more fingers, they would likely use them in tasks and performances that went beyond the scenarios mentioned here, discovering new behaviors that we can’t yet even imagine.
Levels of human augmentation
Robotic limbs have come a long way in recent decades, and some are already used by people to enhance their abilities. Most are operated via a joystick or other hand controls. For example, that’s how workers on manufacturing lines wield mechanical limbs that hold and manipulate components of a product. Similarly, surgeons who perform robotic surgery sit at a console across the room from the patient. While the surgical robot may have four arms tipped with different tools, the surgeon’s hands can control only two of them at a time. Could we give these surgeons the ability to control four tools simultaneously?
Robotic limbs are also used by people who have amputations or paralysis. That includes people in powered wheelchairs
controlling a robotic arm with the chair’s joystick and those who are missing limbs controlling a prosthetic by the actions of their remaining muscles. But a truly mind-controlled prosthesis is a rarity.
If humans could easily add and control a third arm, they would likely use them in new behaviors that we can’t yet even imagine.
The pioneers in brain-controlled prosthetics are people with
tetraplegia, who are often paralyzed from the neck down. Some of these people have boldly volunteered for clinical trials of brain implants that enable them to control a robotic limb by thought alone, issuing mental commands that cause a robot arm to lift a drink to their lips or help with other tasks of daily life. These systems fall under the category of brain-machine interfaces (BMI). Other volunteers have used BMI technologies to control computer cursors, enabling them to type out messages, browse the Internet, and more. But most of these BMI systems require brain surgery to insert the neural implant and include hardware that protrudes from the skull, making them suitable only for use in the lab.
Augmentation of the human body can be thought of as having three levels. The first level increases an existing characteristic, in the way that, say, a powered exoskeleton can
give the wearer super strength. The second level gives a person a new degree of freedom, such as the ability to move a third arm or a sixth finger, but at a cost—if the extra appendage is controlled by a foot pedal, for example, the user sacrifices normal mobility of the foot to operate the control system. The third level of augmentation, and the least mature technologically, gives a user an extra degree of freedom without taking mobility away from any other body part. Such a system would allow people to use their bodies normally by harnessing some unused neural signals to control the robotic limb. That’s the level that we’re exploring in our research.
Deciphering electrical signals from muscles
Third-level human augmentation can be achieved with invasive BMI implants, but for everyday use, we need a noninvasive way to pick up brain commands from outside the skull. For many research groups, that means relying on tried-and-true
electroencephalography (EEG) technology, which uses scalp electrodes to pick up brain signals. Our groups are working on that approach, but we are also exploring another method: using electromyography (EMG) signals produced by muscles. We’ve spent more than a decade investigating how EMG electrodes on the skin’s surface can detect electrical signals from the muscles that we can then decode to reveal the commands sent by spinal neurons.
Electrical signals are the language of the nervous system. Throughout the brain and the peripheral nerves, a neuron “fires” when a certain voltage—some tens of millivolts—builds up within the cell and causes an action potential to travel down its axon, releasing neurotransmitters at junctions, or synapses, with other neurons, and potentially triggering those neurons to fire in turn. When such electrical pulses are generated by a motor neuron in the spinal cord, they travel along an axon that reaches all the way to the target muscle, where they cross special synapses to individual muscle fibers and cause them to contract. We can record these electrical signals, which encode the user’s intentions, and use them for a variety of control purposes.
How the Neural Signals Are Decoded
A training module [orange] takes an initial batch of EMG signals read by the electrode array [left], determines how to extract signals of individual neurons, and summarizes the process mathematically as a separation matrix and other parameters. With these tools, the real-time decoding module [green] can efficiently extract individual neurons’ sequences of spikes, or “spike trains” [right], from an ongoing stream of EMG signals.
Deciphering the individual neural signals based on what can be read by surface EMG, however, is not a simple task. A typical muscle receives signals from hundreds of spinal neurons. Moreover, each axon branches at the muscle and may connect with a hundred or more individual muscle fibers distributed throughout the muscle. A surface EMG electrode picks up a sampling of this cacophony of pulses.
A breakthrough in noninvasive neural interfaces came with the discovery in 2010 that the signals picked up by high-density EMG, in which tens to hundreds of electrodes are fastened to the skin,
can be disentangled, providing information about the commands sent by individual motor neurons in the spine. Such information had previously been obtained only with invasive electrodes in muscles or nerves. Our high-density surface electrodes provide good sampling over multiple locations, enabling us to identify and decode the activity of a relatively large proportion of the spinal motor neurons involved in a task. And we can now do it in real time, which suggests that we can develop noninvasive BMI systems based on signals from the spinal cord.
A typical muscle receives signals from hundreds of spinal neurons.
The current version of our system consists of two parts: a training module and a real-time decoding module. To begin, with the EMG electrode grid attached to their skin, the user performs gentle muscle contractions, and we feed the recorded EMG signals into the training module. This module performs the difficult task of identifying the individual motor neuron pulses (also called spikes) that make up the EMG signals. The module analyzes how the EMG signals and the inferred neural spikes are related, which it summarizes in a set of parameters that can then be used with a much simpler mathematical prescription to translate the EMG signals into sequences of spikes from individual neurons.
With these parameters in hand, the decoding module can take new EMG signals and extract the individual motor neuron activity in real time. The training module requires a lot of computation and would be too slow to perform real-time control itself, but it usually has to be run only once each time the EMG electrode grid is fixed in place on a user. By contrast, the decoding algorithm is very efficient, with latencies as low as a few milliseconds, which bodes well for possible self-contained wearable BMI systems. We validated the accuracy of our system by comparing its results with signals obtained concurrently by two invasive EMG electrodes inserted into the user’s muscle.
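The two-stage pipeline described here — an expensive training step that learns a separation matrix and related parameters, then a cheap real-time step that applies them — can be sketched in simplified form. The instantaneous linear unmixing and peak-picking below are illustrative stand-ins (real EMG decomposition must handle a convolutive mixture and is considerably more sophisticated); the function name and threshold are hypothetical.

```python
import numpy as np

def decode_spikes(emg_window, separation_matrix, threshold=0.5):
    """Unmix a multichannel EMG window into per-neuron source traces,
    then peak-pick each trace to recover spike times.

    emg_window:        (n_channels, n_samples) surface-EMG array
    separation_matrix: (n_sources, n_channels) matrix learned offline
    Returns one array of spike sample-indices per motor neuron.
    """
    sources = separation_matrix @ emg_window
    spike_trains = []
    for trace in sources:
        trace = trace / (np.max(np.abs(trace)) + 1e-12)  # scale to [-1, 1]
        above = trace > threshold
        # local maxima: greater than the left neighbor, >= the right one
        local_max = np.r_[False,
                          (trace[1:-1] > trace[:-2]) & (trace[1:-1] >= trace[2:]),
                          False]
        spike_trains.append(np.flatnonzero(above & local_max))
    return spike_trains
```

The design mirrors the article's split: all the hard numerical work lives in estimating `separation_matrix` offline, so the per-sample cost at run time is one small matrix multiply plus thresholding — consistent with the few-millisecond latencies reported.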
Exploiting extra bandwidth in neural signals
Developing this real-time method to extract signals from spinal motor neurons was the key to our present work on controlling extra robotic limbs. While studying these neural signals, we noticed that they have, essentially, extra bandwidth. The low-frequency part of the signal (below about 7 hertz) is converted into muscular force, but the signal also has components at higher frequencies, such as those in the beta band at 13 to 30 Hz, which are too high to control a muscle and seem to go unused. We don’t know why the spinal neurons send these higher-frequency signals; perhaps the redundancy is a buffer in case of new conditions that require adaptation. Whatever the reason, humans evolved a nervous system in which the signal that comes out of the spinal cord has much richer information than is needed to command a muscle.
That discovery set us thinking about what could be done with the spare frequencies. In particular, we wondered if we could take that extraneous neural information and use it to control a robotic limb. But we didn’t know if people would be able to voluntarily control this part of the signal separately from the part they used to control their muscles. So we designed an experiment to find out.
Neural Control Demonstrated
A volunteer exploits unused neural bandwidth to direct the motion of a cursor on the screen in front of her. Neural signals pass from her brain, through spinal neurons, to the muscle in her shin, where they are read by an electromyography (EMG) electrode array on her leg and deciphered in real time. These signals include low-frequency components [blue] that control muscle contractions, higher frequencies [beta band, yellow] with no known biological purpose, and noise [gray]. Chris Philpot; Source: M. Bräcklein et al., Journal of Neural Engineering
In our first proof-of-concept experiment, volunteers tried to use their spare neural capacity to control computer cursors. The setup was simple, though the neural mechanism and the algorithms involved were sophisticated. Each volunteer sat in front of a screen, and we placed an EMG system on their leg, with 64 electrodes in a 4-by-10-centimeter patch stuck to their shin over the
tibialis anterior muscle, which flexes the foot upward when it contracts. The tibialis has been a workhorse for our experiments: It occupies a large area close to the skin, and its muscle fibers are oriented along the leg, which together make it ideal for decoding the activity of spinal motor neurons that innervate it.
These are some results from the experiment in which low- and high-frequency neural signals, respectively, controlled horizontal and vertical motion of a computer cursor. Colored ellipses (with plus signs at centers) show the target areas. The top three diagrams show the trajectories (each one starting at the lower left) achieved for each target across three trials by one user. At bottom, dots indicate the positions achieved across many trials and users. Colored crosses mark the mean positions and the range of results for each target. Source: M. Bräcklein et al., Journal of Neural Engineering
We asked our volunteers to steadily contract the tibialis, essentially holding it tense, and throughout the experiment we looked at the variations within the extracted neural signals. We separated these signals into the low frequencies that controlled the muscle contraction and spare frequencies at about 20 Hz in the beta band, and we linked these two components respectively to the horizontal and vertical control of a cursor on a computer screen. We asked the volunteers to try to move the cursor around the screen, reaching all parts of the space, but we didn’t, and indeed couldn’t, explain to them how to do that. They had to rely on the visual feedback of the cursor’s position and let their brains figure out how to make it move.
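The band-splitting step can be sketched with standard digital filters. The cutoff frequencies come from the article; the sampling rate, filter order, and function name are illustrative assumptions, not details of the authors' actual system.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def split_control_channels(neural_drive, fs=2048.0):
    """Split a composite neural-drive signal into the two cursor axes:
    the low-frequency band (below ~7 Hz) that actually drives muscle
    force, and the beta band (13-30 Hz) that otherwise goes unused.
    """
    b_lo, a_lo = butter(4, 7.0, btype="lowpass", fs=fs)
    b_bp, a_bp = butter(4, [13.0, 30.0], btype="bandpass", fs=fs)
    horizontal = filtfilt(b_lo, a_lo, neural_drive)  # muscle-force band
    vertical = filtfilt(b_bp, a_bp, neural_drive)    # spare beta band
    return horizontal, vertical
```

Because the two bands barely overlap, the user can in principle drive each axis independently — which is exactly what the volunteers had to learn to do by watching the cursor.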
Remarkably, without knowing exactly what they were doing, these volunteers mastered the task within minutes, zipping the cursor around the screen, albeit shakily. Beginning with one neural command signal—contract the tibialis anterior muscle—they were learning to develop a second signal to control the computer cursor’s vertical motion, independently from the muscle control (which directed the cursor’s horizontal motion). We were surprised and excited by how easily they achieved this big first step toward finding a neural control channel separate from natural motor tasks. But we also saw that the control was not accurate enough for practical use. Our next step will be to see if more accurate signals can be obtained and if people can use them to control a robotic limb while also performing independent natural movements.
We are also interested in understanding more about how the brain performs feats like the cursor control. In a recent study using a variation of the cursor task, we concurrently used EEG to see what was happening in the user’s brain, particularly in the area associated with the voluntary control of movements. We were excited to discover that the changes happening to the extra beta-band neural signals arriving at the muscles were tightly related to similar changes at the brain level. As mentioned, the beta neural signals remain something of a mystery since they play no known role in controlling muscles, and it isn’t even clear where they originate. Our result suggests that our volunteers were learning to modulate brain activity that was sent down to the muscles as beta signals. This important finding is helping us unravel the potential mechanisms behind these beta signals.
Meanwhile, at Imperial College London we have set up a system for testing these new technologies with extra robotic limbs, which we call the
MUlti-limb Virtual Environment, or MUVE. Among other capabilities, MUVE will enable users to work with as many as four lightweight wearable robotic arms in scenarios simulated by virtual reality. We plan to make the system open for use by other researchers worldwide.
Next steps in human augmentation
Connecting our control technology to a robotic arm or other external device is a natural next step, and we’re actively pursuing that goal. The real challenge, however, will not be attaching the hardware, but rather identifying multiple sources of control that are accurate enough to perform complex and precise actions with the robotic body parts.
We are also investigating how the technology will affect the neural processes of the people who use it. For example, what will happen after someone has six months of experience using an extra robotic arm? Would the natural plasticity of the brain enable them to adapt and gain a more intuitive kind of control? A person born with six-fingered hands can have
fully developed brain regions dedicated to controlling the extra digits, leading to exceptional abilities of manipulation. Could a user of our system develop comparable dexterity over time? We’re also wondering how much cognitive load will be involved in controlling an extra limb. If people can direct such a limb only when they’re focusing intently on it in a lab setting, this technology may not be useful. However, if a user can casually employ an extra hand while doing an everyday task like making a sandwich, then that would mean the technology is suited for routine use.
Whatever the reason, humans evolved a nervous system in which the signal that comes out of the spinal cord has much richer information than is needed to command a muscle.
Other research groups are pursuing the same neuroscience questions. Some are experimenting with control mechanisms involving either scalp-based EEG or neural implants, while others are working on muscle signals. It is early days for movement augmentation, and researchers around the world have just begun to address the most fundamental questions of this emerging field.
Two practical questions stand out: Can we achieve neural control of extra robotic limbs concurrently with natural movement, and can the system work without the user’s exclusive concentration? If the answer to either of these questions is no, we won’t have a practical technology, but we’ll still have an interesting new tool for research into the neuroscience of motor control. If the answer to both questions is yes, we may be ready to enter a new era of human augmentation. For now, our (biological) fingers are crossed.
Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon.
“Armageddon is big and noisy and stupid and shameless, and it’s going to be huge at the box office,” wrote Jay Carr of the Boston Globe.
Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live.
DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kilograms, hitting at 22,000 km/h, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs.
“Maybe once a century or so, there’ll be an asteroid sizeable enough that we’d like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of planetary defense officer at NASA.
“If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.”
So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone.
The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—near-Earth objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium.
The DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIACube cubesat will fly in formation to take images of the impact.Johns Hopkins APL/NASA
But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test.
NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’s speed by perhaps a few centimeters per second.
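The size of that nudge follows from conservation of momentum. Here is a back-of-envelope sketch; the asteroid mass and the momentum-enhancement factor are round-number assumptions for illustration, not mission measurements.

```python
def impact_delta_v(m_impactor, v_impact, m_target, beta=1.0):
    """Conservation of momentum: delta_v = beta * m * v / M.
    beta exceeds 1 when crater ejecta thrown back off the asteroid
    carry away extra momentum, amplifying the push.
    """
    return beta * m_impactor * v_impact / m_target

# Illustrative inputs: DART's ~600 kg hitting at 22,000 km/h, and an
# ASSUMED mass of 5 billion kg for Dimorphos (not a measured value).
dv = impact_delta_v(600.0, 22000 / 3.6, 5.0e9)  # under 1 mm/s for beta = 1
```

Even a change this small, applied years before a predicted impact, accumulates: at roughly a millimeter per second, three years of drift shifts an asteroid's position by tens of thousands of kilometers — the logic behind the "hair off the orbital velocity" quote above.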
The DART spacecraft, a 1-meter cube with two long solar panels, is elegantly simple, equipped with a telescope called DRACO, hydrazine maneuvering thrusters, a xenon-fueled ion engine and a navigation system called SMART Nav. It was launched by a SpaceX rocket in November. About 4 hours and 90,000 km before the hoped-for impact, SMART Nav will take over control of the spacecraft, using optical images from the telescope. Didymos, the larger object, should be a point of light by then; Dimorphos, the intended target, will probably not appear as more than one pixel until about 50 minutes before impact. DART will send one image per second back to Earth, but the spacecraft is autonomous; signals from the ground, 38 light-seconds away, would be useless for steering as the ship races in.
The DART spacecraft separated from its SpaceX Falcon 9 launch vehicle, 55 minutes after liftoff from Vandenberg Space Force Base, in California, 24 November 2021. In this image from the rocket, the spacecraft had not yet unfurled its solar panels.NASA
What’s more, nobody knows the shape or consistency of little Dimorphos. Is it a solid boulder or a loose cluster of rubble? Is it smooth or craggy, round or elongated? “We’re trying to hit the center,” says Evan Smith, the deputy mission systems engineer at the Johns Hopkins Applied Physics Laboratory, which is running DART. “We don’t want to overcorrect for some mountain or crater on one side that’s throwing an odd shadow or something.”
So on final approach, DART will cover 800 km without any steering. Thruster firings could blur the last images of Dimorphos’s surface, which scientists want to study. Impact should be imaged from about 50 km away by an Italian-made minisatellite, called LICIACube, which DART released two weeks ago.
“In the minutes following impact, I know everybody is going be high fiving on the engineering side,” said Tom Statler, DART’s program scientist at NASA, “but I’m going be imagining all the cool stuff that is actually going on on the asteroid, with a crater being dug and ejecta being blasted off.”
There is, of course, a possibility that DART will miss, in which case there should be enough fuel on board to allow engineers to go after a backup target. But an advantage of the Didymos-Dimorphos pair is that it should help in calculating how much effect the impact had. Telescopes on Earth (plus the Hubble and Webb space telescopes) may struggle to measure infinitesimal changes in the orbit of Dimorphos around the sun; it should be easier to see how much its orbit around Didymos is affected. The simplest measurement may be of the changing brightness of the double asteroid, as Dimorphos moves in front of or behind its partner, perhaps more quickly or slowly than it did before impact.
“We are moving an asteroid,” said Statler. “We are changing the motion of a natural celestial body in space. Humanity’s never done that before.”
Update 5 Sept.: For now, NASA’s giant Artemis I remains on the ground after two launch attempts scrubbed by a hydrogen leak and a balky engine sensor. Mission managers say Artemis will fly when everything's ready—but haven't yet specified whether that might be in late September or in mid-October.
“When you look at the rocket, it looks almost retro,” said Bill Nelson, the administrator of NASA. “Looks like we’re looking back toward the Saturn V. But it’s a totally different, new, highly sophisticated—more sophisticated—rocket, and spacecraft.”
Artemis, powered by the Space Launch System rocket, is America’s first attempt to send astronauts to the moon since Apollo 17 in 1972, and technology has taken giant leaps since then. On Artemis I, the first test flight, mission managers say they are taking the SLS, with its uncrewed Orion spacecraft up top, and “stressing it beyond what it is designed for”—the better to ensure safe flights when astronauts make their first landings, currently targeted to begin with Artemis III in 2025.
But Nelson is right: The rocket is retro in many ways, borrowing heavily from the space shuttles America flew for 30 years, and from the Apollo-Saturn V.
Much of Artemis’s hardware is refurbished: Its four main engines, and parts of its two strap-on boosters, all flew before on shuttle missions. The rocket’s apricot color comes from spray-on insulation much like the foam on the shuttle’s external tank. And the large maneuvering engine in Orion’s service module is actually 40 years old—used on 19 space shuttle flights between 1984 and 1992.
“I have a name for missions that use too much new technology—failures.” —John Casani, NASA
Perhaps more important, the project inherits basic engineering from half a century of spaceflight. Just look at Orion’s crew capsule—a truncated cone, somewhat larger than the Apollo Command Module but conceptually very similar.
Old, of course, does not mean bad. NASA says there is no need to reinvent things engineers got right the first time.
“There are certain fundamental aspects of deep-space exploration that are really independent of money,” says Jim Geffre, Orion vehicle-integration manager at the Johnson Space Center in Houston. “The laws of physics haven’t changed since the 1960s. And capsule shapes happen to be really good for coming back into the atmosphere at Mach 32.”
Roger Launius, who served as NASA’s chief historian from 1990 to 2002 and as a curator at the Smithsonian Institution from then until 2017, tells of a conversation he had with John Casani, a veteran NASA engineer who managed the Voyager, Galileo, and Cassini probes to the outer planets.
“I have a name for missions that use too much new technology,” he recalls Casani saying. “Failures.”
The Artemis I flight is slated for about six weeks. (Apollo 11 lasted eight days.) The ship roughly follows Apollo’s path to the moon’s vicinity, but then puts itself in what NASA calls a distant retrograde orbit. It swoops within 110 kilometers of the lunar surface for a gravity assist, then heads 64,000 km out—taking more than a month but using less fuel than it would in closer orbits. Finally, it comes home, reentering the Earth’s atmosphere at 11 km per second, slowing itself with a heatshield and parachutes, and splashing down in the Pacific not far from San Diego.
If all four, quadruply redundant flight computer modules fail, there is a fifth, entirely separate computer onboard, running different code to get the spacecraft home.
“That extra time in space,” says Geffre, “allows us to operate the systems, give more time in deep space, and all those things that stress it, like radiation and micrometeoroids, thermal environments.”
There are, of course, newer technologies on board. Orion is controlled by two vehicle-management computers, each composed of two flight computer modules (FCMs) to handle guidance, navigation, propulsion, communications, and other systems. The flight control system, Geffre points out, is quad-redundant; if at any point one of the four FCMs disagrees with the others, it will take itself offline and, in a 22-second process, reset itself to make sure its outputs are consistent with the others’. If all four FCMs fail, there is a fifth, entirely separate computer running different code to get the spacecraft home.
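The fault-handling behavior described for the FCMs — disagree with the majority, take yourself offline, reset — can be modeled as a simple voting round. This is a toy sketch of the concept, not Orion's actual flight software; the function and its interface are hypothetical.

```python
from collections import Counter

def vote_and_isolate(outputs, online):
    """One voting round over redundant flight-computer outputs.

    outputs: dict mapping computer id -> its computed value
    online:  set of ids currently trusted
    Returns (majority_value, ids_to_reset): any trusted computer whose
    output disagrees with the majority is flagged to take itself
    offline and reset, mirroring the self-isolation described above.
    """
    votes = Counter(outputs[i] for i in online)
    majority_value, _ = votes.most_common(1)[0]
    to_reset = {i for i in online if outputs[i] != majority_value}
    return majority_value, to_reset
```

With four voters, any single fault is outvoted 3 to 1; the separate fifth computer running different code guards against the remaining failure mode a voter cannot catch — a common bug shared by all four.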
Guidance and navigation, too, have advanced since the sextant used on Apollo. Orion uses a star tracker to determine its attitude, imaging stars and comparing them to an onboard database. And an optical navigation camera shoots Earth and the moon so that guidance software can determine their distance and position and keep the spacecraft on course. NASA says it’s there as backup, able to get Orion to a safe splashdown even if all communication with Earth has been lost.
But even those systems aren’t entirely new. Geffre points out that the guidance system’s architecture is derived from the Boeing 787. Computing power in deep space is limited by cosmic radiation, which can corrupt the output of microprocessors beyond the protection of Earth’s atmosphere and magnetic field.
Beyond that is the inevitable issue of cost. Artemis is a giant project, years behind schedule, started long before NASA began to buy other launches from companies like SpaceX and Rocket Lab. NASA’s inspector general, Paul Martin, testified to Congress in March that the first four Artemis missions would cost US $4.1 billion each—“a price tag that strikes us as unsustainable.”
Launius, for one, rejects the argument that government is inherently wasteful. “Yes, NASA’s had problems in managing programs in the past. Who hasn’t?” he says. He points out that Blue Origin and SpaceX have had plenty of setbacks of their own—they’re just not obliged to be public about them. “I could go on and on. It’s not a government thing per se and it’s not a NASA thing per se.”
So why return to the moon with—please forgive the pun—such a retro rocket? Partly, say those who watch Artemis closely, because it’s become too big to fail, with so much American money and brainpower invested in it. Partly because it turns NASA’s astronauts outward again, exploring instead of maintaining a space station. Partly because new perspectives could come of it. And partly because China and Russia have ambitions in space that threaten America’s.
“Apollo was a demonstration of technological verisimilitude—to the whole world,” says Launius. “And the whole world knew then, as they know today, that the future belongs to the civilization that can master science and technology.”
Update 7 Sept.: Artemis I has been on launchpad 39B, not 39A as previously reported, at Kennedy Space Center.
“Build something that will absolutely, positively work.” This was the mandate from NASA for designing and building the James Webb Space Telescope—at 6.5 meters wide the largest space telescope in history. Last December, JWST launched famously and successfully to its observing station out beyond the moon. And now according to NASA, as soon as next week, the JWST will at long last begin releasing scientific images and data.
Mark Kahan, on JWST’s product integrity team, recalls NASA’s engineering challenge as a call to arms for a worldwide team of thousands that set out to create one of the most ambitious scientific instruments in human history. Kahan—chief electro-optical systems engineer at Mountain View, Calif.–based Synopsys—and many others in JWST’s “pit crew” (as he calls the team) drew hard lessons from three decades ago, having helped repair another world-class space telescope with a debilitating case of flawed optics. Of course the Hubble Space Telescope is in low Earth orbit, and so a special space-shuttle mission to install corrective optics (
as happened in 1993) was entirely possible.
Not so with the JWST.
The meticulous care NASA demanded of JWST’s designers is all the more a necessity because Webb is well out of reach of repair crews. Its mission is to study the infrared universe, and that requires shielding the telescope and its sensors from both the heat of sunlight and the infrared glow of Earth. A good place to do that without getting too far from Earth is an empty patch of interplanetary space 1.5 million kilometers away (well beyond the moon’s orbit) near a spot physicists call the
second Lagrange point, or L2.
The pit crew’s job was “down at the detail level, error checking every critical aspect of the optical design,” says Kahan. Having learned the hard way from Hubble, the crew insisted that every measurement on Webb’s optics be made in at least two different ways that could be checked and cross-checked. Diagnostics were built into the process, Kahan says, so that “you could look at them to see what to kick” to resolve any discrepancies. Their work had to be done on the ground, but their tests had to assess how the telescope would work in deep space at cryogenic temperatures.
Three New Technologies for the Main Mirror
Superficially, Webb follows the design of all large reflecting telescopes. A big mirror collects light from stars, galaxies, nebulae, planets, comets, and other astronomical objects—and then focuses those photons onto a smaller secondary mirror, which sends the light to a third mirror that ultimately directs it to instruments recording images and spectra.
Webb’s 6.5-meter primary mirror is the first segmented mirror to be launched into space. All the optics were made on the ground at room temperature but are deployed in space and operated at 30 to 55 degrees above absolute zero. “We had to develop three new technologies” to make it work, says Lee D. Feinberg of the NASA Goddard Space Flight Center, the optical telescope element manager for Webb for the past 20 years.
The longest wavelengths that Hubble has to contend with are 2.5 micrometers, whereas Webb is built to observe infrared light that stretches to 28 μm in wavelength. Compared with Hubble, whose primary mirror is a circle with an area of 4.5 square meters, “[Webb’s primary mirror] had to be 25 square meters,” says Feinberg. Webb also “needed segmented mirrors that were lightweight, and its mass was a huge consideration,” he adds. No single-piece mirror that could provide the required resolution would have fit on the Ariane 5 rocket that launched JWST. That meant the mirror had to be made in pieces, assembled, folded, secured to withstand the stress of launch, then unfolded and deployed in space to create a surface within tens of nanometers of the shape specified by the designers.
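The mirror figures quoted here can be checked with quick back-of-envelope arithmetic. The sketch below recomputes the two collecting areas and the diffraction-limited resolution (θ ≈ 1.22 λ/D); the dimensions and the 28 μm wavelength come from the article, while treating Webb's segmented aperture as an equivalent filled circle is a simplification:

```python
import math

# Recompute the article's mirror figures (illustrative; Webb's segmented
# aperture is treated as an equivalent filled circle).
hubble_d = 2.4                                   # m, Hubble primary diameter
hubble_area = math.pi * (hubble_d / 2) ** 2      # about 4.5 m^2, as the article says
webb_area = 25.0                                 # m^2, the article's figure for Webb
webb_d_eff = 2 * math.sqrt(webb_area / math.pi)  # about 5.6 m equivalent diameter

# Diffraction-limited angular resolution: theta ~ 1.22 * wavelength / D.
wavelength = 28e-6                               # m, Webb's longest observing band
theta_webb = 1.22 * wavelength / webb_d_eff      # radians

print(f"light-gathering advantage: {webb_area / hubble_area:.1f}x")
print(f"Webb resolution at 28 um: {math.degrees(theta_webb) * 3600:.2f} arcsec")
```

The 5.5× area advantage is what drives Webb's sensitivity; the larger diameter is also what sets its resolution at long wavelengths.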
The James Webb Space Telescope [left] and the Hubble Space Telescope side by side—with Hubble’s 2.4-meter-diameter mirror versus Webb’s array of hexagonal mirrors making a 6.5-meter-diameter light-collecting area. NASA Goddard Space Flight Center
NASA and the
U.S. Air Force, which has its own interests in large lightweight space mirrors for surveillance and focusing laser energy, teamed up to develop the technology. The two agencies narrowed eight submitted proposals down to two approaches for building JWST’s mirrors: one based on a low-expansion glass made of a mixture of silicon and titanium dioxides, similar to that used in Hubble, and the other based on the light but highly toxic metal beryllium. The most crucial issue came down to how well the materials could withstand temperature changes from room temperature on the ground to around 50 K in space. Beryllium won because it fully releases stress after cooling without changing its shape, and it is not vulnerable to the cracking that can occur in glass. The final mirror is a 6.5-meter array of 18 hexagonal beryllium segments, each weighing about 20 kilograms. The weight per unit area of JWST’s mirror is only 10 percent of Hubble’s. A 100-nanometer layer of pure gold makes the surface reflect 98 percent of incident light across JWST’s main observing band of 0.6 to 28.5 μm. “Pure silver has slightly higher reflectivity than pure gold, but gold is more robust,” says Feinberg. A thin layer of amorphous silica protects the metal film from surface damage.
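The "10 percent" areal-density claim can be roughly verified from the numbers in this article plus one outside figure. The sketch below assumes a Hubble primary-mirror mass of about 828 kg, a commonly cited value that does not appear in the article:

```python
import math

# Rough check of the "10 percent areal density" claim. Assumes Hubble's
# primary mirror mass is ~828 kg, a commonly cited figure not given here.
webb_areal = 18 * 20 / 25.0                  # kg/m^2: 18 x ~20 kg segments over 25 m^2
hubble_areal = 828 / (math.pi * 1.2 ** 2)    # kg/m^2 over Hubble's ~4.5 m^2
print(f"Webb {webb_areal:.1f} kg/m^2 vs Hubble {hubble_areal:.0f} kg/m^2")
print(f"ratio: {webb_areal / hubble_areal:.0%}")  # on the order of 10 percent
```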
In addition, a wavefront-sensing control system keeps mirror segment surfaces
aligned to within tens of nanometers. Built on the ground, the system is expected to keep mirror alignment stabilized throughout the telescope’s operational life. A backplane kept at a temperature of 35 K holds all 2.4 tonnes of the telescope and instruments rock-steady to within 32 nm while maintaining them at cryogenic temperatures during observations.
The JWST backplane, the “spine” that supports the entire hexagonal mirror structure and carries more than 2,400 kg of hardware, is readied for assembly to the rest of the telescope. NASA/Chris Gunn
Hubble’s amazing, long-exposure images of distant galaxies are possible through the use of gyroscopes and reaction wheels. The gyroscopes are used to sense unwanted rotations, and reaction wheels are used to counteract them.
But the gyroscopes used on Hubble have had a bad track record and have had to be replaced repeatedly. Only three of Hubble’s six gyros remain operational today, and NASA has devised plans for operating with one or two gyros at reduced capability.
Hubble also includes reaction wheels and magnetic torquers, used to maintain its orientation when needed or to point at different parts of the sky.
Webb uses reaction wheels similarly to turn across the sky, but instead of using mechanical gyros to sense direction,
it uses hemispherical resonator gyroscopes, which have no moving parts. Webb also has a small fine-steering mirror in the optical path, which can tilt over an angle of just 5 arc seconds. Those very fine adjustments of the light path into the instruments keep the telescope on target. “It’s a really wonderful way to go,” says Feinberg, adding that it compensates for small amounts of jitter without having to move the whole 6-tonne observatory.
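This division of labor, coarse repointing by reaction wheels and fine corrections by the small steering mirror, can be illustrated with a toy control loop. Everything below (the function, the gain, the simple proportional law) is invented for illustration and is not Webb's flight software; only the ±5 arcsec mirror authority comes from the article:

```python
ARCSEC = 1 / 206265          # one arc second in radians

# Steering-mirror authority: +/-5 arcsec, per the article.
FSM_RANGE = 5 * ARCSEC

def fsm_correction(jitter_rad, gain=0.8):
    """Proportional tip/tilt command, clipped to the mirror's travel.
    (Hypothetical control law for illustration only.)"""
    cmd = -gain * jitter_rad
    return max(-FSM_RANGE, min(FSM_RANGE, cmd))

# A 0.1-arcsec jitter is absorbed by the tiny mirror alone,
# without slewing the 6-tonne observatory...
small = fsm_correction(0.1 * ARCSEC) / ARCSEC
# ...while a 20-arcsec error saturates the mirror, so the
# reaction wheels must repoint the whole telescope.
large = fsm_correction(20 * ARCSEC) / ARCSEC
print(small, large)
```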
Other optics distribute light from the fine-steering mirror among four instruments, two of which can observe simultaneously. Three instruments have sensors that observe wavelengths of 0.6 to 5 μm, which astronomers call the near-infrared. The fourth, called the Mid-InfraRed Instrument (MIRI), observes what astronomers call the mid-infrared spectrum, from 5 to 28.5 μm. Different instruments are needed because sensors and optics have limited wavelength ranges. (Optical engineers may blanch slightly at astronomers’ definitions of what constitutes the near- and mid-infrared wavelength ranges. These two groups simply have differing conventions for labeling the various regimes of the infrared spectrum.)
Mid-infrared wavelengths are crucial for observing young stars, planetary systems, and the earliest galaxies, but they also pose some of the biggest engineering challenges: everything on Earth, and every planet out to Jupiter, glows in the mid-infrared. So for JWST to observe distant astronomical objects, it must avoid recording extraneous mid-infrared noise from all the various sources inside the solar system. “I have spent my whole career building instruments for wavelengths of 5 μm and longer,” says MIRI instrument scientist Alistair Glasse of the Royal Observatory, in Edinburgh. “We’re always struggling against thermal background.”
Mountaintop telescopes can see the near-infrared, but observing the mid-infrared sky requires telescopes in space. However, the thermal radiation from Earth and its atmosphere can cloud their view, and so can the telescopes themselves unless they are cooled far below room temperature. An ample supply of liquid helium and an orbit far from Earth allowed the
Spitzer Space Telescope’s primary observing mission to last for five years, but once the last of the cryogenic fluid evaporated in 2009, its observations were limited to wavelengths shorter than 5 μm.
Another challenge is the limited transparency of optical materials in the mid-infrared. “We use reflective optics wherever possible,” says Glasse, but they also pose problems, he adds. “Thermal contraction is a big deal,” he says, because the instrument was made at room temperature but is used at 7 K. To keep thermal changes uniform throughout MIRI, they made the whole structure of gold-coated aluminum lest other metals cause warping.
Detectors are another problem. Webb’s near-infrared sensors use mercury cadmium telluride photodetectors with a resolution of 2,048 x 2,048 pixels. This resolution is widely used at wavelengths below 5 μm, but sensing at
MIRI’s longer wavelengths required exotic detectors that are limited to offering only 1,024 x 1,024 pixels.
Glasse says commissioning “has gone incredibly well.” Although some stray light has been detected, he says, “we are fully expecting to meet all our science goals.”
NIRCam Aligns the Whole Telescope
The near-infrared detectors and optical materials used for observing at wavelengths shorter than 5 μm are much more mature than those for the mid-infrared, so the Near-Infrared Camera (NIRCam) does double duty by both recording images and aligning all the optics in the whole telescope. That alignment was the trickiest part of building the instrument, says NIRCam principal investigator
Marcia Rieke of the University of Arizona.
Alignment means getting all the light collected by the primary mirror to the right place in the final image. That’s crucial for Webb, because its 18 separate segments have to overlay their images perfectly, and because all those segments were built on the ground at room temperature but operate at cryogenic temperatures, in zero gravity, in space. When NASA recorded a test image of a single star after Webb first opened its primary mirror, it showed 18 separate bright spots, one from each segment.
When alignment was completed on 11 March, the image from NIRCam showed a single star with six spikes caused by diffraction.
Even when performing instrumental calibration tasks, JWST couldn’t help but showcase its stunning sensitivity to the infrared sky. The central star is what telescope technicians used to align JWST’s mirrors. But notice the distant galaxies and stars that photobombed the image too! NASA/STScI
Building a separate alignment system would have added to both the weight and cost of Webb, Rieke realized, and in the original 1995 plan for the telescope she proposed designing NIRCam so it could both record images and align the telescope optics once it was up in space. “The only real compromise was that it required NIRCam to have exquisite image quality,” says Rieke, wryly. From a scientific point of view, she adds, using the instrument to align the telescope optics “is great because you know you’re going to have good image quality and it’s going to be aligned with you.” Alignment might be just a tiny bit off for other instruments. In the end, it took a team at Lockheed Martin to develop the computational tools to account for all the elements of thermal expansion.
Escalating costs and delays had troubled Webb for years. But for Feinberg, “commissioning has been a magical five months.” It began with the sight of sunlight hitting the mirrors. The segmented mirror deployed smoothly, and after the near-infrared cameras cooled, the mirrors focused one star into 18 spots, then aligned them to put the spots on top of each other. “Everything had to work to get it to [focus] that well,” he says. It’s been an intense time, but for Feinberg, a veteran of the Hubble repair mission, commissioning Webb was “a piece of cake.”
Corrections 26-28 July 2022: The story was updated a) to reflect the fact that the Lagrange point L2 where Webb now orbits is not that of the “Earth-moon system” (as the story had originally reported) but rather the Earth-sun system
and b) to correct misstatements in the original posting about Webb’s hardware for controlling its orientation.
Corrections 12 Aug. 2022: Alistair Glasse’s name was incorrectly spelled in a previous version of this story, as was NIRCam (which we’d spelled as NIRcam); Webb’s tertiary mirror (we’d originally reported only its primary and secondary mirrors) was also called out in this version.
This article appears in the September 2022 print issue as “Inside the Universe Machine.”
There are two major reasons for this: first, EVs are not going to reach the numbers required by 2050 to make their needed contribution to net-zero goals; and second, even if they did, a host of other personal, social, and economic activities would still have to be modified to reach the total net-zero mark.
For instance, research by Alexandre Milovanoff at the University of Toronto and his colleagues (described in depth in a recent Spectrum article) demonstrates that the U.S. must have 90 percent of its vehicles, or some 350 million EVs, on the road by 2050 in order to hit its emission targets. The likelihood of this occurring is infinitesimal. Some estimates indicate that about 40 percent of vehicles on U.S. roads will still be ICE vehicles in 2050, while other estimates put the figure at less than half that.
For the U.S. to hit the 90 percent EV target, sales of all new ICE vehicles across the U.S. must cease by 2038 at the latest, according to research company BloombergNEF (BNEF). Greenpeace, on the other hand, argues that sales of all diesel and petrol vehicles, including hybrids, must end by 2030 to meet such a target. However, achieving either goal would likely require governments to offer hundreds of billions of dollars, if not trillions, in EV subsidies to ICE owners over the next decade, not to mention significant investments in EV charging infrastructure and the electrical grid. ICE-vehicle households would also have to be convinced that they would not be giving up activities by becoming EV-only households.
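BNEF's roughly-2038 cutoff can be sanity-checked with a deliberately crude fleet-turnover model. All the numbers below (a fixed 15-year vehicle life, constant annual sales) are illustrative assumptions, not BNEF's inputs:

```python
# Toy fleet-turnover model: every vehicle lasts exactly LIFETIME years
# and the same number of vehicles is sold every year (both assumptions
# are illustrative simplifications, not BNEF's methodology).
LIFETIME = 15          # assumed average vehicle life, years
LAST_ICE_YEAR = 2037   # last model year of ICE sales, per BNEF's ~2038 cutoff

def ev_fleet_share(year):
    """Fraction of the on-road fleet that is EV in a given year,
    assuming every sale after LAST_ICE_YEAR is an EV."""
    cohorts = range(year - LIFETIME + 1, year + 1)  # model years still on the road
    ev_cohorts = sum(1 for y in cohorts if y > LAST_ICE_YEAR)
    return ev_cohorts / LIFETIME

print(f"EV share of the 2050 fleet: {ev_fleet_share(2050):.0%}")
```

Even this generous model leaves the 2050 fleet slightly short of 90 percent EV, which is why ending ICE sales by 2038 is treated as the latest workable cutoff.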
As a reality check, current estimates for the number of ICE vehicles still on the road worldwide in 2050 range from a low of 1.25 billion to more than 2 billion.
Even assuming that the required EV targets were met in the U.S. and elsewhere, it still will not be sufficient to meet net zero 2050 emission targets. Transportation accounts for only 27 percent of greenhouse gas emissions (GHG) in the U.S.; the sources of the other 73 percent of GHG emissions must be reduced as well. Even in the transportation sector, more than 15 percent of the GHG emissions are created by air and rail travel and shipping. These will also have to be decarbonized.
Moreover, for EVs themselves to become true zero-emission vehicles, everything in their supply chain, from mining to electricity production, must be nearly net-zero emission as well. Today, depending on the EV model and where it charges (and assuming it is a battery electric, not a hybrid, vehicle), it may need to be driven anywhere from 8,400 to 13,500 miles, or, controversially, significantly more, before it generates less GHG emissions than an ICE vehicle. This is because manufacturing an EV creates 30 to 40 percent more emissions than manufacturing an ICE vehicle, mainly owing to battery production.
In states (or countries) with a high proportion of coal-generated electricity, the miles needed to break even climb further. In Poland and China, for example, an EV would need to be driven 78,700 miles to break even. Accounting just for miles driven, however, BEV cars and trucks appear cleaner than ICE equivalents nearly everywhere in the U.S. today. As electricity increasingly comes from renewables, total electric-vehicle GHG emissions will continue downward, but that will take at least a decade or more to happen everywhere across the U.S. (assuming policy roadblocks disappear), and even longer elsewhere.
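The break-even arithmetic behind these mileage figures is straightforward: divide the EV's manufacturing-emissions premium by its per-mile savings. The input numbers below are hypothetical round values chosen to land in the article's range, not data from any specific study:

```python
# Break-even mileage: manufacturing premium divided by per-mile savings.
# All three inputs are hypothetical round numbers, not data from a study.
def break_even_miles(extra_kg_co2, ice_g_per_mile, ev_g_per_mile):
    """Miles before the EV's per-mile savings repay its extra
    manufacturing emissions (mostly from battery production)."""
    return extra_kg_co2 * 1000 / (ice_g_per_mile - ev_g_per_mile)

# Average-ish grid: lands near the top of the article's 8,400-13,500 range.
print(break_even_miles(4000, 400, 100))   # 13333.3... miles
# Coal-heavy grid: the per-mile gap shrinks and break-even stretches out,
# in the spirit of the Poland/China figure.
print(break_even_miles(4000, 400, 350))   # 80000.0 miles
```

The formula makes the sensitivity obvious: break-even distance is inversely proportional to the per-mile emissions gap, so a dirty grid can multiply it several-fold.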
If EVs aren’t enough, what else is needed?
Given that EVs, let alone the rest of the transportation sector, likely won’t hit net zero 2050 targets, what additional actions are being advanced to reduce GHG emissions?
A high priority, says IEA’s Birol, is investment in across-the-board energy-related technology research and development and their placement into practice. According to Birol, “IEA analysis shows that about half the reductions to get to net zero emissions in 2050 will need to come from technologies that are not yet ready for market.”
Many of these new technologies will be aimed at improving the efficient use of fossil fuels, which will not be disappearing anytime soon. The IEA expects that energy efficiency improvement, such as the increased use of variable speed electric motors, will lead to a 40 percent reduction in energy-related GHG emissions over the next twenty years.
But even if these hoped-for technological improvements arrive (and most certainly if they do not), the public and businesses will be expected to make more energy-conscious decisions to close what the United Nations says is the expected 2050 “emissions gap.” Environmental groups foresee the public needing to use electrified mass transit, reduce long-haul flights for business as well as pleasure, increase telework, walk and cycle to work or stores, change their diet to eat more vegetables, or, if driving is absolutely needed, drive only small EVs. Another expectation is that homeowners and businesses will become “fully electrified,” replacing oil, propane, and gas furnaces with heat pumps, swapping out gas-fired stoves, and installing solar power and battery systems.
Dronning Louise’s Bro (Queen Louise’s Bridge) connects inner Copenhagen and Nørrebro and is frequented by many cyclists and pedestrians every day.Frédéric Soltan/Corbis/Getty Images
Underpinning the behavioral changes being urged (or encouraged by legislation) is the notion of rejecting the current car-centric culture and completely rethinking what personal mobility means. For example, researchers at the University of Oxford in the U.K. argue that “focusing solely on electric vehicles is slowing down the race to zero emissions.” Their study found that “emissions from cycling can be more than 30 times lower for each trip than driving a fossil fuel car, and about ten times lower than driving an electric one.” If just one out of five urban residents in Europe permanently switched from driving to cycling, emissions from automobiles would be cut by 8 percent, the study reports.
Even then, Oxford researchers concede, breaking the car’s mental grip on people is not going to be easy, given the generally poor state of public transportation across much of the globe.
Behavioral change is hard
How willing are people to break their car dependency and change other energy-related behaviors to address climate change? The answer is perhaps some, but maybe not too much. A Pew Research Center survey taken in late 2021 of seventeen countries with advanced economies indicated that 80 percent of those surveyed were willing to alter how they live and work to combat climate change.
However, a Kantar Public survey of ten of the same countries taken at about the same time gives a less positive view, with only 51 percent of those polled stating they would alter their lifestyles. In fact, some 74 percent of those polled indicated they were already “proud of what [they are] currently doing” to combat climate change.
What both polls failed to explore is which behaviors, specifically, respondents would be willing to permanently change or give up in their lives to combat climate change.
For instance, how many urban dwellers, if told that they must forever give up their cars and instead walk, cycle, or take public transportation, would willingly agree to do so? And how many of those who agreed would also consent to going vegetarian, teleworking, and forsaking trips abroad for vacation?
It is one thing to answer a poll indicating a willingness to change, and quite another to “walk the talk” especially if there are personal, social or economic inconveniences or costs involved. For instance, recent U.S. survey information shows that while 22 percent of new car buyers expressed interest in a battery electric vehicle (BEV), only 5 percent actually bought one.
The world’s largest bike parking facility, Stationsplein Bicycle Parking near Utrecht Central Station in Utrecht, Netherlands has 12,500 parking places.Abdullah Asiran/Anadolu Agency/Getty Images
However, in countless other urban areas, especially across most of the U.S., even those wishing to forsake owning a car would find it very difficult to do so without a massive influx of investment into all forms of public transport and personal mobility to eliminate the scores of US transit deserts.
As Tony Dutzik of the environmental advocacy group Frontier Group has written, in the U.S. “the price of admission to jobs, education and recreation is owning a car.” That is especially true for poor urbanites: owning a reliable automobile has long been one of the only successful means of getting out of poverty.
Massive investment in new public transportation in the U.S. is unlikely, given its unpopularity with politicians and the public alike. This unpopularity has translated into aging and poorly maintained bus, train, and transit systems that few look forward to using. The American Society of Civil Engineers gives the current state of American public transportation a grade of D- and says today’s $176 billion investment backlog is expected to grow to $250 billion through 2029.
While the $89 billion targeted to public transportation in the recently passed Infrastructure Investment and Jobs Act will help, it also contains more than $351 billion for highways over the next five years. Hundreds of billions in annual investment are needed not only to fix the current public transport system but to build new ones to significantly reduce car dependency in America. Doing so would still take decades to complete.
Yet even if such an investment were made in public transportation, unless its service is competitive with an EV or ICE vehicle in terms of cost, reliability, and convenience, it will not be used. With EVs costing less to operate than ICE vehicles, the competitive hurdle will only increase, despite moves to offer free transit rides. Then there is the social stigma attached to riding public transportation, which needs to be overcome as well.
A few experts proclaim that ride-sharing using autonomous vehicles will separate people from their cars. Some even claim such AV sharing signals both the end of individual car ownership and the end of the need to invest in public transportation. Both outcomes are far from likely.
Other suggestions include redesigning cities to be more compact and more electrified, which would eliminate most of the need for personal vehicles to meet basic transportation needs. Again, doing this at the scale needed would take decades and untold billions of dollars. The San Diego, California, region has decided to spend $160 billion to meet California’s net-zero objectives by creating “a collection of walkable villages serviced by bustling (fee-free) train stations and on-demand shuttles” by 2050. However, there has been public pushback over how to pay for the plan and over its push to decrease personal driving by imposing a mileage tax.
According to University of Michigan public policy expert John Leslie King, the challenge of getting to net zero by 2050 is that each decarbonization proposal being made is only part of the overall solution. He notes, “You must achieve all the goals, or you don’t win. The cost of doing each is daunting, and the total cost goes up as you concatenate them.”
Concatenated costs also include changing multiple personal behaviors. It is unlikely that automakers, having committed more than a trillion dollars so far to EVs and charging infrastructure, are going to support depriving the public of the activities they enjoy today as a price they pay to shift to EVs. A war on EVs will be hard fought.
The number of Massachusetts households that can afford or are willing to buy an EV and/or convert their homes to a heat-pump system in the next eight years, even with a current state median household income of $89,000 and subsidies, is likely significantly smaller than the targets set. So, what happens if by 2030 the numbers are well below target, not only in Massachusetts but in other states, like California, New York, or Illinois, that also have aggressive GHG emission-reduction targets?
Will governments move from encouraging behavioral changes to combat climate change to, in frustration or desperation, mandating them? And if they do, will there be a tipping point that spurs massive social resistance?
For example, dairy farmers in the Netherlands have been protesting plans by the government to force them to cut their nitrogen emissions. This will require dairy farms to reduce their livestock, which will make it difficult or impossible to stay in business. The Dutch government estimates 11,200 farms must close, and another 17,600 to reduce their livestock numbers. The government says farmers who do not comply will have their farms taken away by forced buyouts starting in 2023.
California admits that getting to a zero-carbon transportation system by 2045 means car owners must cut their driving to 25 percent below 1990 levels by 2030 and even more by 2045. If drivers fail to do so, will California impose weekly or monthly driving quotas, or punitive per-mile driving taxes, along with mandating mileage data from vehicles ever more connected to the Internet? The San Diego backlash over a mileage tax may be just the beginning.
“EVs,” notes King, “pull an invisible trailer filled with required major lifestyle changes that the public is not yet aware of.”
When it does, do not expect the public to acquiesce quietly.
In the final article of the series, we explore potential unanticipated consequences of transitioning to EVs at scale.
If electric vertical takeoff and landing aircraft do manage to revolutionize transportation, the date of 5 October 2011 may live on in aviation lore. That was the day a retired mechanical engineer named Marcus Leng flew a home-built eVTOL across his front yard in Warkworth, Ont., Canada, startling his wife and several of his friends.
“So, take off, flew about 6 feet above the ground, pitched the aircraft towards my wife and the two couples that were there, who were behind automobiles for protection, and decided to do a skidding stop in front of them. Nobody had an idea that this was going to be happening,” recalls Leng.
But as he looked to set his craft down, he saw a wing starting to dig into his lawn. “Uh-oh, this is not good,” he thought. “The aircraft is going to spin out of control. But what instead happened was the propulsion systems revved up and down so rapidly that as the aircraft did that skidding turn, that wing corner just dragged along my lawn exactly in the direction I was holding the aircraft, and then came to a stable landing,” says Leng. At that point, he knew that such an aircraft was viable “because to have that sort of an interference in the aircraft and for the control systems to be able to control it was truly remarkable.”
It was the
second time anyone, anywhere had ever flown an eVTOL aircraft.
Today, some 350 organizations in 48 countries are designing, building, or flying eVTOLs, according to the Vertical Flight Society. These companies are fueled by more than US $7 billion and perhaps as much as $10 billion in startup funding. And yet, 11 years after Leng’s flight, no eVTOLs have been delivered to customers or are being produced at commercial scale. None have even been certified by a civil aviation authority in the West, such as the U.S. Federal Aviation Administration or the European Union Aviation Safety Agency.
But 2023 looks to be a pivotal year for eVTOLs. Several well-funded startups are expected to reach important early milestones in the certification process. And the company Leng founded, Opener, could beat all of them by making its first deliveries—which would also be the first for any maker of an eVTOL.
As of late October, the company had built at its facility in Palo Alto, Calif., roughly 70 aircraft—considerably more than are needed for simple testing and evaluation. It had flown more than 30 of them. And late in 2022, the company had begun training a group of operators on a state-of-the-art virtual-reality simulator system.
Opener’s highly unusual, single-seat flier is intended for personal use rather than transporting passengers, which makes it almost unique. Opener intends to have its aircraft classified as an “ultralight,” enabling it to bypass the rigorous certification required for commercial-transport and other aircraft types. The certification issue looms as a major unknown over the entire eVTOL enterprise, at least in the United States, because, as the blog Jetlaw.com
noted last August, “the FAA has no clear timeline or direction on when it will finalize a permanent certification process for eVTOL.”
Opener’s strategy is not without risks, either. For one, there’s no guarantee that the FAA will ultimately agree that Opener’s aircraft, called BlackFly, qualifies as an ultralight. And not everyone is happy with this approach. “My concern is, these companies that are saying they can be ultralights and start flying around in public are putting at risk a $10 billion [eVTOL] industry,” says Mark Moore, founder and chief executive of
Whisper Aero in Crossville, Tenn. “Because if they crash, people won’t know the difference” between the ultralights and the passenger eVTOLs, he adds. “To me, that’s unacceptable.” Previously, Moore led a team at NASA that designed a personal-use eVTOL and then served as engineering director at Uber’s Elevate initiative.
A BlackFly eVTOL took off on 1 October, 2022, at the Pacific Airshow in Huntington Beach, Calif. Irfan Khan/Los Angeles Times/Getty Images
Making eVTOLs personal
Opener’s aircraft is as singular as its business model. It’s a radically different kind of aircraft, and it sprang almost entirely from Leng’s fertile mind.
“As a kid,” he says, “I already envisioned what it would be like to have an aircraft that could seamlessly do a vertical takeoff, fly, and land again without any encumbrances whatsoever.” It was a vision that never left him, from a mechanical-engineering degree at the University of Toronto, management jobs in the aerospace industry, starting a company and making a pile of money by
inventing a new kind of memory foam, and then retiring in 1996 at the age of 36.
The fundamental challenge to designing a vertical-takeoff aircraft is endowing it with both vertical lift and efficient forward cruising. Most eVTOL makers achieve this by physically tilting multiple large rotors from a vertical rotation axis, for takeoff, to a horizontal one, for cruising. But the mechanism for tilting the rotors must be extremely robust, and therefore it inevitably adds substantial complexity and weight. Such tilt-rotors also entail significant compromises and trade-offs in the size of the rotors and their placement relative to the wings.
Opener’s BlackFly ingeniously avoids having to make those trade-offs and compromises. It has two wings, one in front and one behind the pilot. Affixed to each wing are four motors and rotors—and these never change their orientation relative to the wings. Nor do the wings move relative to the fuselage. Instead, the entire aircraft rotates in the air to transition between vertical and horizontal flight.
To control the aircraft,
the pilot moves a joystick, and those motions are instantly translated by redundant flight-control systems into commands that alter the relative thrust among the eight motor-propellers.
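That relative-thrust scheme can be sketched with a toy mixer for eight fixed rotors. The geometry, gains, and sign conventions below are invented for illustration; Opener's actual, redundant control laws are proprietary and certainly far more sophisticated:

```python
# Each rotor is described by (pitch sign, lateral position, spin direction):
# front-wing vs. rear-wing placement drives pitch, lateral position drives
# roll, and spin direction drives yaw torque. Four rotors per wing.
FRONT = [(+1, x, s) for x, s in zip((-1.5, -0.5, 0.5, 1.5), (+1, -1, +1, -1))]
REAR = [(-1, x, s) for x, s in zip((-1.5, -0.5, 0.5, 1.5), (-1, +1, -1, +1))]
ROTORS = FRONT + REAR

def mix(throttle, pitch, roll, yaw):
    """Map stick inputs to eight individual thrust commands (0 = off).
    The 0.2/0.1/0.05 gains are arbitrary illustrative values."""
    return [max(0.0, throttle + 0.2 * pitch * p + 0.1 * roll * x + 0.05 * yaw * s)
            for p, x, s in ROTORS]

# A pure pitch command: the four front rotors throttle down and the four
# rear rotors throttle up, with no tilting mechanisms or control surfaces.
cmds = mix(throttle=0.5, pitch=-1.0, roll=0.0, yaw=0.0)
print(cmds[:4])   # front rotors, reduced thrust
print(cmds[4:])   # rear rotors, increased thrust
```

The point of the sketch is the architecture: every stick motion becomes a small differential adjustment across all eight motors, which is also what let Leng's prototype ride out the wing-drag incident on his lawn.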
Visually, it’s an astounding aircraft, like something from a 1930s pulp sci-fi magazine. It’s also a triumph of engineering.
Leng says the journey started for him in 2008, when “I just serendipitously stumbled upon the fact that all the key technologies for making electric VTOL human flight practical were coming to a nexus.”
The journey that made Leng’s dream a reality kicked into high gear in 2014 when a chance meeting with investor Sebastian Thrun at an aviation conference led to Google cofounder
Larry Page investing in Leng’s project.
Designing an eVTOL from first principles
Leng started in his basement in 2010, spending his own money on a mélange of home-built and commercially available components. The motors were commercial units that Leng modified himself, the motor controllers were German and off the shelf, the inertial-measurement unit was open source and based on an Arduino microcontroller. The batteries were modified model-aircraft lithium-polymer types.
“The main objective behind this was proof of concept,” he says. “I had to prove it to myself, because up until that point, they were just equations on a piece of paper. I had to get to the point where I knew that this could be practical.”
After his front-yard flight in 2011, there followed several years of refining and rebuilding all of the major components until they achieved the specifications Leng wanted. “Everything on BlackFly is from first principles,” he declares.
The motors started out generating 160 newtons (36 pounds) of static thrust. It was way too low. “I actually tried to purchase motors and motor controllers from companies that manufactured those, and I specifically asked them to customize those motors for me, by suggesting a number of changes,” he says. “I was told that, no, those changes won’t work.”
So he started designing his own brushless AC motors. “I did not want to design motors,” says Leng. “In the end, I was stunned at how much improvement we could make by just applying first principles to this motor design.”
Eleven years after Leng’s flight, no eVTOLs have been delivered to customers or are being produced at commercial scale.
To increase the power density, he had to address the tendency of a motor in an eVTOL to overheat at high thrust, especially during hover, when cooling airflow over the motor is minimal. He began by designing a system to force air through the motor. Then he began working on the rotor of the motor (not to be confused with the rotor wings that lift and propel the aircraft). This is the spinning part of a motor, which is typically a single piece of electrical steel. It’s an iron alloy with very high magnetic permeability.
By layering the steel of the rotor, Leng was able to greatly reduce its heat generation, because the thinner layers of steel limited the eddy currents in the steel that create heat. Less heat meant he could use higher-strength neodymium magnets, which would otherwise become demagnetized. Finally, he rearranged those magnets into a configuration called a Halbach array. In the end Leng’s motors were able to produce 609 newtons (137 lbs.) of thrust.
Overall, the 2-kilogram motors are capable of sustaining 20 kilowatts, for a power density of 10 kilowatts per kilogram, Leng says. It’s an extraordinary figure. One of the few motor manufacturers claiming a density in that range is H3X Technologies, which says its HPDM-250 clocks in at 12 kW/kg.
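The figures quoted above are easy to sanity-check with back-of-envelope arithmetic (only the numbers stated in the article are used here):

```python
# Power density of Leng's motors, from the article's figures.
mass_kg, power_kw = 2.0, 20.0
print(power_kw / mass_kg)       # 10.0 kW/kg, the density Leng claims

# Thrust improvement from the motor redesign: 160 N -> 609 N.
thrust_before, thrust_after = 160, 609
print(round(thrust_after / thrust_before, 1))   # ~3.8x more static thrust
```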
Software engineer Bodhi Connolly took a BlackFly eVTOL aircraft for a twilight spin on 29 July 2022, at the EAA AirVenture show in Oshkosh, Wis.
Advanced air mobility for everybody
The brain of the BlackFly consists of three independent flight controllers, which calculate the aircraft’s orientation and position, based on readings from the inertial-measurement units, GPS receivers, and magnetometers. They also use pitot tubes to measure airspeed. The flight controllers continually cross-check their outputs to make sure they agree. They also feed instructions, based on the operator’s movement of the joystick, to the eight motor controllers (one for each motor).
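The article says the three flight controllers continually cross-check their outputs, but not how. A minimal, purely hypothetical median-select voter (the tolerance and units are invented for illustration) might look like this:

```python
def cross_check(readings, tolerance=0.05):
    """Pick a trusted value from three redundant controller estimates:
    take the median, and flag any channel that deviates from it by
    more than `tolerance` (threshold and units are made up here)."""
    median = sorted(readings)[1]
    faults = [i for i, r in enumerate(readings) if abs(r - median) > tolerance]
    return median, faults

# Three healthy channels agree: nothing is flagged.
value, faults = cross_check([10.02, 10.00, 10.01])
assert faults == []

# One channel drifts badly: the median stays sane, channel 2 is flagged.
value, faults = cross_check([10.02, 10.00, 13.70])
assert value == 10.02 and faults == [2]
```

Real fly-by-wire voters are far more involved, but the core idea is the same: no single channel's output is trusted on its own.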
Equipped with these sophisticated flight controllers, the fly-by-wire BlackFly resembles hobbyist drones, which rely on processors and clever algorithms to spare the operator the tricky manipulation of sticks, levers, and pedals required to fly a traditional fixed- or rotary-wing aircraft.
That sophisticated, real-time control will allow a far larger number of people to consider purchasing a BlackFly when it becomes available. In late November, Opener had not disclosed a likely purchase price, but in the past the company had suggested that BlackFly would cost as much as a luxury SUV. So who might buy it? CEO Ken Karklin points to several distinct groups of potential buyers who have little in common other than wealth.
There are early tech adopters and also people who are already aviators and are “passionate about the future of electric flight, who love the idea of being able to have their own personal vertical-takeoff-and-landing, low-maintenance, clean aircraft that they can fly in rural and uncongested areas,” Karklin says. “One of them is a business owner. He has a plant that’s a 22-mile drive but would only be a 14-mile flight, and he wants to install charging infrastructure on either end and wants to use it to commute every day. We love that.”
Others are less certain about how, or even whether, this market segment will establish itself. “When it comes to personal-use eVTOLs, we are really struggling to see the business case,” says Sergio Cecutta, founder and partner at SMG Consulting, where he studies eVTOLs among other high-tech transportation topics. “I’m not saying they won’t sell. It’s how many will they sell?” He notes that Opener is not the only eVTOL maker pursuing a path to success through the ultralight or some other specialized FAA category. As of early November, the list included Alauda Aeronautics, Air, Alef, Bellwether Industries, Icon Aircraft, Jetson, Lift Aircraft, and Ryse Aero Technologies.
What makes Opener special? Both Karklin and Leng emphasize the value of all that surrounds the BlackFly aircraft. For example, there are virtual-reality-based simulators that they say enable them to fully train an operator in 10 to 15 hours. The aircraft themselves are heavily instrumented: “Every flight, literally, there’s over 1,000 parameters that are recorded, some of them at 1,000 hertz, some 100 Hz, 10 Hz, and 1 Hz,” says Leng. “All that information is stored on the aircraft and downloaded to our database at the end of the flight. When we go and make a software change, we can do what’s called regression testing by running that software using all the data from our previous flights. And we can compare the outputs against what the outputs were during any specific flight and can automatically confirm that the changes that we’ve made are without any issues. And we can also compare, to see if they make an improvement.”
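The replay-based regression testing Leng describes can be sketched in miniature. Everything here is invented for illustration—the toy "control law," the logged values, and the function names are not Opener's:

```python
def replay_regression(new_fn, logged):
    """Replay logged (input, recorded_output) pairs through a new
    software build and report the worst deviation from what actually
    happened in flight."""
    return max(abs(new_fn(x) - y) for x, y in logged)

# Toy stand-in for a control law: joystick deflection -> thrust command.
old_law = lambda x: 0.5 * x + 0.1
logged = [(x, old_law(x)) for x in (0.0, 0.25, 0.5, 1.0)]

assert replay_regression(old_law, logged) == 0.0   # unchanged build: clean
new_law = lambda x: 0.5 * x + 0.12                 # a deliberate change
assert replay_regression(new_law, logged) > 0.0    # replay surfaces it
```

The appeal of the approach is that every recorded flight becomes a test case for free: a software change either reproduces past behavior or the deviation is quantified automatically.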
Ed Lu, a former NASA astronaut and executive at Google, sits on Opener’s safety-review board. He says what impressed him most when he first met the BlackFly team was “the fact that they had based their entire development around testing. They had a wealth of flight data from flying this vehicle in a drone mode, an unmanned mode.” Having all that data was key. “They could make their decisions based not on analysis, but after real-world operations,” Lu says, adding that he is particularly impressed by Opener’s ability to manage all the flight data. “It allows them to keep track of every aircraft, what sensors are in which aircraft, which versions of code, all the way down to the flights, to what happened in each flight, to videos of what’s happening.” Lu thinks this will be a huge advantage once the aircraft is released into the “real” world.
Karklin declines to comment on whether an ultralight approval, which is governed by what the FAA designates “Part 103,” might be an opening move toward an FAA type certification in the future. “This is step one for us, and we are going to be very, very focused on personal air vehicles for recreational and fun purposes for the foreseeable future,” he says. “But we’ve also got a working technology stack here and an aircraft architecture that has considerable utility beyond the realm of Part-103 [ultralight] aircraft, both for crewed and uncrewed applications.” Asked what his immediate goals are, Karklin responds without hesitating. “We will be the first eVTOL company, we believe, in serial production, with a small but steadily growing revenue and order book, and with a growing installed base of cloud-connected aircraft that with every flight push all the telemetry, all the flight behavior, all the component behavior, all the operator-behavior data representing all of this up to the cloud, to be ingested by our back office, and processed. And that provides us a lot of opportunity.”
This article appears in the January 2023 print issue as “Finally, an eVTOL You Can Buy Soonish.”
The James Webb Space Telescope, in just a few months of operation, has begun to change our view of the universe. Its images—more detailed than what was possible before—show space aglow with galaxies, some of them formed very soon after the big bang.
Scott Acton grew up in Wyoming and spent more than 20 years on the Webb team. IEEE Spectrum spoke with Acton after his team had finished aligning the telescope’s optics in space. This transcript has been edited for clarity and brevity.
Tell your story. What got you started?
Scott Acton: When I was seven years old, my dad brought home a new television. And he gave me the old television to take apart. I was just enthralled by what I saw inside this television. And from that moment on I was defined by electronics. You look inside an old television and there are mechanisms, there are smells and colors and sights, and for a seven-year-old kid, it was just the most amazing thing I’d ever seen.
Fast-forward 25 years and I’m working in the field of adaptive optics. And eventually that led to wavefront sensing and controls, which led to the Webb telescope.
Called the Cosmic Cliffs, Webb’s seemingly three-dimensional picture looks like craggy mountains on a moonlit evening. In reality, it is the edge of the giant, gaseous cavity within NGC 3324, and the tallest “peaks” in this image are about 7 light-years high. NASA/ESA/CSA/STScI
Talk about your work getting the telescope ready for flight. You worked on it for more than 20 years.
Acton: Well, we had to invent all of the wavefront sensing and controls. None of that technology really existed in 2001, so we started from the ground up with concepts and simple experiments. Then more complicated, very complicated experiments and eventually something known as TRL 6 technology—Technology Readiness Level 6—which demonstrated that we could do this in a flightlike environment. And then it was a question of taking this technology, algorithms, understanding it and implementing it into very robust procedures, documentation, and software, so that it could then be applied on the flight telescope.
What was it like finally to launch?
Acton: Well, I’ve got to say, there was a lot of nervousness, at least on my part. I was thinking we had a 70 percent chance of mission success, or something like that. It’s like sending your kid off to college—this instrument that we’d been looking at and thinking about.
The Ariane 5 vehicle is so reliable. I didn’t think there was going to be any problem with it, but deployment starts, basically, minutes after launch. So, for me, the place to be was at a computer console [at the Space Telescope Science Institute in Baltimore].
And then there were a lot of things that had to work.
Acton: Yes, right. But there are some things that that are interesting. They have these things called nonexplosive actuators [used to secure the spacecraft during launch]. There are about 130 of them. And you actually can’t test them. You build them and they get used, basically, once. If you do reuse one, well, it’s now a different actuator because you have to solder it back together. So you can’t qualify the part, but what you can do is qualify the process.
We could have still had a mission if some didn’t fire, but most of them were absolutely necessary for the success of the mission. So just ask yourself, let’s suppose you want to have a 95 percent chance of success. What number raised to the 130th power is equal to 0.95? That number is basically one. These things had to be perfect.
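Acton's arithmetic is easy to check: solving r¹³⁰ = 0.95 for the per-actuator reliability r shows why each part "had to be perfect":

```python
# For a 95% chance that all ~130 actuators fire, each one must have
# reliability r where r**130 = 0.95.
n, target = 130, 0.95
r = target ** (1 / n)
print(round(r, 5))   # ≈ 0.9996 -- "basically one"
```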
I remember walking home one night, talking on the phone to my wife, Heidi, and saying, “If I’m wrong about this I’ve just completely screwed up the telescope.” She said, “Scott, that’s why you’re there.” That was her way of telling me to cowboy up. The responsibility had to come down to somebody and in that moment, it was me.
I think the public perception was that the Webb was in very good shape and the in-flight setup all went very well. Would you say that’s accurate?
Acton: Early on in the mission there were hiccups, but other than that, I’d say things just went beyond our wildest expectations. Part of that comes down to the fact that my team and I had commissioned the telescope 100 times in simulations. And we always made it a little harder. I think that served us well because when we got to the real telescope, it was quite robust. It just worked.
Take us through the process of aligning the telescope.
Acton: The first image we got back from the telescope was 2 February, in the middle of the night. Most people had gone home, but I was there, and a lot of other people were too. We just pointed the telescope at the Large Magellanic Cloud, which has lots and lots of stars in it, and took images on the near-infrared cameras. People were really happy to see these images because they were looking basically to make sure that the science instruments worked.
But some of us were really concerned with that image, because you could see some very significant astigmatism—stronger than we were expecting to see from our simulations. Later we would learn that the telescope’s secondary mirror was off in translation—about 1.5 millimeters along the deployment axis and about a millimeter in the other axis. And the primary mirror segments were clocked a bit from the perfectly aligned state.
Lee Feinberg, the telescope lead at NASA Goddard, texted me and said, “Scott, why can’t you just simulate this to see if you can get some images that bad?” So that morning I ran a simulation and was able to reproduce almost exactly what we were seeing in these images. We realized that we were not going to have any major problems with the wavefront.
Describe the cadence of your work during commissioning. What would a day be like?
Acton: One of the rules we set up very early on was that in terms of wavefront sensing and control, we would always have two people sitting in front of the computers at any given time. Anytime anything significant happened, I always wanted to make sure that I was there, so I got an apartment [near the institute in Baltimore]. From my door to the door of the Mission Operations Center was a 7-minute walk.
In this mosaic image stretching 340 light-years across, Webb’s Near-Infrared Camera (NIRCam) displays the Tarantula Nebula star-forming region in a new light, including tens of thousands of never-before-seen young stars that were previously shrouded in cosmic dust. NASA/ESA/CSA/STScI/Webb ERO Production Team
There were certainly times during the process where it had a very large pucker factor, if you will. We couldn’t point the telescope reliably at the very beginning. And a lot of our software, for the early steps of commissioning, depended on the immutability of telescope pointing. We wanted to have the telescope repeatedly pointed to within a couple of arc-seconds and it was closer to 20 or 30. Because of that, some of the initial moves to align the telescope had to be calculated, if you will, by hand.
But when the result came back, we could see the images. We pointed the telescope at a bright isolated star and then we could see, one at a time, 18 spots appearing in the middle of our main science detector. I remember a colleague saying, “I now believe we’re going to completely align the telescope.” He felt in his mind that if we could get past that step, that everything else was downhill.
You’re trying to piece together the universe. It’s hard to get it right, and very easy to make mistakes. But we did it.
Building the Webb was, of course, a big, complicated project. Do you think there are any particular lessons to be drawn from it that people in the future might find useful?
Acton: Here are a couple of really big ones that apply to wavefront sensing and control. One is that there are multiple institutions involved—Northrop Grumman, Ball Aerospace, the Goddard Space Flight Center, the Space Telescope Science Institute—and the complication of having all these institutional lines. It could have been very, very difficult to navigate. So very early on we decided not to have any lines. We were a completely badgeless team. Anybody could talk to anybody. If someone said, “No, I think this is wrong, you should do it this way,” even if they didn’t necessarily have contractual responsibility, everybody listened.
Another big lesson we learned was about the importance of the interplay between experimentation and simulation. We built a one-sixth scale model, a fully functional optical model of the telescope, and it’s still working. It allowed us, very early on, to know what was going to be difficult. Then we could address those issues in simulation. That understanding, the interplay between experimentation and modeling and simulations, was absolutely essential.
Recognizing of course, that it’s very early, do you yet have a favorite image?
Acton: My favorite image, so far, was one that was taken during the last real wavefront activity that we did as part of commissioning. It was called a thermal slew test. The telescope has a large sunshield, but the sunshield can be at different angles with respect to the sun. So to make sure it was stable, we aimed it at a bright star we used as a guide star, put it in one orientation, and stayed there for five or six days. And then we switched to a different orientation for five or six days. It turned out to be quite stable. But how do you know that the telescope wasn’t rolling about the guide star? To check this, we took a series of test images with the redundant fine-guidance sensor. As you can imagine, when you have a 6-1/2 meter telescope at L2 away from any competing light sources that is cooled to 50 kelvins, yes, it is sensitive. Even just one 20-minute exposure is going to just have unbelievable detail regarding the deep universe. Imagine what happens if you take 100 of those images and average them together. We came up with an image of just some random part of the sky.
Scott Acton’s favorite Webb image: A test image of a random part of the sky, shot with the Webb’s fine-guidance sensor. The points with six-pointed diffraction patterns are stars; all other points are galaxies. NASA/CSA/FGS
I sent this image to James Larkin at UCLA, and he looked at it and estimated that that single image had 15,000 galaxies in it. Every one of those galaxies probably has between 100 [billion] and 200 billion stars.
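The frame-averaging behind that test image—stacking roughly 100 exposures so the background noise falls as 1/√N while faint sources survive—can be illustrated with synthetic frames (toy numbers, not Webb data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames = 100
sky = np.zeros((64, 64))
sky[32, 32] = 0.5          # a faint "galaxy", well below single-frame noise

sigma = 1.0                # per-frame noise level (arbitrary units)
frames = sky + rng.normal(0.0, sigma, size=(n_frames, 64, 64))

stack = frames.mean(axis=0)

# Background noise drops roughly as 1/sqrt(N): the source is invisible
# in any one frame but stands out in the 100-frame average.
ratio = frames[0].std() / stack.std()
print(ratio)               # close to sqrt(100) = 10
print(stack[32, 32])       # recovers the ~0.5 source above the noise
```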
I don’t talk about religion too much when it comes to this, but I must have had in my mind a Biblical reference to the stars singing. I pictured all of those galaxies as singing, as if this was a way for the universe to express joy that after all these years, we could finally see them. It was quite an emotional experience for me and for many people.
You realized that there was so much out there, and you weren’t even really looking for it yet? You were still phasing the telescope?
Acton: That’s right. I guess I’m not sure what I expected. I figured you’d just see dark sky. Well, there is no dark sky. Dark sky is a myth. Galaxies are everywhere.
Finally, we got to our first diffraction-limited image [with the telescope calibrated for science observations for the first time]. And that’s the way the telescope is operating now.
Several days later, about 70 of us got together—astronomers, engineers, and other team members. A member of the team—his name is Anthony Galyer—and I had gone halves several years earlier and purchased a bottle of cognac from 1906, the year that James Webb was born. We toasted James Webb and the telescope that bears his name.
Match ID: 115 Score: 12.86 source: spectrum.ieee.org age: 146 days qualifiers: 9.29 nasa, 3.57 mit
Planning for the return journey is an integral part of the preparations for a crewed Mars mission. Astronauts will require a total mass of about 50 tonnes of rocket propellant for the ascent vehicle that will lift them off the planet’s surface, including approximately 31 tonnes of oxygen. The less popular option is for crewed missions to carry the required oxygen themselves. But scientists are optimistic that it could instead be produced from the carbon dioxide–rich Martian atmosphere itself, using a system called MOXIE.
Between February 2021, when it arrived on Mars aboard the Perseverance rover, and the end of the year, MOXIE had several successful test runs. According to a review of the system by Hoffman and colleagues, published in Science Advances, it has demonstrated its ability to produce oxygen during both night and day, when temperatures can vary by over 100 ºC. The oxygen’s generation rate and purity also meet the requirements for producing rocket propellant and for breathing. The authors assert that a scaled-up version of MOXIE could produce the required oxygen for lift-off as well as for the astronauts to breathe.
Next question: How to power any oxygen-producing factories that NASA can land on Mars? Perhaps via NASA’s Kilopower fission reactors?
MOXIE is a first step toward a much larger and more complex system to support the human exploration of Mars. The researchers estimate a required generation rate of 2 to 3 kilograms per hour, compared with the current MOXIE rate of 6 to 8 grams per hour, to produce enough oxygen for lift-off for a crew arriving 26 months later. “So we’re talking about a system that’s a couple of hundred times bigger than MOXIE,” Hoffman says.
They calculate this rate accounting for eight months to get to Mars, followed by some time to set up the system. “We figure you’d probably have maybe 14 months to make all the oxygen.” Further, he says, the produced oxygen would have to be liquefied to be used as rocket propellant, something the current version of MOXIE doesn’t do.
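Hoffman's figures hang together under quick arithmetic (assuming the ~31 tonnes of oxygen and 14-month window quoted in the article):

```python
# Back-of-envelope check of the scale-up figures.
oxygen_kg = 31_000            # ~31 tonnes of oxygen for the ascent vehicle
window_h = 14 * 30.4 * 24     # ~14 months of production time, in hours

required_rate = oxygen_kg / window_h        # kg of O2 per hour
moxie_rate = 0.007                          # today's MOXIE: ~6-8 g/h

print(round(required_rate, 1))              # 3.0 -- the 2-3 kg/h target
print(round(required_rate / moxie_rate))    # ~430x today's MOXIE
```

That factor of a few hundred is exactly the "couple of hundred times bigger" system Hoffman describes.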
MOXIE also currently faces several design constraints because, says Hoffman, a former astronaut, “our only ride to Mars was inside the Perseverance rover.” This limited the amount of power available to operate the unit, the amount of heat they could produce, the volume and the mass.
“MOXIE does not work nearly as efficiently as a stand-alone system that was specifically designed would,” says Hoffman. Most of the time, it’s turned off. “Every time we want to make oxygen, we have to heat it up to 800 ºC, so most of the energy goes into heating it up and running the compressor, whereas in a well-designed stand-alone system, most of the energy will go into the actual electrolysis, into actually producing the oxygen.”
However, there are still many kinks to iron out for the scaling-up process. To begin with, any oxygen-producing system will need lots of power. Hoffman thinks nuclear power is the most likely option, maybe NASA’s Kilopower fission reactors. The setup and the cabling would certainly be challenging, he says. “You’re going to have to launch all of these nuclear reactors, and of course, they’re not going to be in exactly the same place as the [other] units,” he says. “So, robotically, you’re going to have to connect the electrical cables to bring power to the oxygen-producing unit.”
Then there are the solid oxide electrolysis units, which Hoffman points out are carefully machined systems. Fortunately, the company that makes them, OxEon, has already designed, built, and tested a full-scale unit, a hundred times bigger than the one on MOXIE. “Several of those units would be required to produce oxygen at the quantities that we need,” Hoffman says.
He also adds that at present, there is no redundancy built into MOXIE. If any part fails, the whole system dies. “If you’re counting on a system to produce oxygen for rocket propellant and for breathing, you need very high reliability, which means you’re going to need quite a few redundant units.”
Moreover, the system has to be pretty much autonomous, Hoffman says. “It has to be able to monitor itself, run itself.” For testing purposes, every time MOXIE is powered up, there is plenty of time to plan. A full-scale MOXIE system, though, would have to run continuously, and for that it has to be able to adjust automatically to changes in the Mars atmosphere, which can vary by a factor of two over a year, and between nighttime and daytime temperature differences.
Match ID: 116 Score: 12.86 source: spectrum.ieee.org age: 147 days qualifiers: 9.29 nasa, 3.57 mit
This past weekend, NASA scrubbed the Artemis I uncrewed mission to the moon and back. Reportedly, the space agency will try again to launch the inaugural moon mission featuring the gargantuan Space Launch System (SLS) at the end of this month or sometime in October. Meanwhile, half a world away, China is progressing on its own step-by-step program to put both robotic and, eventually, crewed spacecraft on the lunar surface and keep pace with NASA-led achievements.
Asia’s rapidly growing space power has already made a number of impressive lunar leaps but will need to build on these in the coming years. Ambitious sample-return missions, landings at the lunar south pole, testing the ability to 3D print using materials from regolith, and finally sending astronauts on a short-term visit to our celestial neighbor are in the cards before the end of the decade.
The next step, expected around 2024, is Chang’e-6: an unprecedented attempt to collect rock samples from the far side of the moon.
The mission will build on two recent major space achievements. In 2019, China became the first country to safely land a spacecraft on the far side of the moon, a hemisphere which cannot be seen from Earth—as the moon is tidally locked. The mission was made possible by a relay satellite out beyond the moon at Earth-moon Lagrange point 2, where it can bounce signals between Chang’e-4 and ground stations in China.
Chang’e-5 in 2020 performed the first sampling of lunar material in over four decades. The complex, four-spacecraft mission used an orbiter, lander, ascent vehicle, and return capsule to successfully deliver 1.731 grams of lunar rocks to Earth. The automated rendezvous and docking in lunar orbit of the orbiter and ascent spacecraft was also seen as a test of the technology for getting astronauts off the moon and back to Earth.
Chang’e-6 will again attempt to collect new samples, this time from the South Pole–Aitken basin, a massive and ancient impact crater on the far side of the moon. The science return of such a mission could likewise be huge, as its rocks have the potential to answer some significant questions about the moon’s geological past, says planetary scientist Katherine Joy of the University of Manchester, in England.
“We think that the basin-formation event was so large that the moon’s mantle could have been excavated from tens of kilometers deep,” says Joy. Fragments of this mantle material originating from deep in the moon would help us to understand how the moon differentiated early in its history, the nature of its interior, and how volcanism on the far side of the moon is different from or similar to that on the nearside.
Chang’e-7, also scheduled for 2024, will look at a different set of questions geared toward lunar resources. It will target the lunar south pole, a region where NASA’s Artemis 3 crewed mission is also looking to land.
The mission will involve a flotilla of spacecraft, including a new relay satellite, an orbiter, lander, rover and a small “hopping” spacecraft designed to inspect permanently shadowed craters which are thought to contain water ice which could be used in the future to provide breathable oxygen, rocket fuel, or drinking water to lunar explorers.
Following this, Chang’e-8 is expected to launch around 2027 to test in situ resource utilization and conduct other experiments and technology tests—such as oxygen extraction and 3D printing—related to building a permanent lunar base for both robots and crew in the 2030s, named the International Lunar Research Station (ILRS).
The upcoming Chang’e-6, 7 and 8 missions are expected to launch on China’s largest current rocket, the Long March 5. But, as with NASA and Artemis, China will need its own megarockets to make human lunar exploration and ultimately, perhaps, crewed lunar bases a reality.
In part in reaction to the achievements of SpaceX, the China Aerospace Science and Technology Corporation (CASC), the country’s main space contractor, is developing a new rocket specifically for launching astronauts beyond low Earth orbit.
The “new generation crew launch vehicle” will essentially bundle three Long March 5 core stages together (which will be no mean feat of engineering) while also improving the performance of its kerosene engines. The result will be a roughly 90-meter-tall rocket resembling a Long March version of SpaceX’s Falcon Heavy, capable of sending 27 tonnes of payload into translunar injection.
According to leading Chinese space officials, two launches of the rocket will, by 2030, be able to put a pair of astronauts on the moon for a 6-hour stay. Such a mission also requires developing a lunar lander and a new spacecraft capable of keeping astronauts safe in deep space.
For building infrastructure on the moon, China is looking to the future Long March 9, an SLS-class rocket capable of sending 50 tonnes into translunar injection. The project will require CASC to make breakthroughs in a number of areas, including manufacturing new, wider rocket bodies of up to 10 meters in diameter, mastering massive, higher-thrust rocket engines, and building a new launch complex at Wenchang, Hainan island, to handle the monster.
Once again NASA is leading humanity’s journey to the moon, but China’s steady accumulation of capabilities and long-term ambitions means it will likely not be far behind.
Match ID: 117 Score: 12.86 source: spectrum.ieee.org age: 148 days qualifiers: 9.29 nasa, 3.57 mit
Quantum signals may possess a number of advantages over regular forms of communication, leading scientists to wonder if humanity was not alone in discovering such benefits. Now a new study suggests that, for hypothetical extraterrestrial civilizations, quantum transmissions using X-rays may be possible across interstellar distances.
Quantum communication relies on a quantum phenomenon known as entanglement. Essentially, two or more particles such as photons that get “linked” via entanglement can, in theory, influence each other instantly no matter how far apart they are.
Entanglement is essential to quantum teleportation, in which data can essentially disappear in one place and reappear someplace else. Since this information does not travel across the intervening space, there is no chance it will be lost.
To accomplish quantum teleportation, one would first entangle two photons. Then, one of the photons—the one to be teleported—is kept at one location while the other is beamed to whatever destination is desired.
Next, the quantum state of the photon at the origin—which defines its key characteristics—is analyzed, an act that also destroys that state. Entanglement leads the photon at the destination to become identical to its partner. For all intents and purposes, the photon at the origin point “teleported” to the destination point—no physical matter moved, but the two photons are physically indistinguishable.
And to be clear, quantum teleportation cannot send information faster than the speed of light, because the destination photon must still be transmitted via conventional means.
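The steps above can be checked with a toy statevector simulation of the textbook teleportation protocol (a generic illustration, not code from the study): for every possible measurement outcome Alice might obtain, Bob's corrected qubit ends up exactly in the original state.

```python
import numpy as np

# Single-qubit gates.
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron_all(ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def apply_gate(gate, qubit, state, n=3):
    """Apply a one-qubit gate to `qubit` of an n-qubit statevector."""
    ops = [I2] * n
    ops[qubit] = gate
    return kron_all(ops) @ state

def cnot(control, target, state, n=3):
    """CNOT built as |0><0| (x) I  +  |1><1| (x) X."""
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    ops0 = [I2] * n; ops0[control] = P0
    ops1 = [I2] * n; ops1[control] = P1; ops1[target] = X
    return (kron_all(ops0) + kron_all(ops1)) @ state

# Qubit 0 holds the state to teleport; qubits 1 and 2 share a Bell pair.
psi = np.array([0.6, 0.8])                    # a|0> + b|1>
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
state = np.kron(psi, bell)                    # 8-dim vector, qubits 0,1,2

# Alice entangles her two qubits and rotates into the Bell basis.
state = cnot(0, 1, state)
state = apply_gate(H, 0, state)

# For each measurement outcome (m0, m1) on qubits 0 and 1, apply Bob's
# classically communicated corrections and recover psi exactly.
for m0 in (0, 1):
    for m1 in (0, 1):
        base = 4 * m0 + 2 * m1                # index of |m0 m1 0>
        bob = state[[base, base + 1]]         # qubit 2's branch amplitudes
        bob = bob / np.linalg.norm(bob)
        if m1:
            bob = X @ bob                     # corrections arrive over a
        if m0:
            bob = Z @ bob                     # conventional channel
        assert np.allclose(bob, psi)
```

Note that the loop over outcomes makes the article's final caveat concrete: Bob cannot finish until the two classical measurement bits reach him, which is why teleportation never outruns light.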
One weakness of quantum communication is that entanglement is fragile. Still, researchers have successfully transmitted entangled photons that remained stable or “coherent” enough for quantum teleportation across distances as great as 1,400 kilometers.
“If photons in Earth’s atmosphere don’t decohere to 100 km, then in interstellar space where the medium is much less dense than our atmosphere, photons won’t decohere up to even the size of the galaxy,” says Arjun Berera, a physicist at the University of Edinburgh and a coauthor of the new study.
In the new study, the researchers investigated whether and how well quantum communication might survive interstellar distances. Quantum signals might face disruption from a number of factors, such as the gravitational pull of interstellar bodies, they note.
The scientists discovered the best quantum communication channels for interstellar messages are X-rays. Such frequencies are easier to focus and detect across interstellar distances. (NASA has tested deep-space X-ray communication with its
XCOM experiment.) The researchers also found that the optical and microwave bands could enable communication across large distances as well, albeit less effectively than X-rays.
Although coherence might survive interstellar distances, Berera does note quantum signals might lose fidelity. “This means the quantum state is sustained, but it can have a phase shift, so although the quantum information is preserved in these states, it has been altered by the effect of gravity.” Therefore, it may “take some work at the receiving end to account for these phase shifts and be able to assess the information contained in the original state.”
Why might an interstellar civilization transmit quantum signals as opposed to regular ones? The researchers note that quantum communication may allow
greater data compression and, in some cases, exponentially faster speeds than classical channels. Such a boost in efficiency might prove very useful for civilizations separated by interstellar distances.
“It could be that quantum communication is the main communication mode in an extraterrestrial's world, so they just apply what is at hand to send signals into the cosmos,” Berera says.
The scientists detailed
their findings online 28 June in the journal Physical Review D.
Match ID: 118 Score: 12.86 source: spectrum.ieee.org age: 199 days qualifiers: 9.29 nasa, 3.57 mit
When the James Webb Space Telescope (JWST) reveals its first images on 12 July, they will be the by-product of carefully crafted mirrors and scientific instruments. But all of its data-collecting prowess would be moot without the spacecraft’s communications subsystem.
The Webb’s comms aren’t flashy. Rather, the data and communication systems are designed to be incredibly, unquestionably dependable and reliable. And while some aspects of them are relatively new—it’s the first mission to use
Ka-band frequencies for such high data rates so far from Earth, for example—above all else, JWST’s comms provide the foundation upon which JWST’s scientific endeavors sit.
As previous articles in this series have noted, JWST is parked at
Lagrange point L2. It’s a point of gravitational equilibrium located about 1.5 million kilometers beyond Earth on a straight line between the planet and the sun. It’s an ideal location for JWST to observe the universe without obstruction and with minimal orbital adjustments.
Being so far away from Earth, however, means that data has farther to travel to make it back in one piece. It also means the communications subsystem needs to be reliable, because the prospect of a repair mission being sent to address a problem is, for the near term at least, highly unlikely. Given the cost and time involved, says
Michael Menzel, the mission systems engineer for JWST, “I would not encourage a rendezvous and servicing mission unless something went wildly wrong.”
According to Menzel, who has worked on JWST in some capacity for over 20 years, the plan has always been to use well-understood Ka-band frequencies for the bulky transmissions of scientific data. Specifically, JWST is transmitting data back to Earth on a 25.9-gigahertz channel at up to 28 megabits per second. The Ka-band is a portion of the broader K-band (another portion, the Ku-band, was also considered).
The Lagrange points are equilibrium locations where competing gravitational tugs on an object net out to zero. JWST is one of three craft currently occupying L2 (Shown here at an exaggerated distance from Earth). IEEE Spectrum
Both the data-collection and transmission rates of JWST dwarf those of the older
Hubble Space Telescope. Compared to Hubble, which is still active and generates 1 to 2 gigabytes of data daily, JWST can produce up to 57 GB each day (although that amount is dependent on what observations are scheduled).
Menzel says he first saw the frequency selection proposals for JWST around 2000, when he was working at
Northrop Grumman. He became the mission systems engineer in 2004. “I knew where the risks were in this mission. And I wanted to make sure that we didn’t get any new risks,” he says.
Ka-band frequencies can transmit more data than X-band (7 to 11.2 GHz) or S-band (2 to 4 GHz), common choices for craft in deep space. A high data rate is a necessity for the scientific work JWST will be undertaking. In addition, according to Carl Hansen, a flight systems engineer at the Space Telescope Science Institute (the science operations center for JWST), a comparable X-band antenna would be so large that the spacecraft would have trouble remaining steady for imaging.
Although the 25.9-GHz Ka-band frequency is the telescope’s workhorse communication channel, it also employs two channels in the S-band. One is the 2.09-GHz uplink that ferries future transmission and scientific observation schedules to the telescope at 16 kilobits per second. The other is the 2.27-GHz, 40-kb/s downlink over which the telescope transmits engineering data—including its operational status, systems health, and other information concerning the telescope’s day-to-day activities.
Any scientific data the JWST collects during its lifetime will need to be stored on board, because the spacecraft doesn’t maintain round-the-clock contact with Earth. Data gathered from its scientific instruments, once collected, is stored within the spacecraft’s 68-GB solid-state drive (3 percent is reserved for engineering and telemetry data).
Alex Hunter, also a flight systems engineer at the Space Telescope Science Institute, says that by the end of JWST’s 10-year mission life, they expect to be down to about 60 GB because of deep-space radiation and wear and tear.
The onboard storage is enough to collect data for about 24 hours before it runs out of room. Well before that becomes an issue, JWST will have scheduled opportunities to beam that invaluable data to Earth.
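The storage and downlink figures above can be sanity-checked with a little arithmetic. This sketch assumes decimal gigabytes and the peak rates quoted in the article:

```python
# Back-of-the-envelope figures from the article. "GB" here means 10**9 bytes,
# an assumption; the article does not specify decimal vs. binary units.
DAILY_SCIENCE_GB = 57          # peak science data produced per day
DOWNLINK_MBPS = 28             # Ka-band downlink, megabits per second
SSD_GB = 68                    # onboard solid-state drive
RESERVED = 0.03                # fraction reserved for engineering/telemetry

downlink_hours = DAILY_SCIENCE_GB * 8e9 / (DOWNLINK_MBPS * 1e6) / 3600
fill_hours = SSD_GB * (1 - RESERVED) / DAILY_SCIENCE_GB * 24

print(f"Daily contact time to clear the backlog: {downlink_hours:.1f} h")
print(f"Time to fill the recorder at peak rate:  {fill_hours:.1f} h")
```

At the quoted peak rate, clearing a day's science takes roughly 4.5 hours of Ka-band contact, and the recorder fills in a bit over a day—roughly consistent with the article's "about 24 hours."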
Sandy Kwan, a DSN systems engineer, says that contact windows with spacecraft are scheduled 12 to 20 weeks in advance. JWST had a greater number of scheduled contact windows during its commissioning phase, as instruments were brought on line, checked, and calibrated. Most of that process required real-time communication with Earth.
All of the communications channels use the
Reed-Solomon error-correction protocol—the same error-correction standard used in DVDs and Blu-ray discs as well as QR codes. The lower-data-rate S-band channels use binary phase-shift keying, which encodes data in phase shifts of the signal’s carrier wave. The Ka-band channel, however, uses quadrature phase-shift keying, which can double a channel’s data rate, at the cost of more complicated transmitters and receivers.
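The rate difference between the two modulation schemes is easy to see in code. Below is a minimal, idealized QPSK modem sketch—Gray-coded constellation, nearest-neighbor decisions, mild additive noise. It is our illustration of the technique, not the actual JWST or DSN signal chain, and the Reed-Solomon layer is omitted:

```python
import numpy as np

# Gray-coded QPSK constellation: one complex symbol per pair of bits.
QPSK = {
    (0, 0): (1 + 1j) / np.sqrt(2),
    (0, 1): (-1 + 1j) / np.sqrt(2),
    (1, 1): (-1 - 1j) / np.sqrt(2),
    (1, 0): (1 - 1j) / np.sqrt(2),
}

def modulate(bits):
    """Map each pair of bits to one complex constellation symbol."""
    return np.array([QPSK[(b0, b1)] for b0, b1 in zip(bits[::2], bits[1::2])])

def demodulate(symbols):
    """Nearest-constellation-point decision back to bits."""
    out = []
    for s in symbols:
        pair = min(QPSK, key=lambda p: abs(s - QPSK[p]))
        out.extend(pair)
    return out

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 1000).tolist()
symbols = modulate(bits)                 # 500 symbols carry 1,000 bits
noisy = symbols + 0.05 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
assert demodulate(noisy) == bits         # recovered despite mild channel noise
```

Each QPSK symbol carries 2 bits where a BPSK symbol carries 1, which is why, at the same symbol rate, QPSK doubles the channel's data rate.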
JWST’s communications with Earth incorporate an acknowledgement protocol—only after the JWST gets confirmation that a file has been successfully received will it go ahead and delete its copy of the data to clear up space.
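The acknowledgement rule amounts to simple bookkeeping: data is freed only once the ground confirms receipt. A toy sketch of that idea (our own class and identifier names, not JWST flight software):

```python
class OnboardRecorder:
    """Toy model of delete-only-after-acknowledgement bookkeeping."""

    def __init__(self):
        self.stored = {}                 # observation id -> science data

    def record(self, obs_id, data):
        self.stored[obs_id] = data

    def transmit(self, obs_id):
        # Downlink a copy; the original stays on the recorder.
        return self.stored[obs_id]

    def acknowledge(self, obs_id):
        # Only a confirmed ground receipt frees onboard storage.
        del self.stored[obs_id]

rec = OnboardRecorder()
rec.record("obs-0001", b"...science data...")
ground_copy = rec.transmit("obs-0001")
assert "obs-0001" in rec.stored          # still on board: no ack yet
rec.acknowledge("obs-0001")
assert "obs-0001" not in rec.stored      # freed only after confirmation
```

The design choice trades storage headroom for safety: a dropped downlink pass costs nothing but a retransmission.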
The communications subsystem was assembled along with the rest of the spacecraft bus by
Northrop Grumman, using off-the-shelf components sourced from multiple manufacturers.
JWST has had a long and
often-delayed development, but its communications system has always been a bedrock for the rest of the project. Keeping at least one system dependable means it’s one less thing to worry about. Menzel can remember, for instance, ideas for laser-based optical systems that were invariably rejected. “I can count at least two times where I had been approached by people who wanted to experiment with optical communications,” says Menzel. “Each time they came to me, I sent them away with the old ‘Thank you, but I don’t need it. And I don’t want it.’”
Match ID: 119 Score: 12.86 source: spectrum.ieee.org age: 209 days qualifiers: 9.29 nasa, 3.57 mit
If the James Webb Space Telescope is to work—looking so far out and therefore so far back in time that it can see the first galaxies forming after the big bang—it will have to image objects so faint that they barely stand out from the cold around them. The world will begin finding out how well the observatory works as soon as next week, when JWST is expected to release its first trove of scientific images and spectroscopic data.
So, for argument’s sake, let’s assume all indications so far do in fact point to a successful kickoff of the (hopefully long and storied) scientific data-gathering phase of Webb’s mission. How then did the engineers and designers of this massive telescope ever make it possible to cool the telescope down enough—all at a remove of nearly four times the distance from Earth to the moon—to possibly do its job?
After more than 25 years’ work and technological hurdles beyond counting, the Webb team has launched and stationed its mammoth observatory in solar orbit—and brought its instruments below 40 kelvins (-233 °C), cold enough to see the early universe more than 13.5 billion years ago. Remarkably, most of the cooling has been done passively, by shielding the telescope from the sun and letting physics take care of the rest.
“Webb is not just the product of a group of people. It’s not the product of some smart astronomers—Webb is truly the product of our entire world’s capability,” says Keith Parrish, a leader on the Webb team at NASA’s Goddard Space Flight Center in Maryland. “Taken as a whole, Webb is truly the result of our entire know-how of how to build complex machines.”
Parrish joined the project in 1997, ultimately becoming its commissioning manager through the years of design, assembly, testing, delay and, finally, launch on 25 December 2021. He says almost everything about it—its shape and location, the materials from which it’s made—was dictated by the need to have an observatory that would survive for years at supercold temperatures.
In this photo, the five-layered JWST sunshield is being unfurled and inspected in a clean room. The layers of coated Kapton E never touch, minimizing the transmission of heat from one layer to the next. Alex Evers/Northrop Grumman
The Webb is an infrared observatory for many reasons, not the least of which is that as the universe expands, the wavelength of light from distant objects is lengthened, causing dramatic redshift. Infrared is also good for seeing through cosmic dust and gas, and for imaging cold things such as comets, Kuiper Belt objects, and perhaps planets orbiting other stars.
But infrared radiation is often best measured as heat, which is why it’s important for the Webb to be so cold. If, like the Hubble Telescope, it were in low Earth orbit, and it had no shielding from the sun, most of its targets would be drowned out by the sun and ground, and by heat in the telescope itself.
“If my signal is heat—and infrared is heat—then what I can’t have is other heat sources that are noise in the system,” says Jim Flynn, the sunshield manager at Northrop Grumman, the prime contractor for the Webb.
So the Webb has been sent to circle a spot in space called L2, 1.5 million kilometers away, opposite the sun, one of the locations known as Lagrange points. These "L" points are where the gravity of Earth and the sun conspire to keep a spacecraft in a stable and relatively "fixed" position with respect to Earth as it makes its way around its 365.256-day course around the sun. It’s a good compromise: Earth is distant enough that it doesn’t interfere with observations, but close enough that communication with the spacecraft can be relatively fast. And since the ship isn’t flying from day to night and back on every orbit, its temperature is relatively stable. All it needs is a really, really good sunshade.
“Four [layers of sunshield] would have probably done the job. Five gave us a little bit of an insurance policy. I’d like to say it was way more sophisticated than that, but that’s really not what it was at all.”
—Keith Parrish, NASA Goddard Space Flight Center
“The engineering was pushed above and beyond to meet the scientific goals,” says Alexandra Lockwood, a project scientist at the Space Telescope Science Institute, which operates the Webb. “It is specifically designed the way that it is because they wanted to do intensive infrared science.”
It makes for an ungainly-looking ship in many renderings, with the telescope assembly, intentionally open to space to prevent heat buildup, attached to its silvery sunshield, about 14 meters wide and 21 meters long, with five layers of insulating film to keep the telescope in almost total darkness.
From its sunlit side the sunshield roughly resembles a kite. The elongated shape, engineers found, would be the most efficient way to keep the Webb’s optics out of the sun. They considered a square or octagon, but the final version covers more area without much more mass.
“It’s no larger than it needs to be to meet the science field-of-view requirements, and that unique kite shape is the result,” says Parrish. “Any larger than it is now, it just makes everything more complex.”
The shield’s five layers are made of Kapton E, a plastic film first developed by DuPont in the 1960s and used for spacecraft insulation and printed circuits. The layers are coated in aluminum and silicon. Each is thinner than a human hair. But engineers say they are, together, very effective in blocking the sun’s heat. The first layer reduces its strength by about an order of magnitude (or 90 percent), the second layer removes another order of magnitude, and so on. The layers never touch, and they’re slightly flared as one gets away from the center of the shield, so that heat will escape out the sides.
Why five layers? There was a lot of computer modeling, but it was hard to simulate the shield’s thermal behavior before flight. “Four would have probably done the job. Five gave us a little bit of an insurance policy,” says Parrish. “I’d like to say it was way more sophisticated than that, but that’s really not what it was at all.”
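The "order of magnitude per layer" rule of thumb makes the sunshield's effect easy to estimate. In this sketch the solar-flux value and per-layer factor are round-number assumptions for illustration, not Webb's measured thermal budget:

```python
# Idealized sunshield attenuation: each layer cuts the remaining heat
# by roughly an order of magnitude, per the engineers' description.
SOLAR_FLUX_W_M2 = 1360         # approximate solar constant near 1 AU (assumed)
PER_LAYER_TRANSMISSION = 0.1   # "about an order of magnitude" per layer

flux = SOLAR_FLUX_W_M2
for layer in range(1, 6):
    flux *= PER_LAYER_TRANSMISSION
    print(f"after layer {layer}: {flux:.4g} W/m^2")
# Five layers pass ~10**-5 of the incident flux (tens of mW per m^2);
# four layers (~10**-4) would likely have sufficed, and the fifth is
# the "insurance policy" Parrish describes.
```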
The ability to cool the telescope naturally, first calculated in the 1980s to be possible, was a major advance. It meant the Webb would not have to rely on a heavy, complex cryogenic apparatus, with refrigerants that could leak and shorten the mission. Of its four main scientific instruments, only one, a midinfrared detector called MIRI, needs to be cooled to 6.7 K. It’s chilled by a multistage cryocooler, which pumps cold helium gas through pulse tubes to draw heat away from the instrument’s sensor. It uses the Joule-Thomson effect, reducing the temperature of the helium by making it expand after it’s forced through a 1-millimeter valve. Pressure comes from two pistons—the cryocooler system’s only moving parts—facing opposite directions so their movements will cancel each other out and not disturb observations.
Building the telescope proved immensely complicated; it fell years behind while its budget ballooned toward US $10 billion. The sunshield needed lengthy redesign after testing, when Kapton tore and fasteners came loose.
“We just bit off way more than we could chew,” Parrish says now. “That’s exactly what NASA should be doing. It should be pushing the envelope. The problem is that eventually Webb got too big to fail.”
But it’s finally deployed, sending data, and surprising engineers who expected at least some failures as it began to operate. Keith Parrish, his work done, is moving on to other projects at Goddard.
“I think Webb,” he says, “is just a great product of what it means to be an advanced civilization.”
Update: 26 July 2022: The story was updated to clarify that the gravity at Lagrange point L2 does not "cancel" (as the story had previously stated) but in fact adds to keep an object at L2 orbiting at the precise same orbital period as, in this case, the Earth—i.e. at 365.256 days.
Match ID: 120 Score: 12.14 source: spectrum.ieee.org age: 210 days qualifiers: 9.29 nasa, 2.86 planets
About Half of Sun-Like Stars Could Host Rocky, Potentially Habitable Planets Thu, 29 Oct 2020 07:00 EDT According to new research using data from NASA’s retired planet-hunting mission, the Kepler space telescope, about half the stars similar in temperature to our Sun could have a rocky planet capable of supporting liquid water on its surface. Match ID: 122 Score: 12.14 source: www.nasa.gov age: 826 days qualifiers: 9.29 nasa, 2.86 planets
Gravity Assist: Puffy Planets, Powerful Telescopes, with Knicole Colon Fri, 12 Jun 2020 09:01 EDT NASA astrophysicist Knicole Colon describes her work on the Kepler, Hubble, TESS and Webb missions, and takes us on a tour of some of her favorite planets. Match ID: 123 Score: 12.14 source: www.nasa.gov age: 965 days qualifiers: 9.29 nasa, 2.86 planets
The Big Picture features technology through the lens of photographers.
Every month, IEEE Spectrum selects the most stunning technology images recently captured by photographers around the world. We choose images that reflect an important advance, or a trend, or that are just mesmerizing to look at. We feature all images on our site, and one also appears on our monthly print edition.
Enjoy the latest images, and if you have suggestions, leave a comment below.
The Wurst Use of AI
From the time the ancient Sumerians started making sausage around 4,000 years ago, the process has been the province of artisans dedicated to the craft of preserving meat so it remained safe to eat for as long as possible. Yet even traditional methods can stand to be improved on from time to time. Katharina Koch of the Landfleischerei Koch in Calden, Germany [right], has retained ancient customs such as the clay chambers in which Ahle sausages ripen while also fine-tuning the conditions under which the meats are cured (such as temperature and moisture level) via AI algorithms. The digital modifications she and scientists at the nearby University of Kassel have developed replicate the production methods that have been passed down for generations. So, instead of spending nearly a year manually monitoring the meats’ maturation process, a sausage maker using the new AI methods will be able to set it and forget it.
People with diabetes will usually prick their fingers multiple times a day in order to get readings on the amount of glucose (the type of sugar the body uses for fuel) that is in their bloodstream. But researchers at the University of California, San Diego, have developed a bloodless method for tracking blood sugar and other chemical metabolites in the gastrointestinal tract that can be used to infer the person’s relative state of health. Their solution to the finger-pricking problem: an electronic pill capable of sensing metabolite levels and transmitting data wirelessly every 5 seconds over a span of several hours. So, instead of snapshots of how the body is reacting to stimuli like food, clinicians will get a steady stream of data. The major innovation boasted by the UCSD team is that their pill draws power from a fuel cell that runs on the glucose in the gut, instead of relying on a battery laden with potentially harmful chemicals.
The phrase musical arrangement has long referred to the work of art that results from a composition being adapted for different instruments or voices. But going forward, sound will get in on the act of arranging. Engineers at the Korea Advanced Institute of Science and Technology report that they used sound waves to disperse metallic droplets embedded in a polymer in order to make flexible circuits. This “musical arrangement” yields an archipelago of droplets spaced so that electrical conductivity is maintained even when the polymer is bent or twisted.
Korea Advanced Institute of Science and Technology
The relative proportions of a bee’s body and its wings suggest that, at least in theory, it shouldn’t be able to fly. But where would we be if bees were incapable of flitting from flower to flower, collecting nectar and spreading pollen? Roboticists at ETH Zurich, taking a page from nature, say they too have created a machine whose movement seems to defy the laws of physics. The 1.TK-meter-long gadget, called Cubli, balances on a single point, with a single internal reaction wheel whose spin keeps the unit upright. Ordinarily, a device like this would need one wheel to manage pitch and another to handle roll. But the Zurich team worked out the Cubli’s dimensions so that the one wheel can counterbalance any forces that would topple the machine.
Match ID: 124 Score: 10.71 source: spectrum.ieee.org age: 6 days qualifiers: 10.71 mit
Early in his career, Kevin Mitnick successfully hacked California law. He told me the story when he heard about my new book—a story he partially recounts in his 2012 book, Ghost in the Wires.
The setup is that he has just discovered that the California Youth Authority has a warrant out for his arrest, and he’s trying to figure out if there’s any way out of it.
As soon as I was settled, I looked in the Yellow Pages for the nearest law school, and spent the next few days and evenings there poring over the Welfare and Institutions Code, but without much hope...
Match ID: 125 Score: 10.71 source: www.schneier.com age: 6 days qualifiers: 10.71 mit
This sponsored article is brought to you by COMSOL.
To someone standing near a glacier, it may seem as stable and permanent as anything on Earth can be. However, Earth’s great ice sheets are always moving and evolving. In recent decades, this ceaseless motion has accelerated. In fact, ice in polar regions is proving to be not just mobile, but alarmingly mortal.
Rising air and sea temperatures are speeding up the discharge of glacial ice into the ocean, which contributes to global sea level rise. This ominous progression is happening even faster than anticipated. Existing models of glacier dynamics and ice discharge underestimate the actual rate of ice loss in recent decades. This makes the work of Angelika Humbert, a physicist studying Greenland’s Nioghalvfjerdsbræ outlet glacier, especially important — and urgent.
As the leader of the Modeling Group in the Section of Glaciology at the Alfred Wegener Institute (AWI) Helmholtz Centre for Polar and Marine Research in Bremerhaven, Germany, Humbert works to extract broader lessons from Nioghalvfjerdsbræ’s ongoing decline. Her research combines data from field observations with viscoelastic modeling of ice sheet behavior. Through improved modeling of elastic effects on glacial flow, Humbert and her team seek to better predict ice loss and the resulting impact on global sea levels.
She is acutely aware that time is short. “Nioghalvfjerdsbræ is one of the last three ‘floating tongue’ glaciers in Greenland,” explains Humbert. “Almost all of the other floating tongue formations have already disintegrated.”
One Glacier That Holds 1.1 Meters of Potential Global Sea Level Rise
The North Atlantic island of Greenland is covered with the world’s second largest ice pack after that of Antarctica. (Fig. 1) Greenland’s sparsely populated landscape may seem unspoiled, but climate change is actually tearing away at its icy mantle.
The ongoing discharge of ice into the ocean is a “fundamental process in the ice sheet mass-balance,” according to a 2021 article in Communications Earth & Environment by Humbert and her colleagues. (Ref. 1) The article notes that the entire Northeast Greenland Ice Stream contains enough ice to raise global sea levels by 1.1 meters. While the entire formation is not expected to vanish, Greenland’s overall ice cover has declined dramatically since 1990. This process of decay has not been linear or uniform across the island. Nioghalvfjerdsbræ, for example, is now Greenland’s largest outlet glacier. The nearby Petermann Glacier used to be larger, but has been shrinking even more quickly. (Ref. 2)
Existing Models Underestimate the Rate of Ice Loss
Greenland’s overall loss of ice mass is distinct from “calving”, which is the breaking off of icebergs from glaciers’ floating tongues. While calving does not directly raise sea levels, the calving process can quicken the movement of land-based ice toward the coast. Satellite imagery from the European Space Agency (Fig. 2) has captured a rapid and dramatic calving event in action. Between June 29 and July 24 of 2020, a 125 km2 floating portion of Nioghalvfjerdsbræ calved into many separate icebergs, which then drifted off to melt into the North Atlantic.
Direct observations of ice sheet behavior are valuable, but insufficient for predicting the trajectory of Greenland’s ice loss. Glaciologists have been building and refining ice sheet models for decades, yet, as Humbert says, “There is still a lot of uncertainty around this approach.” Starting in 2014, the team at AWI joined 14 other research groups to compare and refine their forecasts of potential ice loss through 2100. The project also compared projections for past years to ice losses that actually occurred. Ominously, the experts’ predictions were “far below the actually observed losses” since 2015, as stated by Martin Rückamp of AWI. (Ref. 3) He says, “The models for Greenland underestimate the current changes in the ice sheet due to climate change.”
Viscoelastic Modeling to Capture Fast-Acting Forces
Angelika Humbert has personally made numerous trips to Greenland and Antarctica to gather data and research samples, but she recognizes the limitations of the direct approach to glaciology. “Field operations are very costly and time consuming, and there is only so much we can see,” she says. “What we want to learn is hidden inside a system, and much of that system is buried beneath many tons of ice! We need modeling to tell us what behaviors are driving ice loss, and also to show us where to look for those behaviors.”
Since the 1980s, researchers have relied on numerical models to describe and predict how ice sheets evolve. “They found that you could capture the effects of temperature changes with models built around a viscous power law function,” Humbert explains. “If you are modeling stable, long-term behavior, and you get your viscous deformation and sliding right, your model can do a decent job. But if you are trying to capture loads that are changing on a short time scale, then you need a different approach.”
To better understand the Northeast Greenland Ice Stream glacial system and its discharge of ice into the ocean, researchers at the Alfred Wegener Institute have developed an improved viscoelastic model to capture how tides and subglacial topography contribute to glacial flow.
What drives short-term changes in the loads that affect ice sheet behavior? Humbert and the AWI team focus on two sources of these significant but poorly understood forces: oceanic tidal movement under floating ice tongues (such as the one shown in Fig. 2) and the ruggedly uneven landscape of Greenland itself. Both tidal movement and Greenland’s topography help determine how rapidly the island’s ice cover is moving toward the ocean.
To investigate the elastic deformation caused by these factors, Humbert and her team built a viscoelastic model of Nioghalvfjerdsbræ in the COMSOL Multiphysics software. The glacier model’s geometry is based on data from radar surveys. The model solved underlying equations for a viscoelastic Maxwell material across a 2D model domain consisting of a vertical cross section along the blue line shown in Fig. 3. The simulated results were then compared to actual field measurements of glacier flow obtained by four GPS stations, one of which is shown in Fig. 3.
How Cycling Tides Affect Glacier Movement
The tides around Greenland typically raise and lower the coastal water line between 1 and 4 meters per cycle. This action exerts tremendous force on outlet glaciers’ floating tongues, and these forces are transmitted into the land-based parts of the glacier as well. AWI’s viscoelastic model explores how these cyclical changes in stress distribution can affect the glacier’s flow toward the sea.
The charts in Figure 4 present the measured tide-induced stresses acting on Nioghalvfjerdsbræ at three locations, superimposed on stresses predicted by viscous and viscoelastic simulations. Chart a shows how the displacements have largely died away at a station 14 kilometers inland from the grounding line (GL). Chart b shows that cyclical tidal stresses lessen at GPS-hinge, located in a bending zone near the grounding line between land and sea. Chart c shows activity at the location called GPS-shelf, which is mounted on ice floating in the ocean. Accordingly, it shows the most pronounced waveform of cyclical tidal stresses acting on the ice.
“The floating tongue is moving up and down, which produces elastic responses in the land-based portion of the glacier,” says Julia Christmann, a mathematician on the AWI team who plays a key role in constructing their simulation models. “There is also a subglacial hydrological system of liquid water between the inland ice and the ground. This basal water system is poorly known, though we can see evidence of its effects.” For example, chart a shows a spike in stresses below a lake sitting atop the glacier. “Lake water flows down through the ice, where it adds to the subglacial water layer and compounds its lubricating effect,” Christmann says.
The plotted trend lines highlight the greater accuracy of the team’s new viscoelastic simulations, as compared to purely viscous models. As Christmann explains, “The viscous model does not capture the full extent of changes in stress, and it does not show the correct amplitude. (See chart c in Fig. 4.) In the bending zone, we can see a phase shift in these forces due to elastic response.” Christmann continues, “You can only get an accurate model if you account for viscoelastic ‘spring’ action.”
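The viscous-versus-viscoelastic distinction Christmann describes can be reproduced with a single Maxwell element, in which the total strain rate splits into elastic and viscous parts (ε̇ = σ̇/E + σ/η). The sketch below drives such an element with an M2-tide-period strain and reads the amplitude and phase shift off the complex modulus; the material parameters are generic round numbers for ice, not AWI's calibrated values:

```python
import numpy as np

# One-element Maxwell model under a tidal (sinusoidal) strain forcing.
E = 9.0e9                            # elastic modulus, Pa (assumed)
eta = 1.0e14                         # viscosity, Pa*s (assumed)
tau = eta / E                        # relaxation time, ~3 hours here
omega = 2 * np.pi / (12.42 * 3600)   # M2 tidal frequency (12.42 h period)
eps0 = 1e-4                          # strain amplitude (dimensionless)

t = np.linspace(0, 10 * 2 * np.pi / omega, 200_000)
dt = t[1] - t[0]
eps_dot = eps0 * omega * np.cos(omega * t)   # strain rate of eps0*sin(omega*t)

# Maxwell model: eps' = sigma'/E + sigma/eta, i.e.
# sigma' = E * (eps_dot - sigma/eta). Explicit Euler integration:
sigma = np.zeros_like(t)
for i in range(1, len(t)):
    sigma[i] = sigma[i - 1] + dt * E * (eps_dot[i - 1] - sigma[i - 1] / eta)

# Steady-state check via the Maxwell element's complex modulus:
E_star = E * (1j * omega * tau) / (1 + 1j * omega * tau)
print(f"stress amplitude: {abs(E_star) * eps0:.3g} Pa")
print(f"phase shift vs. purely elastic response: {np.degrees(np.angle(E_star)):.1f} deg")
```

A purely viscous model (σ = ηε̇) would put stress a full 90 degrees out of phase with the strain, while a purely elastic one would keep them in phase; the Maxwell element lands in between, which is the amplitude-and-phase mismatch visible when the viscous and viscoelastic curves are compared against GPS data.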
Modeling Elastic Strains from Uneven Landscapes
The crevasses in Greenland’s glaciers reveal the unevenness of the underlying landscape. Crevasses also provide further evidence that glacial ice is not a purely viscous material. “You can watch a glacier over time and see that it creeps, as a viscous material would,” says Humbert. However, a purely viscous material would not form persistent cracks the way that ice sheets do. “From the beginning of glaciology, we have had to accept the reality of these crevasses,” she says. The team’s viscoelastic model provides a novel way to explore how the land beneath Nioghalvfjerdsbræ facilitates the emergence of crevasses and affects glacial sliding.
“When we did our simulations, we were surprised at the amount of elastic strain created by topography,” Christmann explains. “We saw these effects far inland, where they would have nothing to do with tidal changes.”
Figure 6 shows how vertical deformation in the glacier corresponds to the underlying landscape and helps researchers understand how localized elastic vertical motion affects the entire sheet’s horizontal movement. Shaded areas indicate velocity in that part of the glacier compared to its basal velocity. Blue zones are moving vertically at a slower rate than the sections that are directly above the ground, indicating that the ice is being compressed. Pink and purple zones are moving faster than ice at the base, showing that ice is being vertically stretched.
These simulation results suggest that the AWI team’s improved model could provide more accurate forecasts of glacial movements. “This was a ‘wow’ effect for us,” says Humbert. “Just as the up and down of the tides creates elastic strain that affects glacier flow, now we can capture the elastic part of the up and down over bedrock as well.”
Scaling Up as the Clock Runs Down
The improved viscoelastic model of Nioghalvfjerdsbræ is only the latest example of Humbert’s decades-long use of numerical simulation tools for glaciological research. “COMSOL is very well suited to our work,” she says. “It is a fantastic tool for trying out new ideas. The software makes it relatively easy to adjust settings and conduct new simulation experiments without having to write custom code.” Humbert’s university students frequently incorporate simulation into their research. Examples include Julia Christmann’s PhD work on the calving of ice shelves, and another degree project that modeled the evolution of the subglacial channels that carry meltwater from the surface to the ice base.
The AWI team is proud of their investigative work, but they are fully cognizant of just how much information about the world’s ice cover remains unknown — and that time is short. “We cannot afford Maxwell material simulations of all of Greenland,” Humbert concedes. “We could burn years of computational time and still not cover everything. But perhaps we can parameterize the localized elastic response effects of our model, and then implement it at a larger scale,” she says.
This scale defines the challenges faced by 21st-century glaciologists. The size of their research subjects is staggering, and so is the global significance of their work. Even as their knowledge is growing, it is imperative that they find more information, more quickly. Angelika Humbert would welcome input from people in other fields who study viscoelastic materials. “If other COMSOL users are dealing with fractures in Maxwell materials, they probably face some of the same difficulties that we have, even if their models have nothing to do with ice!” she says. “Maybe we can have an exchange and tackle these issues together.”
Perhaps, in this spirit, we who benefit from the work of glaciologists can help shoulder some of the vast and weighty challenges they bear.
Match ID: 127 Score: 10.71 source: spectrum.ieee.org age: 6 days qualifiers: 10.71 mit
ISS Daily Summary Report – 1/25/2023 Wed, 25 Jan 2023 16:00:08 +0000 Payloads: PK-4: The PK-4 HD was packed for return and two new hard drives were inserted. The chamber gas insert valve was switched from Neon to Argon. PK-4 is a scientific collaboration between ESA and the Russian Federal Space Agency (Roscosmos), performing research in the field of Complex Plasmas: low temperature gaseous mixtures composed of … Match ID: 128 Score: 9.29 source: blogs.nasa.gov age: 8 days qualifiers: 9.29 nasa
ISS Daily Summary Report – 1/24/2023 Tue, 24 Jan 2023 16:00:59 +0000 Payloads: PK-4: A crewmember caught clouds of particles inside the PK-4 chamber using the PK-4 software on the Columbus Module Laptop 1 as part of campaign 15 experiment operations. PK-4 is a scientific collaboration between ESA and the Russian Federal Space Agency (Roscosmos), performing research in the field of Complex Plasmas: low temperature gaseous mixtures … Match ID: 129 Score: 9.29 source: blogs.nasa.gov age: 9 days qualifiers: 9.29 nasa
ISS Daily Summary Report – 1/23/2023 Mon, 23 Jan 2023 16:00:22 +0000 Payloads: PK-4: A crewmember caught clouds of particles inside the PK-4 chamber using the PK-4 software on the Columbus Module Laptop 1 as part of campaign 15 experiment operations. PK-4 is a scientific collaboration between ESA and the Russian Federal Space Agency (Roscosmos), performing research in the field of Complex Plasmas: low temperature gaseous mixtures … Match ID: 130 Score: 9.29 source: blogs.nasa.gov age: 10 days qualifiers: 9.29 nasa
ISS Daily Summary Report – 1/20/2023 Fri, 20 Jan 2023 16:00:34 +0000 USOS Extravehicular Activity (EVA) 1A ISS Roll-Out Solar Array (IROSA) Prep EVA: Today, Koichi Wakata (EV1) and Nicole Mann (EV2) performed the 1A ISS IROSA Prep EVA. Hatch opening occurred at 7:11 AM CT. The main goal of this EVA was to tighten the 1B Mod Kit Collar Bolts, install the IROSA 1B Mod Kit, … Match ID: 131 Score: 9.29 source: blogs.nasa.gov age: 13 days qualifiers: 9.29 nasa
NASA to Participate in Aerospace Conference, Discuss New Collaboration Thu, 19 Jan 2023 16:50 EST NASA Administrator Bill Nelson, Deputy Administrator Pam Melroy, Bhavya Lal, associate administrator for Technology, Policy, and Strategy, as well as other agency speakers, will participate in the 2023 American Institute of Aeronautics and Astronautics (AIAA) SciTech Forum from Monday, Jan. 23, to Friday, Jan. 27, in National Harbor, Maryland. Match ID: 132 Score: 9.29 source: www.nasa.gov age: 14 days qualifiers: 9.29 nasa
ISS Daily Summary Report – 1/19/2023 Thu, 19 Jan 2023 16:00:44 +0000 Payloads: Food Physiology: A diet briefing was conducted between the crew and the Principal Investigator team in support of the Food Physiology investigation. The Integrated Impact of Diet on Human Immune Response, the Gut Microbiota, and Nutritional Status During Adaptation to Spaceflight (Food Physiology) experiment is designed to characterize the key effects of an enhanced … Match ID: 133 Score: 9.29 source: blogs.nasa.gov age: 14 days qualifiers: 9.29 nasa
NASA Issues Award for Greener, More Fuel-Efficient Airliner of Future Wed, 18 Jan 2023 09:38 EST NASA announced Wednesday it has issued an award to The Boeing Company for the agency’s Sustainable Flight Demonstrator project, which seeks to inform a potential new generation of green single-aisle airliners. Match ID: 134 Score: 9.29 source: www.nasa.gov age: 15 days qualifiers: 9.29 nasa
Three days before astronauts left on Apollo 8, the first-ever flight around the moon, NASA’s safety chief, Jerome Lederer, gave a speech that was at once reassuring and chilling. Yes, he said, the United States’ moon program was safe and well-planned—but even so, “Apollo 8 has 5,600,000 parts and one and one half million systems, subsystems, and assemblies. Even if all functioned with 99.9 percent reliability, we could expect 5,600 defects.”
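Lederer’s numbers are simple expected-value arithmetic: with N parts each functioning with probability p, the expected defect count is N(1 − p). A quick check (my sketch, not part of the speech) also shows why a near-flawless flight was statistically remarkable:

```python
import math

# Lederer's reliability arithmetic: N parts, each working with probability p.
parts = 5_600_000
reliability = 0.999

# Expected number of defects: N * (1 - p) = 5,600, as quoted in the speech.
expected_defects = parts * (1 - reliability)

# Probability that every single part works: p ** N, effectively zero.
p_all_ok = math.exp(parts * math.log(reliability))

print(round(expected_defects), p_all_ok)
```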
The mission, in December 1968, was nearly flawless—a prelude to the Apollo 11 landing the next summer. But even today, half a century later, engineers wrestle with the sheer complexity of the machines they build to go to space. NASA’s Artemis I, its Space Launch System rocket mandated by Congress in 2010, endured a host of delays before it finally launched in November 2022. And Elon Musk’s SpaceX may be lauded for its engineering acumen, but it struggled for six years before its first successful flight into orbit.
Relativity envisions 3D-printing facilities someday on the Martian surface, fabricating much of what people from Earth would need to live there.
Is there a better way? An upstart company called Relativity Space is about to try one. Its Terran 1 rocket, the company says, has about a tenth as many parts as comparable launch vehicles do, because it is made through 3D printing. Instead of bending metal and milling and welding, engineers program a robot to deposit layers of metal alloy in place.
Relativity’s first rocket, the company says, is ready to go from launch complex 16 at Cape Canaveral, Fla. When it happens, possibly later this month, the company says it will stream the liftoff on YouTube.
Artist’s concept of Relativity’s planned Terran R rocket. The company says it should be able to carry a 20,000-kilogram payload into low Earth orbit. Relativity
“Over 85 percent of the rocket by mass is 3D printed,” said Scott Van Vliet, Relativity’s head of software engineering. “And what’s really cool is not only are we reducing the amount of parts and labor that go into building one of these vehicles over time, but we’re also reducing the complexity, we’re reducing the chance of failure when you reduce the part count, and you streamline the build process.”
Relativity says it can put together a Terran rocket in two months, compared to two years for some conventionally built ones. The speed and cost of making a prototype—say, for wind-tunnel testing—are reduced because you tell the printer to make a scaled-down model. There is less waste because the process is additive. And if something needs to be modified, you reprogram the 3D printer instead of slow, expensive retooling.
“If you walk into any rocket factory today other than ours,” said Josh Brost, the company’s head of business development, “you still will see hundreds of thousands of parts coming from thousands of vendors, and still being assembled using lots of touch labor and lots of big-fix tools.”
Terran 1, rated as capable of putting a 1,250-kilogram payload in low Earth orbit, is mainly intended as a test bed. Relativity has signed up a variety of future customers for satellite launches, but the first Terran 1 (“Terran” means “earthling”) will not carry a paying customer’s satellite. The first flight has been given the playful name “Good Luck, Have Fun”—GLHF for short. Eventually, if things are going well, Relativity will build a larger booster, called Terran R, which the company hopes will compete with the SpaceX Falcon 9 for launches of up to 20,000 kg. Relativity says the Terran R should be fully reusable, including the upper stage—something that other commercial launch companies have not accomplished. In current renderings, the rocket is, as the company puts it, “inspired by nature,” shaped to slice through the atmosphere as it ascends and comes back for recovery.
A number of Relativity’s top people came from Musk’s SpaceX or Jeff Bezos’s space company, Blue Origin, and, like Musk, they say their vision is a permanent presence on Mars. Brost calls it “the long-term North Star for us.” They say they can envision 3D-printing facilities someday on the Martian surface, fabricating much of what people from Earth would need to live there. “For that to happen,” says Brost, “you need to have manufacturing capabilities that are autonomous and incredibly flexible.”
Relativity’s fourth-generation Stargate 3D printer. Relativity
Just how Relativity will do all these things is a work in progress. The company says its 3D technology will help it work iteratively—finding mistakes as it goes, then correcting them as it prints the next rocket, and the next, and so on.
“In traditional manufacturing, you have to do a ton of work up front and have a lot of the design features done well ahead of time,” says Van Vliet. “You have to invest in fixed tooling that can often take years to build before you’ve actually developed an article for your launch vehicle. With 3D printing, additive manufacturing, we get to building something very, very quickly.”
The next step is to get the first rocket off the pad. Will it succeed? Brost says a key test will be getting through max q—the point of maximum dynamic pressure on the rocket as it accelerates through the atmosphere before the air around it thins out.
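Max q arises because dynamic pressure, q = ½ρv², grows with speed but falls as the atmosphere thins. A toy constant-acceleration vertical ascent through an exponential atmosphere (all numbers assumed for illustration, not Terran 1 flight data) shows the rise and fall:

```python
import math

# Dynamic pressure q = 0.5 * rho * v^2 during a toy vertical ascent at
# constant acceleration through an exponential atmosphere. Illustrative
# assumed numbers only; not Terran 1 performance data.

RHO0 = 1.225   # kg/m^3, sea-level air density
H = 8500.0     # m, atmospheric scale height
ACCEL = 20.0   # m/s^2, assumed net vertical acceleration

def dynamic_pressure(t):
    v = ACCEL * t                 # speed after t seconds
    h = 0.5 * ACCEL * t * t       # altitude after t seconds
    rho = RHO0 * math.exp(-h / H)
    return 0.5 * rho * v * v

# Scan the first two minutes of flight for the peak.
qs = [(dynamic_pressure(t), t) for t in range(1, 121)]
q_max, t_max = max(qs)
print(f"max q ~ {q_max / 1000:.0f} kPa at t ~ {t_max} s")
```

Early on, q climbs with the square of airspeed; past the peak, the exponentially thinning air wins, which is why surviving max q is the structural milestone Brost singles out.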
“If you look at history, at new space companies doing large rockets, there’s not a single one that’s done their first rocket on their first try. It would be quite an achievement if we were able to achieve orbit on our inaugural launch,” says Brost.
“I’ve been to many launches in my career,” he says, “and it never gets less exciting or nerve wracking to me.”
Match ID: 135 Score: 9.29 source: spectrum.ieee.org age: 20 days qualifiers: 9.29 nasa
NASA to Announce Major Eco-Friendly Aviation Project Update Thu, 12 Jan 2023 16:48 EST Media are invited to a news conference with NASA Administrator Bill Nelson and other agency leadership at 10 a.m. EST on Wednesday, Jan. 18, at NASA Headquarters in Washington. Match ID: 136 Score: 9.29 source: www.nasa.gov age: 21 days qualifiers: 9.29 nasa
NASA Says 2022 Is the Fifth Warmest Year on Record Thu, 12 Jan 2023 10:35 EST Earth’s average surface temperature in 2022 tied with 2015 as the fifth warmest on record, according to an analysis by NASA. Continuing the planet’s long-term warming trend, global temperatures in 2022 were 0.89 degrees Celsius (1.6 degrees Fahrenheit) above the average for the period… Match ID: 137 Score: 9.29 source: www.nasa.gov age: 21 days qualifiers: 9.29 nasa
NASA Says 2022 Fifth Warmest Year on Record, Warming Trend Continues Thu, 12 Jan 2023 09:46 EST Earth's average surface temperature in 2022 tied with 2015 as the fifth warmest on record, according to an analysis by NASA. Continuing the planet's long-term warming trend, global temperatures in 2022 were 1.6 degrees Fahrenheit (0.89 degrees Celsius) above the average for NASA's baseline period (1951-1980), scientists from NASA's Goddard Institut Match ID: 138 Score: 9.29 source: www.nasa.gov age: 21 days qualifiers: 9.29 nasa
NASA, NOAA to Announce 2022 Global Temperatures, Climate Conditions Tue, 10 Jan 2023 10:22 EST Climate researchers from NASA and the National Oceanic and Atmospheric Administration (NOAA) will release their annual assessments of global temperatures and discuss the major climate trends of 2022 during a media briefing at 11 a.m. EST Thursday, Jan. 12. Match ID: 139 Score: 9.29 source: www.nasa.gov age: 23 days qualifiers: 9.29 nasa
An experimental, potentially revolutionary all-electric airplane designed by NASA will soon be taking its first test flight, which will mark a major milestone for battery-powered aviation. However, the program already appears destined to fall short of its lofty goal to exploit the unique features of electric propulsion to rewrite the design rules for modern aircraft. Its time and funding have nearly run out.
Part of the agency’s storied X-plane program, the X-57 Maxwell set out with the ambitious goal of tackling two grand challenges in aerospace engineering simultaneously. Not only did it aim to show that an airplane could be powered entirely by electricity, it also planned to demonstrate the significant gains in efficiency and performance that could be made by switching from two large engines to many smaller ones evenly distributed across the wings—a configuration known as a “blown wing.”
The plan was to demonstrate both of these propositions through a series of increasingly advanced test vehicles. Ultimately though, the complexity of the first challenge, compounded by disruptions caused by the COVID-19 pandemic, saw timelines repeatedly pushed back. As a result, the project’s leaders say it no longer has the funding to progress to the latter stages of the program.
“It turned out to be actually a pretty tall order to work through all of those airworthiness, and qualification, and design challenges.” —Sean Clarke, NASA
The first iteration of the X-57, a modified Tecnam P2006T light aircraft whose gas-powered engines have been replaced with electric motors, will take flight this coming spring or possibly summer. (As of early January, NASA is still unclear as to precisely when that maiden voyage will be. NASA officials Spectrum contacted could only narrow the timeframe down to “first half of 2023.”) That will be a significant achievement, making the X-57 one of just a handful of electrically powered aircraft to get off the ground. But the team says it plans to wrap up flight testing by the end of the year and will not go on to build more advanced designs featuring novel wing configurations and distributed propulsion, such as the blown wing.
“We tried to do a very ambitious thing. Trying to do a new type of airframe and a new motor project is not very typical, because those are both very, very challenging endeavors,” says Nick Borer, deputy principal investigator for the X-57 project at the NASA Langley Research Center. “The agency funds a lot of different things and they’ve been very generous with what they’ve provided to us. But there are priorities at the top and eventually, you’ve got to finish up.”
The project’s ultimate goal was to take advantage of the benefits of electric propulsion to reimagine the design of aircraft wings. For instance, in the case of that blown wing: the large number of motors and props on the leading edge force air at high speed over the wing, which can generate significant lift even at low airspeeds. This makes it possible to take off from shorter runways and can also allow you to shrink the size of the wing, reducing drag and boosting cruise efficiency.
The design is difficult to achieve with conventional combustion engines, because they are relatively heavy and become increasingly inefficient as they are scaled down in size. The same is not true of electric motors though, which means it’s relatively simple to switch from several large motors to many smaller ones distributed along the wing.
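The tradeoff can be read off the standard lift equation, L = ½ρv²SC_L: for a fixed weight and takeoff speed, the required wing area scales as 1/C_L, so raising the usable lift coefficient by blowing the wing lets the wing shrink in proportion. The weight, speed, and lift coefficients below are assumptions for illustration, not X-57 figures; note that a 2.5× gain in C_L yields a wing 40 percent of the baseline size, the same proportion quoted for the final design.

```python
# Lift equation: L = 0.5 * rho * v^2 * S * CL. Holding weight and takeoff
# speed fixed, required wing area scales as 1/CL, so blowing the wing
# (raising CL) lets the wing shrink. All numbers are assumed, not X-57 data.

RHO = 1.225  # kg/m^3, sea-level air density

def required_wing_area(weight_n, v_ms, cl):
    """Wing area at which lift exactly balances weight at speed v_ms."""
    return weight_n / (0.5 * RHO * v_ms * v_ms * cl)

weight = 13_000.0  # N, roughly a light twin at takeoff (assumed)
v_takeoff = 30.0   # m/s (assumed)

baseline = required_wing_area(weight, v_takeoff, cl=1.5)  # unblown wing
blown = required_wing_area(weight, v_takeoff, cl=3.75)    # 2.5x CL from blowing

print(f"baseline: {baseline:.1f} m^2, blown: {blown:.1f} m^2, "
      f"ratio: {blown / baseline:.2f}")
```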
The current iteration of the X-57, pictured here, is powered by two electric motors and is based at the NASA Armstrong Flight Research Center in California. Carla Thomas/NASA
The final design iteration of the X-57 had six small electrically powered propellers across the front of each wing. The wings themselves would be only 40 percent of the size of a conventional P2006T wing. The design also featured two larger motors mounted on the tips of each wing, which would further reduce drag by counteracting the vortices normally produced at the end of each wing. Because the high lift generated by the smaller propellers along the leading edge would only be needed at take-off, these were designed to fold up once at cruising altitude to further reduce drag.
“The whole idea of an X-plane is to do something that has never been done before, and so I think it is just normal to expect that there is a learning curve.” —Sergio Cecutta, SMG Consulting
Altogether these aerodynamic innovations would slash the plane’s power consumption at cruise by as much as a third, according to Borer. Electric motors are also about three times more efficient in terms of their power-to-weight ratio compared to gasoline-burning ones, he adds, so combined these design changes were expected to lead to a roughly fivefold reduction in energy requirements while flying at cruise speeds of around 280 kilometers per hour.
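One plausible way to reconcile those two factors, as rough arithmetic rather than NASA’s actual accounting:

```python
# Rough arithmetic behind the "roughly fivefold" figure (one plausible
# reading, not NASA's accounting): aerodynamics cuts cruise power by about
# a third, and the electric powertrain is treated as ~3x as efficient.
aero_factor = 1 - 1 / 3      # fraction of cruise power remaining
powertrain_factor = 1 / 3    # fraction of energy remaining at 3x efficiency
combined = aero_factor * powertrain_factor
print(f"energy vs. baseline: {combined:.3f} (about 1/{1 / combined:.1f})")
```

That works out to roughly 1/4.5 of the baseline energy, consistent with the “roughly fivefold” reduction quoted.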
Switching to electric propulsion turned out to be more complicated than envisioned. The team had to completely redesign their battery packs in 2017 to avoid the risk of catastrophic fires. The high voltages and power levels required for electric aviation also posed significant complications, says Borer, requiring several iterations of the systems designed to protect components from electromagnetic interference.
Early on in the project they also found that state-of-the-art transistors able to withstand high power levels couldn’t tolerate the vibrations and temperatures involved in flight. This was resolved only recently by switching to a newer generation of silicon carbide MOSFET modules, says Sean Clarke, principal investigator for the X-57 project at the NASA Armstrong Flight Research Center in California. “It turned out to be actually a pretty tall order to work through all of those airworthiness, and qualification, and design challenges,” he says.
These delays mean that the project’s more ambitious goals may not come to fruition, but Borer hopes that others will be able to pick up where the team left off. The team has been regularly publishing its findings and data as it has progressed, he says. It is also actively contributing to standards for electric aviation and working with regulators to help develop aircraft certification processes. “We’re pushing out everything that we can,” says Borer.
The X-57’s custom-made battery packs installed in the aircraft’s cabin provide all the aircraft’s power, rather than the JET A/A-1 fuel that powers most aviation today. Lauren Hughes/NASA
This information sharing has already borne fruit. NASA’s main subcontractor for the project, California-based Empirical Systems Aerospace, has been able to commercialize the X-57’s battery pack design, and the agency has a technology-transfer agreement with Virginia-based electric-aircraft designer Electra, which involved the team sharing information on its aerodynamic innovations. The company that NASA initially contracted to build the electric motors, Joby Aviation, has since gone on to develop its own electric vertical take-off and landing (eVTOL) vehicle and is today one of the leaders in the industry.
This is the beauty of a publicly funded effort like the X-57, says Sergio Cecutta, founder and partner at SMG Consulting, who covers the electric-aviation industry. Unlike a private development effort, he says, all of the advances and lessons that have come out of the project will be in the public domain and can spread throughout the industry. And while it may not have achieved its most ambitious goals, Cecutta says it has done exactly what was intended, which was to remove some of the roadblocks holding back electric aviation.
“The whole idea of an X-plane is to do something that has never been done before, and so I think it is just normal to expect that there is a learning curve,” he says. “In the end, you want to lay the groundwork for the industry to become successful, and I think on that metric, the X-57 has been a successful project.”
Match ID: 140 Score: 9.29 source: spectrum.ieee.org age: 26 days qualifiers: 9.29 nasa
NASA Awards Space and Earth Sciences Data Analysis-V Contract Tue, 03 Jan 2023 15:28 EST NASA has awarded the Space and Earth Sciences Data Analysis-V (SESDA-V) contract to ADNET Systems, Inc. of Bethesda, Maryland, to provide Earth and Space Science research and development at the agency’s Goddard Space Flight Center in Greenbelt, Maryland. Match ID: 141 Score: 9.29 source: www.nasa.gov age: 30 days qualifiers: 9.29 nasa
Recently, Andreas Mogensen, now getting ready for his ‘Huginn’ mission to the ISS in 2023, stopped by ESA’s ESOC mission control centre in Darmstadt, Germany, to meet with some of the experts who keep our satellites flying.
Andreas usually works at NASA's Johnson Space Center in Houston as an ISS ‘capcom’, and we don’t often see him in Europe. A few months back, when he returned to Germany for some training at ESA’s Astronaut Centre in Cologne, we seized the opportunity to ask him if he’d like to stop over in Darmstadt for a look behind the scenes at mission control, and he immediately answered, ‘yes’!
Andreas studied aeronautical engineering with a focus on ‘guidance, navigation and control of spacecraft’, and we thought he’d be delighted to meet the teams at mission control doing precisely that sort of work for our robotic missions.
We figured he’d also enjoy meeting colleagues from our Space Safety programme, especially the ones working on space debris and space weather, as these are crucial areas that influence the daily life of astronauts on the ISS.
Andreas met with Bruno Sousa and Julia Schwartz, who help keep Solar Orbiter healthy and on track on its mission to gather the closest-ever images of the Sun, observe the solar wind and our Star’s polar regions, helping unravel the mysteries of the solar cycle.
He also met with Stijn Lemmens, one of the analysts keeping tabs on the space debris situation in orbit, and Melanie Heil, a scientist helping ESA understand how space weather and our active Sun can affect missions in orbit and crucial infrastructure – like power grids – on ground.
We hope you enjoy this lively and informative day at mission control as much as Andreas and the teams at ESOC did!
Match ID: 142 Score: 9.29 source: www.esa.int age: 42 days qualifiers: 9.29 nasa
NASA Awards Launch Services Contract for Sentinel-6B Mission Tue, 20 Dec 2022 15:34 EST NASA has selected Space Exploration Technologies (SpaceX) of Hawthorne, California, to provide launch services for the Sentinel-6B mission. Match ID: 143 Score: 9.29 source: www.nasa.gov age: 44 days qualifiers: 9.29 nasa
NASA Awards Modification to Refurbish Instrument for NOAA’s JPSS Fri, 16 Dec 2022 16:00 EST On behalf of the National Oceanic and Atmospheric Administration (NOAA), NASA has awarded a sole source contract modification to Northrop Grumman of Azusa, California, for the Joint Polar Satellite System (JPSS) Advanced Technology Microwave Sounder (ATMS) Engineering Development Unit (EDU) refurbishment. Match ID: 144 Score: 9.29 source: www.nasa.gov age: 48 days qualifiers: 9.29 nasa
NASA Launches International Mission to Survey Earth’s Water Fri, 16 Dec 2022 06:25 EST A satellite built for NASA and the French space agency Centre National d’Études Spatiales (CNES) to observe nearly all the water on our planet’s surface lifted off on its way to low-Earth orbit at 3:46 a.m. PST on Friday. Match ID: 145 Score: 9.29 source: www.nasa.gov age: 48 days qualifiers: 9.29 nasa
NASA Awards Contract to Maintain Webb Telescope Operations Thu, 15 Dec 2022 16:02 EST NASA has selected Northrop Grumman Systems Corporation of Redondo Beach, California, to support the James Webb Space Telescope Phase E – Operations and Sustainment contract. Match ID: 146 Score: 9.29 source: www.nasa.gov age: 49 days qualifiers: 9.29 nasa
NASA, AST & Science Sign Joint Spaceflight Safety Agreement Thu, 15 Dec 2022 16:00 EST NASA and AST & Science, a subsidiary of AST SpaceMobile, Inc., have signed a joint agreement to formalize both parties’ strong interest in the sharing of information to maintain and improve space safety. Match ID: 147 Score: 9.29 source: www.nasa.gov age: 49 days qualifiers: 9.29 nasa
This image was taken on 5 December, flight day 20, after the spacecraft completed a 3 minute 27 second burn to swing around the Moon and back to Earth.
Just before the burn, Orion made its second and final close approach to the Moon at 17:43 CET (16:43 GMT), passing 130 km above the lunar surface.
The burn, which used the European Service Module’s main engine, changed the velocity of the spacecraft by about 1054 km/h. It was the final major engine burn of the Artemis I mission.
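A quick sanity check on those burn figures, using simple kinematics and only the two numbers quoted above:

```python
# Back-of-the-envelope check on the return flyby burn: a 1054 km/h velocity
# change delivered over a 3 min 27 s burn implies the average acceleration
# computed below. Only the two quoted figures are used as inputs.
delta_v_kmh = 1054.0
burn_s = 3 * 60 + 27             # 207 seconds
delta_v_ms = delta_v_kmh / 3.6   # ~292.8 m/s
accel = delta_v_ms / burn_s
print(f"mean acceleration ~ {accel:.2f} m/s^2")
```

That comes to about 1.4 m/s², a gentle push of roughly a seventh of Earth gravity.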
Orion is due to splashdown in the Pacific Ocean on 11 December to complete the 25-day Artemis I mission.
“Orion is heading home!” said NASA administrator Bill Nelson. “The lunar flyby enabled the spacecraft to harness the Moon’s gravity and slingshot it back toward Earth for splashdown. Next up, reentry!”
Sadly, but necessarily, the European Service Module’s contribution to Artemis ends 40 minutes before splashdown. Together with the Crew Module Adapter these elements of the Orion spacecraft will detach from the Crew Module and burn up harmlessly in the atmosphere, leaving Orion on its own for the last crucial minutes to splashdown.
Find Artemis I mission updates and flight day logs on ESA’s Orion blog.
Match ID: 149 Score: 9.29 source: www.esa.int age: 56 days qualifiers: 9.29 nasa
A rocket built by Indian startup Skyroot has become the country’s first privately developed launch vehicle to reach space, following a successful maiden flight earlier today. The suborbital mission is a major milestone for India’s private space industry, say experts, though more needs to be done to nurture the fledgling sector.
In the longer run, India’s space industry has ambitions of capturing a significant chunk of the global launch market.
Pawan Kumar Chandana, cofounder of the Hyderabad-based startup, says the success of the launch is a major victory for India’s nascent space industry, but the buildup to the mission was nerve-racking. “We were pretty confident on the vehicle, but, as you know, rockets are very notorious for failure,” he says. “Especially in the last 10 seconds of countdown, the heartbeat was racing up. But once the vehicle had crossed the launcher and then went into the stable trajectory, I think that was the moment of celebration.”
At just 6 meters (20 feet) long and weighing only around 550 kilograms (0.6 tonnes), the Vikram-S is not designed for commercial use. Today’s mission, called Prarambh, which means “the beginning” in Sanskrit, was designed to test key technologies that will be used to build the startup’s first orbital rocket, the Vikram I. The rocket will reportedly be capable of lofting as much as 480 kg to a 500-km altitude and is slated for a maiden launch next October.
Skyroot cofounder Pawan Kumar Chandana standing in front of the Vikram-S rocket at the Satish Dhawan Space Centre, on the east coast of India. Skyroot
In particular, the mission has validated Skyroot’s decision to go with a novel all-carbon fiber structure to cut down on weight, says Chandana. It also allowed the company to test 3D-printed thrusters, which were used for spin stabilization in Vikram-S but will power the upper stages of its later rockets. Perhaps the most valuable lesson, though, says Chandana, was the complexity of interfacing Skyroot’s vehicle with the launch infrastructure of the Indian Space Research Organisation (ISRO). “You can manufacture the rocket, but launching it is a different ball game,” he says. “That was a great learning experience for us and will really help us accelerate our orbital vehicle.”
Skyroot is one of several Indian space startups looking to capitalize on recent efforts by the Indian government to liberalize its highly regulated space sector. Due to the dual-use nature of space technology, ISRO has historically had a government-sanctioned monopoly on most space activities, says Rajeswari Pillai Rajagopalan, director of the Centre for Security, Strategy and Technology at the Observer Research Foundation think tank, in New Delhi. While major Indian engineering players like Larsen & Toubro and Godrej Aerospace have long supplied ISRO with components and even entire space systems, the relationship has been one of a supplier and vendor, she says.
But in 2020, Finance Minister Nirmala Sitharaman announced a series of reforms to allow private players to build satellites and launch vehicles, carry out launches, and provide space-based services. The government also created the Indian National Space Promotion and Authorisation Centre (InSpace), a new agency designed to act as a link between ISRO and the private sector, and affirmed that private companies would be able to take advantage of ISRO’s facilities.
The first launch of a private rocket from an ISRO spaceport is a major milestone for the Indian space industry, says Rajagopalan. “This step itself is pretty crucial, and it’s encouraging to other companies who are looking at this with a lot of enthusiasm and excitement,” she says. But more needs to be done to realize the government’s promised reforms, she adds. The Space Activities Bill that is designed to enshrine the country’s space policy in legislation has been languishing in draft form for years, and without regulatory clarity, it’s hard for the private sector to justify significant investments. “These are big, bold statements, but these need to be translated into actual policy and regulatory mechanisms,” says Rajagopalan.
Skyroot’s launch undoubtedly signals the growing maturity of India’s space industry, says Saurabh Kapil, associate director in PwC’s space practice. “It’s a critical message to the Indian space ecosystem, that we can do it, we have the necessary skill set, we have those engineering capabilities, we have those manufacturing or industrialization capabilities,” he says.
The Vikram-S rocket blasting off from the Satish Dhawan Space Centre, on the east coast of India. Skyroot
However, crossing this technical milestone is only part of the challenge, he says. The industry also needs to demonstrate a clear market for the kind of launch vehicles that companies like Skyroot are building. While private players are showing interest in launching small satellites for applications like agriculture and infrastructure monitoring, he says, these companies will be able to build sustainable businesses only if they are allowed to compete for more lucrative government and defense-sector contracts.
In the longer run, though, India’s space industry has ambitions of capturing a significant chunk of the global launch market, says Kapil. ISRO has already developed a reputation for both reliability and low cost—its 2014 mission to Mars cost just US $74 million, one-ninth the cost of a NASA Mars mission launched the same week. That is likely to translate to India’s private space industry, too, thanks to a considerably lower cost of skilled labor, land, and materials compared with those of other spacefaring nations, says Kapil. “The optimism is definitely there that because we are low on cost and high on reliability, whoever wants to build and launch small satellites is largely going to come to India,” he says.
Match ID: 150 Score: 9.29 source: spectrum.ieee.org age: 76 days qualifiers: 9.29 nasa
NASA’s Artemis I mission launched early in the predawn hours this morning, at 1:04 a.m. eastern time, carrying with it the hopes of a space program aiming now to land American astronauts back on the moon. The Orion spacecraft now on its way to the moon also carries with it a lot of CubeSat-size science. (As of press time, some satellites have even begun to tweet.)
And while the objective of Artemis I is to show that the launch system and spacecraft can make a trip to the moon and return safely to Earth, the mission is also a unique opportunity to send a whole spacecraft-load of science into deep space. In addition to the interior of the Orion capsule itself, there are enough nooks and crannies to handle a fair number of CubeSats, and NASA has packed as many experiments as it can into the mission. From radiation phantoms to solar sails to algae to a lunar surface payload, Artemis I has a lot going on.
Most of the variety of the science on Artemis I comes in the form of CubeSats, little satellites that are each the size of a large shoebox. The CubeSats are tucked snugly into berths inside the Orion stage adapter, which is the bit that connects the interim cryogenic propulsion stage to the ESA service module and Orion. Once the propulsion stage lifts Orion out of Earth orbit and pushes it toward the moon, the stage and adapter will separate from Orion, and the CubeSats will launch themselves.
Ten CubeSats rest inside the Orion stage adapter at NASA’s Kennedy Space Center. NASA KSC
While the CubeSats look identical when packed up, each one is unique in both hardware and software, with different destinations and mission objectives. There are 10 in total (three weren’t ready in time for launch, which is why there are a couple of empty slots in the image above).
While the CubeSats head off to do their own thing, the inside of the Orion capsule itself will be the temporary home of a trio of mannequins. The first, a male-bodied version provided by NASA, is named Commander Moonikin Campos, after NASA electrical engineer Arturo Campos, who wrote the procedures that allowed the Apollo 13 command module to steal power from the lunar module’s batteries, one of many actions that saved the Apollo 13 crew.
Moonikin Campos prepares for placement in the Orion capsule. NASA
Moonikin Campos will spend the mission in the Orion commander’s seat, wearing an Orion crew survival system suit. Essentially itself a spacecraft, the suit is able to sustain its occupant for up to six days if necessary. Moonikin Campos’s job will be to pretend to be an astronaut, and sensors inside him will measure radiation, acceleration, and vibration to help NASA prepare to launch human astronauts in the next Artemis mission.
Helga and Zohar in place on the flight deck of the Orion spacecraft. NASA/DLR
Accompanying Moonikin Campos are two female-bodied mannequins, named Helga and Zohar, developed by the German Aerospace Center (DLR) along with the Israel Space Agency. These are more accurately called “anthropomorphic phantoms,” and their job is to provide a detailed recording of the radiation environment inside the capsule over the course of the mission. The phantoms are female because women have more radiation-sensitive tissue than men. Both Helga and Zohar have over 6,000 tiny radiation detectors placed throughout their artificial bodies, but Zohar will be wearing an AstroRad radiation protection vest to measure how effective it is.
NASA’s Biology Experiment-1 is transferred to the Orion team. NASA/KSC
The final science experiment to fly onboard Orion is NASA’s Biology Experiment-1. The experiment is really just seeing what time in deep space does to some specific kinds of biology, so all that has to happen is for Orion to successfully haul some packages of sample tubes around the moon and back. Samples include:
Plant seeds to characterize how spaceflight affects nutrient stores
Photosynthetic algae to identify genes that contribute to its survival in deep space
Aspergillus fungus to investigate radioprotective effects of melanin and DNA damage response
Yeast used as a model organism to identify genes that enable adaptations to conditions in both low Earth orbit and deep space
There is some concern that because of the extensive delays with the Artemis launch, the CubeSats have been sitting so long that their batteries may have run down. Some of the CubeSats could be recharged, but for others, recharging was judged to be so risky that they were left alone. Even for CubeSats that don’t start right up, though, it’s possible that after deployment, their solar panels will be able to get them going. But at this point, there’s still a lot of uncertainty, and the CubeSats’ earthbound science teams are now pinning their hopes on everything going well after launch.
For the rest of the science payloads, success mostly means Orion returning to Earth safe and sound, which will also be a success for the Artemis I mission as a whole. And assuming it does so, there will be a lot more science to come.
Match ID: 151 Score: 9.29 source: spectrum.ieee.org age: 78 days qualifiers: 9.29 nasa
Experts Available to Discuss NASA Webb Telescope Science Results Tue, 15 Nov 2022 16:41 EST Experts from NASA and other institutions will be available by teleconference at 11 a.m. EST on Thursday, Nov. 17, to answer media questions about early science results from the agency’s James Webb Space Telescope. Match ID: 152 Score: 9.29 source: www.nasa.gov age: 79 days qualifiers: 9.29 nasa
Elon Musk, step aside. You may be the richest rich man in the space business, but you’re not first. Musk’s SpaceX corporation is a powerful force, with its weekly launches and visions of colonizing Mars. But if you want a broader view of how wealthy entrepreneurs have shaped space exploration, you might want to look at George Ellery Hale, James Lick, William McDonald or—remember this name—John D. Hooker.
All this comes up now because SpaceX, joining forces with the billionaire Jared Isaacman, has made what sounds at first like a novel proposal to NASA: It would like to see if one of the company’s Dragon spacecraft can be sent to service the fabled, invaluable (and aging) Hubble Space Telescope, last repaired in 2009.
Private companies going to the rescue of one of NASA’s crown jewels? NASA’s mantra in recent years has been to let private enterprise handle the day-to-day of space operations—communications satellites, getting astronauts to the space station, and so forth—while pure science, the stuff that makes history but not necessarily money, remains the province of government. Might that model change?
“We’re working on crazy ideas all the time,” said Thomas Zurbuchen, NASA’s space science chief. “Frankly, that’s what we’re supposed to do.”
It’s only a six-month feasibility study for now; no money will change hands between business and NASA. But Isaacman, who made his fortune in payment-management software before turning to space, suggested that if a Hubble mission happens, it may lead to other things. “Alongside NASA, exploration is one of many objectives for the commercial space industry,” he said on a media teleconference. “And probably one of the greatest exploration assets of all time is the Hubble Space Telescope.”
So it’s possible that at some point in the future, there may be a SpaceX Dragon, perhaps with Isaacman as a crew member, setting out to grapple the Hubble, boost it into a higher orbit, maybe even replace some worn-out components to lengthen its life.
Aerospace companies say privately mounted repair sounds like a good idea. So good that they’ve proposed it already.
The Chandra X-ray telescope, as photographed by space-shuttle astronauts after they deployed it in July 1999. It is attached to a booster that moved it into an orbit 10,000 by 100,000 kilometers from Earth. NASA
Northrop Grumman, one of the United States’ largest aerospace contractors, has quietly suggested to NASA that it might service one of the Hubble’s sister telescopes, the Chandra X-ray Observatory. Chandra was launched into Earth orbit by the space shuttle Columbia in 1999 (Hubble was launched from the shuttle Discovery in 1990), and the two often complement each other, observing the same celestial phenomena at different wavelengths.
As in the case of the SpaceX/Hubble proposal, Northrop Grumman’s Chandra study is at an early stage. But there are a few major differences. For one, Chandra was assembled by TRW, a company that has since been bought by Northrop Grumman. And another company subsidiary, SpaceLogistics, has been sending what it calls Mission Extension Vehicles (MEVs) to service aging Intelsat communications satellites since 2020. Two of these robotic craft have launched so far. The MEVs act like space tugs, docking with their target satellites to provide them with attitude control and propulsion if their own systems are failing or running out of fuel. SpaceLogistics says it is developing a next-generation rescue craft, which it calls a Mission Robotic Vehicle, equipped with an articulated arm to add, relocate, or possibly repair components on orbit.
“We want to see if we can apply this to space-science missions,” says Jon Arenberg, Northrop Grumman’s chief mission architect for science and robotic exploration, who worked on Chandra and, later, the James Webb Space Telescope. He says a major issue for servicing is the exacting specifications needed for NASA’s major observatories; Chandra, for example, records the extremely short wavelengths of X-ray radiation (0.01–10 nanometers).
“We need to preserve the scientific integrity of the spacecraft,” he says. “That’s an absolute.”
But so far, the company says, a mission seems possible. NASA managers have listened receptively. And Northrop Grumman says a servicing mission could be flown for a fraction of the cost of a new telescope.
New telescopes need not be government projects. In fact, NASA’s chief economist, Alexander MacDonald, argues that almost all of America’s greatest observatories were privately funded until Cold War politics made government the major player in space exploration. That’s why this story began with names from the 19th and 20th centuries—Hale, Lick, and McDonald—to which we should add Charles Yerkes and, more recently, William Keck. These were arguably the Elon Musks of their times—entrepreneurs who made millions in oil, iron, or real estate before funding the United States’ largest telescopes. (Hale’s father manufactured elevators—highly profitable in the rebuilding after the Great Chicago Fire of 1871.) The most ambitious observatories, MacDonald calculated for his book The Long Space Age, were about as expensive back then as some of NASA’s modern planetary probes. None of them had very much to do with government.
To be sure, government will remain a major player in space for a long time. “NASA pays the cost, predominantly, of the development of new commercial crew vehicles, SpaceX’s Dragon being one,” MacDonald says. “And now that those capabilities exist, private individuals can also pay to utilize those capabilities.” Isaacman doesn’t have to build a spacecraft; he can hire one that SpaceX originally built for NASA.
“I think that creates a much more diverse and potentially interesting space-exploration future than we have been considering for some time,” MacDonald says.
So put these pieces together: Private enterprise has been a driver of space science since the 1800s. Private companies are already conducting on-orbit satellite rescues. NASA hasn’t said no to the idea of private missions to service its orbiting observatories.
And why does John D. Hooker’s name matter? In 1906, he agreed to put up US $45,000 (about $1.4 million today) to make the mirror for a 100-inch reflecting telescope at Mount Wilson, Calif. One astronomer made the Hooker Telescope famous by using it to determine that the universe, full of galaxies, was expanding.
The astronomer’s name was Edwin Hubble. We’ve come full circle.
Match ID: 154 Score: 9.29 source: spectrum.ieee.org age: 105 days qualifiers: 9.29 nasa
DART’s Impact Changed an Asteroid’s Motion in Space Tue, 11 Oct 2022 13:28 EDT Analysis of the data obtained over the past two weeks by NASA’s Double Asteroid Redirection Test (DART) investigation team shows that the spacecraft’s kinetic impact with its target asteroid, Dimorphos, successfully altered the asteroid’s orbit. This marks the first Match ID: 155 Score: 9.29 source: www.nasa.gov age: 114 days qualifiers: 9.29 nasa
NASA Confirms DART Mission Impact Changed Asteroid’s Motion in Space Tue, 11 Oct 2022 13:12 EDT Analysis of data obtained over the past two weeks by NASA’s Double Asteroid Redirection Test (DART) investigation team shows the spacecraft's kinetic impact with its target asteroid, Dimorphos, successfully altered the asteroid’s orbit. This marks humanity’s first time purposely changing the motion of a celestial object and the first full-scale dem Match ID: 156 Score: 9.29 source: www.nasa.gov age: 114 days qualifiers: 9.29 nasa
NASA to Provide Update on DART, World’s First Planetary Defense Test Fri, 07 Oct 2022 15:34 EDT NASA will host a media briefing at 2 p.m. EDT, Tuesday, Oct. 11, to discuss the agency’s Double Asteroid Redirection Test (DART) mission and its intentional collision with its target asteroid, Dimorphos. Match ID: 157 Score: 9.29 source: www.nasa.gov age: 118 days qualifiers: 9.29 nasa
This sponsored article is brought to you by Master Bond.
Master Bond UV22DC80-1 is a nanosilica-filled, dual-cure, epoxy-based system. Nanosilica-filled epoxy formulations are designed to further improve performance and processing properties.
The specific filler will play a crucial role in determining key parameters such as viscosity, flow, aging characteristics, strength, shrinkage, hardness, and exotherm. As a dual curing system, UV22DC80-1 cures readily upon exposure to UV light, and will cross link in shadowed out areas when heat is added.
See Master Bond's UV22DC80-1 in Action
Dual cure systems are effective for rapidly fixturing parts with the UV portion of the cure, and then concluding the process by adding heat. Watch this video to see a dual cured epoxy in action.
This compound features exceptionally low shrinkage upon cure and outstanding dimensional stability, and it resists abrasion. It is not oxygen inhibited. It withstands chemicals such as acids, bases, fuels, and solvents. It is electrically insulative, with a volume resistivity greater than 10¹⁴ ohm-cm. It is optically clear, with a refractive index of 1.52.
The low viscosity ranges from 500 cps to 3500 cps. The temperature serviceability extends from -100°F to 300°F. UV22DC80-1 bonds well to metals, ceramics, glass, rubber, and many plastics. It passes NASA low outgassing certification and is used in high tech applications including aerospace, optical and opto-electronics.
Match ID: 158 Score: 9.29 source: spectrum.ieee.org age: 121 days qualifiers: 9.29 nasa
NASA’s DART Mission Hits Asteroid in First-Ever Planetary Defense Test Mon, 26 Sep 2022 20:09 EDT After 10 months flying in space, NASA’s Double Asteroid Redirection Test (DART) – the world’s first planetary defense technology demonstration – successfully impacted its asteroid target on Monday, the agency’s first attempt to move an asteroid in space. Match ID: 159 Score: 9.29 source: www.nasa.gov age: 129 days qualifiers: 9.29 nasa
Celebrate 'International Observe the Moon Night' with NASA Fri, 23 Sep 2022 10:00 EDT The public is invited to participate in NASA’s celebration of "International Observe the Moon Night" on Saturday, Oct. 1. Match ID: 160 Score: 9.29 source: www.nasa.gov age: 132 days qualifiers: 9.29 nasa
NASA to Host Briefing on Perseverance Mars Rover Mission Operations Mon, 12 Sep 2022 09:49 EDT NASA will host a briefing at 11:30 a.m. EDT (8:30 a.m. PDT) on Thursday, Sept. 15, at the agency’s Jet Propulsion Laboratory in Southern California to provide highlights from the first year and a half of the Perseverance rover’s exploration of Mars. Match ID: 161 Score: 9.29 source: www.nasa.gov age: 143 days qualifiers: 9.29 nasa
NASA Invites the Press to the First Planetary Defense Test Tue, 23 Aug 2022 11:47 EDT NASA’s Double Asteroid Redirection Test (DART) mission, the world’s first to test a technology for defending Earth from potential asteroid or comet hazards, will impact its target, an asteroid that poses no threat to Earth, at 7:14 pm EDT on Monday the 26 Match ID: 162 Score: 9.29 source: www.nasa.gov age: 163 days qualifiers: 9.29 nasa
NASA Administrator Statement on Agency Authorization Bill Thu, 28 Jul 2022 15:22 EDT NASA Administrator Bill Nelson released this statement Thursday following approval by the U.S. Congress for the NASA Authorization Act of 2022, which is part of the Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022. Match ID: 163 Score: 9.29 source: www.nasa.gov age: 189 days qualifiers: 9.29 nasa
NASA Administrator, Deputy to Attend Farnborough Airshow Fri, 15 Jul 2022 16:13 EDT NASA Administrator Bill Nelson and Deputy Administrator Pam Melroy will attend the Farnborough International Airshow in the United Kingdom on Monday, July 18. Match ID: 164 Score: 9.29 source: www.nasa.gov age: 202 days qualifiers: 9.29 nasa
NASA to Industry: Let’s Develop Flight Tech to Reduce Carbon Emissions Wed, 29 Jun 2022 14:25 EDT NASA announced Wednesday the agency is seeking partners to develop technologies needed to shape a new generation of lower-emission, single-aisle airliners that passengers could see in airports in the 2030s. Match ID: 165 Score: 9.29 source: www.nasa.gov age: 218 days qualifiers: 9.29 nasa
In the latest push for nuclear power in space, the Pentagon’s Defense Innovation Unit (DIU) awarded a contract in May to Seattle-based Ultra Safe Nuclear to advance its nuclear power and propulsion concepts. The company is making a soccer ball–size radioisotope battery it calls EmberCore. The DIU’s goal is to launch the technology into space for demonstration in 2027.
Ultra Safe Nuclear’s system is intended to be lightweight, scalable, and usable as both a propulsion source and a power source. It will be specifically designed to give small-to-medium-size military spacecraft the ability to maneuver nimbly in the space between Earth orbit and the moon. The DIU effort is part of the U.S. military’s recently announced plans to develop a surveillance network in cislunar space.
Besides speedy space maneuvers, the DIU wants to power sensors and communication systems without having to worry about solar panels pointing in the right direction or batteries having enough charge to work at night, says Adam Schilffarth, director of strategy at Ultra Safe Nuclear. “Right now, if you are trying to take radar imagery in Ukraine through cloudy skies,” he says, “current platforms can only take a very short image because they draw so much power.”
Radioisotope power sources are well suited for small, uncrewed spacecraft, adds Christopher Morrison, who is leading EmberCore’s development. Such sources rely on the radioactive decay of an element that produces energy, as opposed to nuclear fission, which involves splitting atomic nuclei in a controlled chain reaction to release energy. Heat produced by radioactive decay is converted into electricity using thermoelectric devices.
Radioisotopes have provided heat and electricity for spacecraft since 1961. The Curiosity and Perseverance rovers on Mars, and deep-space missions including Cassini, New Horizons, and Voyager all use radioisotope batteries that rely on the decay of plutonium-238, which is nonfissile—unlike plutonium-239, which is used in weapons and power reactors.
For EmberCore, Ultra Safe Nuclear has instead turned to medical isotopes such as cobalt-60 that are easier and cheaper to produce. The materials start out inert, and have to be charged with neutrons to become radioactive. The company encapsulates the material in a proprietary ceramic for safety.
Cobalt-60 has a half-life of five years (compared to plutonium-238’s 90 years), which is enough for the cislunar missions that the DOD and NASA are looking at, Morrison says. He says that EmberCore should be able to provide 10 times as much power as a plutonium-238 system, providing over 1 million kilowatt-hours of energy using just a few pounds of fuel. “This is a technology that is in many ways commercially viable and potentially more scalable than plutonium-238,” he says.
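The trade-off between the two isotopes comes down to exponential decay: a shorter half-life means more power per gram up front but a faster fall-off. A minimal sketch of that curve, using the half-lives quoted above but otherwise illustrative numbers (not Ultra Safe Nuclear's figures):

```python
from math import exp, log

def decay_power_fraction(half_life_years, t_years):
    """Fraction of initial thermal power remaining after t_years of decay."""
    return exp(-log(2) * t_years / half_life_years)

# After a 3-year cislunar mission, cobalt-60 (5-year half-life) retains
# about 66% of its initial power, while plutonium-238 (~90-year half-life)
# would still retain roughly 98%.
co60 = decay_power_fraction(5, 3)
pu238 = decay_power_fraction(90, 3)
```

The steeper cobalt-60 curve is acceptable for missions on the roughly five-year timescale the DOD and NASA are considering, which is the point Morrison makes above.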
One downside of the medical isotopes is that they can produce high-energy X-rays in addition to heat. So Ultra Safe Nuclear wraps the fuel with a radiation-absorbing metal shield. But in the future, the EmberCore system could be designed for scientists to use the X-rays for experiments. “They buy this heater and get an X-ray source for free,” says Schilffarth. “We’ve talked with scientists who right now have to haul pieces of lunar or Martian regolith up to their sensor because the X-ray source is so weak. Now we’re talking about a spotlight that could shine down to do science from a distance.”
Ultra Safe Nuclear’s contract is one of two awarded by the DIU—which aims to speed up the deployment of commercial technology through military use—to develop nuclear power and propulsion for spacecraft. The other contract was awarded to Avalanche Energy, which is making a lunchbox-size fusion device it calls an Orbitron. The device will use electrostatic fields to trap high-speed ions in slowly changing orbits around a negatively charged cathode. Collisions between the ions can result in fusion reactions that produce energetic particles.
Both companies will use nuclear energy to power high-efficiency electric propulsion systems. Electric propulsion technologies such as ion thrusters, which use electromagnetic fields to accelerate ions and generate thrust, are more efficient than chemical rockets, which burn fuel. Solar panels typically power the ion thrusters that satellites use today to change their position and orientation. Schilffarth says that the higher power from EmberCore should give a greater velocity change of 10 kilometers per second in orbit than today’s electric propulsion systems.
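A velocity change of that magnitude follows from the Tsiolkovsky rocket equation: delta-v scales with specific impulse, which is where ion thrusters far outclass chemical engines. A sketch with illustrative specific-impulse and mass figures of my own (not numbers from the article):

```python
from math import log

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_seconds, m0_kg, mf_kg):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_seconds * G0 * log(m0_kg / mf_kg)

# Same wet/dry mass ratio, very different engines: an ion thruster at
# Isp = 3000 s versus a chemical engine at Isp = 320 s.
dv_ion = delta_v(3000, 500, 350)   # ~10.5 km/s
dv_chem = delta_v(320, 500, 350)   # ~1.1 km/s
```

For a fixed mass ratio, delta-v is directly proportional to specific impulse, which is why pairing a high-power nuclear source with electric propulsion is attractive for nimble cislunar maneuvering.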
Ultra Safe Nuclear is also one of three companies developing nuclear fission thermal propulsion systems for NASA and the Department of Energy. Meanwhile, the Defense Advanced Research Projects Agency (DARPA) is seeking companies to develop a fission-based nuclear thermal rocket engine, with demonstrations expected in 2026.
This article appears in the August 2022 print issue as “Spacecraft to Run on Radioactive Decay.”
Match ID: 166 Score: 9.29 source: spectrum.ieee.org age: 238 days qualifiers: 9.29 nasa
NASA Supports Small Business Research to Power Future Exploration Thu, 26 May 2022 14:01 EDT NASA has selected hundreds of small businesses and dozens of research institutions to develop technology to help drive the future of space exploration, ranging from novel sensors and electronics to new types of software and cutting-edge materials. Match ID: 167 Score: 9.29 source: www.nasa.gov age: 252 days qualifiers: 9.29 nasa
NASA’s TESS Tunes into an All-sky ‘Symphony’ of Red Giant Stars Wed, 04 Aug 2021 17:00 EDT Using NASA’s Transiting Exoplanet Survey Satellite, astronomers have identified a vast collection of pulsating red giant stars that will help us explore our galactic neighborhood. Match ID: 168 Score: 9.29 source: www.nasa.gov age: 547 days qualifiers: 9.29 nasa
Planetary Sleuthing Finds Triple-Star World Mon, 11 Jan 2021 13:40 EST Years after its detection, astronomers have confirmed a planet called KOI-5Ab orbiting in a triple-star system with a skewed configuration. Match ID: 169 Score: 9.29 source: www.nasa.gov age: 752 days qualifiers: 9.29 nasa
NASA Awards SETI Institute Contract for Planetary Protection Support Fri, 10 Jul 2020 12:04 EDT NASA has awarded the SETI Institute in Mountain View, California, a contract to support all phases of current and future planetary protection missions to ensure compliance with planetary protection standards. Match ID: 170 Score: 9.29 source: www.nasa.gov age: 937 days qualifiers: 9.29 nasa
Imagining Another Earth Thu, 28 May 2020 10:27 EDT This artist's concept shows exoplanet Kepler-1649c orbiting around its host red dwarf star. Match ID: 171 Score: 9.29 source: www.nasa.gov age: 980 days qualifiers: 9.29 nasa
AAS Names 29 NASA-Affiliated Legacy Fellows Thu, 30 Apr 2020 09:00 EDT Twenty-nine scientists working at or affiliated with NASA have been named Fellows of the American Astronomical Society (AAS), the major organization of professional astronomers in North America. Match ID: 173 Score: 9.29 source: www.nasa.gov age: 1008 days qualifiers: 9.29 nasa
Rahman, a power expert and professor of electrical and computer engineering at Virginia Tech, is the former chair of the IEEE ad hoc committee on climate change. The committee was formed last year to coordinate the organization’s response to the global crisis.
About one-third of emissions globally are produced through electricity generation, and Rahman said his mission is to help reduce that amount through engineering solutions.
At COP27, he said that even though the first legally binding international treaty on climate change, known as the Paris Agreement, was adopted nearly a decade ago, countries have yet to come to a consensus on how to stop burning fossil fuels, among other issues. Some continue to burn coal, for example, because there are no other economically feasible choices for them.
“We as technologists from IEEE say, ‘If you keep to your positions, you’ll never get an agreement,’” he said. “We have come to offer this six-point portfolio of solutions that everybody can live with. We want to be a solution partner so we can have parties at the table to help solve this problem of high carbon emissions globally.”
The solutions Rahman outlined were the use of proven methods that reduce electricity usage, making coal plants more efficient, using hydrogen and other storage solutions, promoting more renewables, installing new types of nuclear reactors, and encouraging cross-border power transfers.
One action is to use less electricity, Rahman said, noting that dimming lights by 20 percent in homes, office buildings, hotels, and schools could save 10 percent of electricity. Most people wouldn’t even notice the difference in brightness, he said.
Another is switching to LEDs, which use at least 75 percent less energy than incandescent bulbs. LED bulbs cost about five times more, but they last longer, he said. He called on developed countries to provide financial assistance to developing nations to help them replace all their incandescent bulbs with LEDs.
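The economics of that swap are easy to sanity-check. A rough payback calculation, using assumed prices and usage patterns rather than figures from the talk:

```python
def payback_years(led_price, incandescent_price, watts_saved,
                  hours_per_day, price_per_kwh):
    """Years until an LED's energy savings cover its extra purchase cost."""
    extra_cost = led_price - incandescent_price
    yearly_savings = watts_saved / 1000 * hours_per_day * 365 * price_per_kwh
    return extra_cost / yearly_savings

# Assumed, not from the talk: a $5 LED replacing a $1 incandescent,
# saving 45 W (60 W -> 15 W), lit 4 hours a day at $0.15/kWh.
years = payback_years(5.0, 1.0, 45, 4, 0.15)  # under half a year
```

Even with conservative assumptions the extra purchase price is typically recovered well within the bulb's first year, which is why the up-front cost, not the lifetime cost, is the barrier for developing nations.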
Another energy-saving measure is to raise the temperature setting of air conditioners by 2 °C. This could save 10 percent of electricity as well, Rahman said.
By better controlling lighting, heating, and cooling, 20 percent of energy could be saved without causing anyone to suffer, he said.
Efficient coal-burning plants
Shutting down coal power plants completely is unlikely to happen anytime soon, he predicted, especially since many countries are building new ones that have 40-year life spans. Countries that continue to burn coal should do so in high-efficiency power plants, he said.
One type is the ultrasupercritical coal-fired steam power plant. Conventional coal-fired plants, which boil water to generate steam that drives a turbine, have an efficiency of about 38 percent. Ultrasupercritical plants operate at temperatures and pressures above water’s critical point, where distinct liquid and gas phases no longer exist. This yields higher efficiencies: about 46 percent. Rahman cited the Eemshaven ultrasupercritical plant, in Groningen, Netherlands, which was built in 2014.
Another efficient option he pointed out is the combined cycle power plant. In its first stage, natural gas is burned in a turbine to make electricity. The heat from the turbine’s exhaust is used to produce steam to turn a turbine in the second stage. The resulting two-stage power plant is at least 25 percent more efficient than a single-stage plant.
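The gain from staging follows directly from the fact that the steam cycle runs on heat the gas turbine would otherwise reject. A sketch of the standard combining formula, with stage efficiencies that are my own assumptions rather than figures Rahman cited:

```python
def combined_cycle_efficiency(gas_eff, steam_eff):
    """Overall efficiency when a steam stage recovers the gas turbine's
    waste heat: eta = eta_gas + eta_steam * (1 - eta_gas)."""
    return gas_eff + steam_eff * (1 - gas_eff)

# Illustrative: a 38%-efficient gas turbine plus a 35%-efficient steam
# stage running on its exhaust heat gives roughly 60% overall.
eta = combined_cycle_efficiency(0.38, 0.35)
```

Going from about 38 percent to about 60 percent is consistent with the "at least 25 percent more efficient" claim above.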
“IEEE wants to be a solution partner, not a complaining partner, so we can have both parties at the table to help solve this problem of high carbon emissions globally.”
Another method to make coal-fired power plants more environmentally friendly is to capture the exhausted carbon dioxide and store it in the ground, Rahman said. Such carbon-capture systems are being used in some locations, but he acknowledges that the carbon sequestration process is too expensive for some countries.
Integrating and storing grid and off-grid energy
To properly balance electricity supply and demand on the power grid, renewables should be integrated into energy generation, transmission, and distribution systems from the very start, Rahman said. He added that the energy from wind, solar, and hydroelectric plants should be stored in batteries so the electricity generated from them during off-peak hours isn’t wasted but integrated into energy grids.
He also said low-cost, low-carbon hydrogen fuel should be considered as part of the renewable energy mix. The fuel can be used to power cars, supply electricity, and heat homes, all with zero carbon emissions.
“Hydrogen would help emerging economies meet their climate goals, lower their costs, and make their energy grid more resilient,” he said.
Smaller nuclear power plants
Rahman conceded there’s a stigma that surrounds nuclear power plants because of accidents at Chernobyl, Fukushima, Three Mile Island, and elsewhere. But, he said, without nuclear power, the concept of becoming carbon neutral by 2050 isn’t realistic.
“It’s not possible in the next 25 years except with nuclear power,” he said. “We don’t have enough solar energy and wind energy.”
Small modular reactors could replace traditional nuclear power plants. SMRs are easier and less expensive to build, and they’re safer than today’s large nuclear plants, Rahman said.
Though small, SMRs are powerful. They have an output of up to 300 megawatts of electricity, or about a quarter of the size of today’s typical nuclear plant.
The modular reactors are assembled in factories and shipped to their ultimate location, instead of being built onsite. And unlike traditional nuclear facilities, SMRs don’t need to be located near large bodies of water to handle the waste heat discharge.
SMRs have not taken off, Rahman said, because of licensing and technical issues.
Electricity transfer across national borders
Rahman emphasized the need for more cross-border power transfers, as few countries can generate enough electricity to supply all their citizens. Many countries already transfer power this way.
“The United States buys power from Canada. France sells energy to Italy, Spain, and Switzerland,” Rahman said. “The whole world is one grid. You cannot transition from coal to solar and vice versa unless you transfer power back and forth.”
None of the solutions IEEE proposed are new or untested, Rahman said, but his goal is to “provide a portfolio of solutions acceptable to and deployable in both the emerging economies and the developed countries—which will allow them to sit at the table together and see how much carbon emission can be saved by creative application of already available technologies so that both parties win at the end of the day.”
Match ID: 175 Score: 7.14 source: spectrum.ieee.org age: 7 days qualifiers: 7.14 mit
There is currently a lot of interest in AI tools designed to help programmers write software. GitHub’s Copilot and Amazon’s CodeWhisperer apply deep-learning techniques originally developed for generating natural-language text by adapting it to generate source code. The idea is that programmers can use these tools as a kind of auto-complete on steroids, using prompts to produce chunks of code that developers can integrate into their software.
Looking at these tools, I wondered: Could we take the next step and take the human programmer out of the loop? Could a working program be written and deployed on demand with just the touch of a button?
In my day job, I write embedded software for microcontrollers, so I immediately thought of a self-contained handheld device as a demo platform. A screen and a few controls would allow the user to request and interact with simple AI-generated software. And so was born the idea of infinite Pong.
I chose Pong for a number of reasons. The gameplay is simple, famously explained on Atari’s original 1972 Pong arcade cabinet in a triumph of succinctness: “Avoid missing ball for high score.” An up button and a down button are all that’s needed to play. As with many classic Atari games created in the 1970s and 1980s, Pong can be written in relatively few lines of code, and it has been implemented as a programming exercise many, many times. This means that the source-code repositories ingested as training data for the AI tools are rich in Pong examples, increasing the likelihood of getting viable results.
I used a US $6 Raspberry Pi Pico W as the core of my handheld device—its built-in wireless allows direct connectivity to cloud-based AI tools. To this I mounted a $9 Pico LCD 1.14 display module. Its 240 x 135 color pixels are ample for Pong, and the module integrates two buttons and a two-axis micro joystick.
My choice of programming language for the Pico was MicroPython, because it is what I normally use and because it is an interpreted language whose code can run without the need for a PC-based compiler. The AI coding tool I used was the OpenAI Codex, which can be accessed via an API that responds to queries sent in the Web’s HTTP format. These are straightforward to construct and send using the urequests and ujson libraries available for MicroPython. Using the OpenAI Codex API is free during the current beta period, but registration is required and queries are limited to 20 per minute—still more than enough to accommodate even the most fanatical Pong jockey.
Only two hardware modules are needed—a Raspberry Pi Pico W [bottom left] that supplies the compute power and a plug-in board with a screen and simple controls [top left]. Nothing else is needed except a USB cable to supply power.James Provost
The next step was to create a container program. This program is responsible for detecting when a new version of Pong is requested via a button push; when it is, the container sends a prompt to the OpenAI Codex, receives the results, and launches the game. The container program also sets up a hardware abstraction layer, which handles the physical connection between the Pico and the LCD/control module.
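The heart of such a container is a fetch-and-run cycle: get fresh source text, execute it in a clean namespace, and hand control to the new game. This is a minimal sketch of that idea, not the project's actual code; the helper names (fetch_generated_code, button_pressed) are hypothetical stand-ins for the real network and GPIO calls, and the real container loops forever rather than a fixed number of rounds.

```python
# Minimal sketch of a container's fetch-and-run cycle (illustrative,
# not the project's actual code). fetch_generated_code and button_pressed
# are hypothetical stand-ins for the real network and button-reading calls.
def run_container(fetch_generated_code, button_pressed, rounds=1):
    game_env = {}
    for _ in range(rounds):
        if button_pressed():
            source = fetch_generated_code()  # ask the AI for a fresh Pong
            game_env = {}                    # clean namespace per version
            exec(source, game_env)           # define the generated classes
            game_env["main"]()               # hand control to the new game
    return game_env
```

Running each generated program in a throwaway dictionary means every new Pong starts from a clean slate, so a broken version can't corrupt the next one.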
The most critical element of the whole project was creating the prompt that is transmitted to the OpenAI Codex every time we want it to spit out a new version of Pong. The prompt is a chunk of plain text with the barest skeleton of source code—a few lines outlining a structure common to many video games, namely a list of libraries we’d like to use, a call to process events (such as keypresses), a call to update the game state based on those events, and a call to display the updated state on the screen.
How to use those libraries and fill out the calls is up to the AI. The key to turning this generic structure into a Pong game is the embedded comments—optional in source code written by humans, but really useful in prompts. The comments describe the gameplay in plain English—for example, “The game includes the following classes…Ball: This class represents the ball. It has a position, a velocity, and a debug attributes [sic]. Pong: This class represents the game itself. It has two paddles and a ball. It knows how to check when the game is over.” (Go to Hackaday.io to play an infinite number of Pong games with the Raspberry Pi Pico W; my container and prompt code are on the site.)
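To make the skeleton-plus-comments idea concrete, here is a condensed, illustrative reconstruction of what such a prompt might look like. It is not the project's actual prompt (that is posted on Hackaday.io); the class descriptions echo the excerpts quoted above, and the stubbed function bodies are left for the AI to fill in.

```python
# An illustrative reconstruction of the prompt's shape, not the actual
# prompt from the project (which is posted on Hackaday.io).
PROMPT = '''\
# MicroPython Pong for a 240x135 LCD with two buttons (up and down).
# The game includes the following classes:
# Ball: This class represents the ball. It has a position and a velocity.
# Pong: This class represents the game itself. It has two paddles and a
# ball. It knows how to check when the game is over.
import framebuf

def process_events():
    # read the up and down buttons

def update_state():
    # move the paddles and ball, detect scoring

def draw():
    # render the paddles, ball, and score to the framebuffer
'''
```

Everything after the stubs is left open: the AI completes the function bodies and class definitions, guided only by the plain-English comments.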
What comes back from the AI is about 300 lines of code. In my early attempts the code would fail to display the game because the version of the MicroPython framebuffer library that works with my module is different from the framebuffer libraries the OpenAI Codex was trained on. The solution was to add the descriptions of the methods my library uses as prompt comments, for example: “def rectangle(self, x, y, w, h, c).” Another issue was that many of the training examples used global variables, whereas my initial prompt defined variables as attributes scoped to live inside individual classes, which is generally a better practice. I eventually had to give up, go with the flow, and declare my variables as global.
The variations of Pong created by the OpenAI Codex vary widely in ball and paddle size and color and how scores are displayed. Sometimes the code results in an unplayable game, such as at the bottom right corner, where the player paddles have been placed on top of each other.James Provost
The code that comes back from my current prompt produces a workable Pong game about 80 percent of the time. Sometimes the game doesn’t work at all, and sometimes it produces something that runs but isn’t quite Pong, such as when it allows the paddles to be moved left and right in addition to up and down. Sometimes it’s two human players, and other times you play against the machine. Since the prompt doesn’t specify, Codex takes either of the two options. When you play against the machine, it’s always interesting to see how Codex has implemented that part of the code logic.
So who is the author of this code? Certainly there are legal disputes stemming from, for example, how this code should be licensed, as much of the training set is based on open-source software that imposes specific licensing conditions on code derived from it. But licenses and ownership are separate from authorship, and with regard to the latter I believe it belongs to the programmer who uses the AI tool and verifies the results, as would be the case if you created artwork with a painting program made by a company and used their brushes and filters.
As for my project, the next step is to look at more complex games. The 1986 arcade hit Arkanoid on demand, anyone?
This sponsored article is brought to you by COMSOL.
“Laws, Whitehouse received five minutes signal. Coil signals too weak to relay. Try drive slow and regular. I have put intermediate pulley. Reply by coils.”
Sound familiar? The message above was sent through the first transatlantic telegraph cable between Newfoundland and Ireland, way back in 1858. (“Whitehouse” refers to the chief electrician of the Atlantic Telegraph Company at the time, Wildman Whitehouse.) Fast forward to 2014: The bottom of the ocean is home to nearly 300 communications cables, connecting countries and providing internet communications around the world. Fast forward again: As of 2021, there are an estimated 1.3 million km of submarine cables (Figure 1) in service, ranging from a short 131 km cable between Ireland and the U.K. to the 20,000 km cable that connects Asia with North America and South America. We know what the world of submarine cables looks like today, but what about the future?
Moving Wind Power Offshore
The offshore wind (OFW) industry is one of the most rapidly advancing sources of power around the world. It makes sense: Wind is stronger and more consistent over the open ocean than it is on land. Some wind farms are capable of powering 500,000 homes or more. Currently, Europe leads the market, making up almost 80 percent of OFW capacity. However, the worldwide demand for energy is expected to increase by 20 percent in 10 years, with a large majority of that demand supplied by sustainable energy sources like wind power.
Offshore wind farms (Figure 2) are made up of networks of turbines. These networks include cables that connect wind farms to the shore and supply electricity to our power grid infrastructure (Figure 3). Many OFW farms are made up of grounded structures, like monopiles and other types of bottom-fixed wind turbines. The foundations for these structures are expensive to construct and difficult to install in deep sea environments, as the cables have to be buried in the seafloor. Installation and maintenance are easier to accomplish in shallow waters.
Wind turbines for offshore wind farms are starting to be built further out into the ocean. This creates a new need for well-designed subsea cables that can reach longer distances, survive in deeper waters, and better connect our world with sustainable power.
The future of offshore wind lies in wind farms that float on ballasts and moorings, with the cables laid directly on the seafloor. Floating wind farms are a great solution when wind farms situated just off the coast grow crowded. They can also take advantage of the bigger and more powerful winds that occur further out to sea. Floating wind farms are expected to grow more popular over the next decade. This is an especially attractive option for areas like the Pacific Coast of the United States and the Mediterranean, where coastal waters are deeper, as opposed to the shallow waters of the Atlantic Coast of the U.S., U.K., and Norway. One important requirement of floating OFW farms is the installation of dynamic, high-capacity submarine cables that are able to effectively harness and deliver the generated electricity to our shores.
Design Factors for Resilient Subsea Cables
Ever experienced slower than usual internet? Failure of a subsea cable may be to blame. Cable failures of this kind are a common — and expensive — occurrence, whether from mechanical stress and strain caused by bedrock, fishing trawlers, and anchors, or from problems with the cable design itself. As the offshore wind industry continues to grow, our need to develop power cables that can safely and efficiently connect these farms to our power grid grows as well.
Before fixing or installing a submarine cable, which can cost billions of dollars, cable designers have to ensure that designs will perform as intended in undersea conditions. Today, this is typically done with the help of computational electromagnetics modeling. To validate cable simulation results, international standards are used, but these standards have not been able to keep up with recent advancements in computational power and the simulation software’s growing capabilities. Hellenic Cables and its subsidiary FULGOR use the finite element method (FEM) to analyze their cable designs and compare them to experimental measurements, often getting better results than the international standards can offer.
Updated Methodology for Calculating Cable Losses
The International Electrotechnical Commission (IEC) provides standards for electrical cables, including Standard 60287 1-1 for calculating cable losses and current ratings. One problem with the formulation used in Standard 60287 is that it overestimates cable losses — especially the losses in the armor of three-core (3C) submarine cables. Cable designers are forced to adopt a new methodology for performing these analyses, and the team at Hellenic Cables recognizes this. “With a more accurate and realistic model, significant optimization margins are expected,” says Dimitrios Chatzipetros, team leader of the Numerical Analysis group at Hellenic Cables. The new methodology will enable engineers to reduce cable cross sections, thereby reducing their costs, which is the paramount goal for cable manufacturing.
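The loss calculation in IEC 60287-1-1 starts from the conductor's DC resistance and corrects it for skin and proximity effects. The sketch below implements those textbook correction formulas for round conductors; the input values are illustrative, not taken from the Hellenic Cables studies.

```python
import math

# Skin- and proximity-effect corrections per IEC 60287-1-1 (round
# conductors). Input values below are illustrative examples only.
def ac_resistance(r_dc, f=50.0, k_s=1.0, k_p=0.8, d_c=0.0, s=1.0):
    """r_dc: DC resistance at operating temperature (ohm/m).
    d_c: conductor diameter, s: axial conductor spacing (same units)."""
    x_s2 = 8 * math.pi * f / r_dc * 1e-7 * k_s
    y_s = x_s2**2 / (192 + 0.8 * x_s2**2)          # skin-effect factor
    x_p2 = 8 * math.pi * f / r_dc * 1e-7 * k_p
    a = x_p2**2 / (192 + 0.8 * x_p2**2)
    y_p = a * (d_c / s)**2 * (0.312 * (d_c / s)**2 + 1.18 / (a + 0.27))
    return r_dc * (1 + y_s + y_p)                  # proximity effect added

# Example: a large copper core at 50 Hz (illustrative geometry)
r_ac = ac_resistance(2.3e-5, d_c=0.038, s=0.09)
```

For a conductor this large, the corrections add on the order of 15 to 20 percent to the DC resistance, which is exactly the kind of term where an overly conservative standard inflates the predicted losses.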
An electric cable is a complex device to model. The geometric structure consists of three main power cores that are helically twisted with a particular lay length, and hundreds of additional wires — screen or armor wires — that are twisted with a second or third lay length. This makes it difficult to generate the mesh and solve for the electromagnetic fields. “This is a tedious 3D problem with challenging material properties, because some of the elements are ferromagnetic,” says Andreas Chrysochos, associate principal engineer in the R&D department of Hellenic Cables.
In recent years, FEM has made a giant leap when it comes to cable analysis. The Hellenic Cables team first used FEM to model a full cable section of around 30 to 40 meters in length. This turned out to be a huge numerical challenge that can only realistically be solved on a supercomputer. By switching to periodic models with a periodic length equal to the cable’s cross pitch, the team reduced the problem from 40 meters down to 2–4 meters. Then they introduced short-twisted periodicity, which reduces the periodic length of the model from meters to centimeters, making it much lighter to solve. “The progress was tremendous,” says Chrysochos. (Figure 4)
Although the improvements that FEM brings to cable analysis are great, Hellenic Cables still needs to convince its clients that their validated results are more realistic than those provided by the current IEC standard. Clients are often already aware of the fact that IEC 60287 overestimates cable losses, but results visualization and comparison to actual measurements can build confidence in project stakeholders. (Figure 5)
Finite Element Modeling of Cable Systems
Electromagnetic interference (EMI) presents several challenges when it comes to designing cable systems — especially the capacitive and inductive couplings between cable conductors and sheaths. For one, when calculating current ratings, engineers need to account for power losses in the cable sheaths during normal operation. In addition, the overvoltages on cable sheaths need to be within acceptable limits to meet typical health and safety standards.
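The capacitive side of this coupling can be estimated from the coaxial-capacitor formula for the insulation between conductor and sheath, which also gives the charging current the cable draws even at no load. A minimal sketch, with illustrative XLPE permittivity and insulation radii rather than data from the paper:

```python
import math

# Charging current per unit length from the coaxial-capacitor formula.
# The permittivity and radii below are illustrative values for an
# XLPE-insulated HV cable, not figures from the cited study.
EPS0 = 8.854e-12

def charging_current(u0, f, eps_r, r_in, r_out):
    """u0: phase-to-earth voltage (V); r_in, r_out: inner and outer
    insulation radii (m). Returns amperes per metre of cable."""
    c = 2 * math.pi * EPS0 * eps_r / math.log(r_out / r_in)  # F/m
    return 2 * math.pi * f * c * u0

# 87/150 kV cable: u0 = 87 kV, XLPE eps_r ~ 2.3
i_c = charging_current(87e3, 50, 2.3, 0.015, 0.025)  # A/m, roughly 7 A per km
```

At several amperes per kilometre, this capacitive current is one reason long HVAC export cables eventually give way to HVDC over very long routes.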
As Chrysochos et al. discuss in “Capacitive and Inductive Coupling in Cable Systems – Comparative Study between Calculation Methods” (Ref. 3), there are three main approaches when it comes to calculating these capacitive and inductive couplings. The first is the complex impedance method (CIM), which calculates the cable system’s currents and voltages while neglecting its capacitive currents. This method also assumes that the earth return path is represented by an equivalent conductor. Another common method is electromagnetic transients program (EMT) software, which can be used to analyze electromagnetic transients in power systems using both time- and frequency-domain models.
The third method, FEM, is the foundation of the COMSOL Multiphysics software. The Hellenic Cables team used COMSOL Multiphysics and the add-on AC/DC Module to compute the electric fields, currents, and potential distribution in conducting media. “The AC/DC Module and solvers behind it are very robust and efficient for these types of problems,” says Chrysochos.
The Hellenic Cables team compared the three methods — CIM, EMT software, and FEM (with COMSOL Multiphysics) — when analyzing an underground cable system with an 87/150 kV nominal voltage and 1000 mm2 cross section (Figure 6). They modeled the magnetic field and induced current density distributions in and around the cable system’s conductors, accounting for the bonding type with an external electrical circuit. The results between all three methods show good agreement for the cable system for three different configurations: solid bonding, single-point bonding, and cross bonding (Figure 7). This demonstrates that FEM can be applied to all types of cable configurations and installations when taking into account both capacitive and inductive coupling.
The Hellenic Cables team also used FEM to study thermal effects in subsea cables, such as HVAC submarine cables for offshore wind farms, as described in “Review of the Accuracy of Single Core Equivalent Thermal Model for Offshore Wind Farm Cables” (Ref. 4). The current IEC Standard 60287 1-1 includes a thermal model, and the team used FEM to identify its weak spots and improve its accuracy. First, they validated the current IEC model with finite element analysis. They found that the current standards do not account for the thermal impact of the cable system’s metallic screen materials, which means that the temperature can be underestimated by up to 8°C. By deriving analytical correction formulas based on several FEM models, the team reduced this discrepancy to 1°C! Their analysis also highlights significant discrepancies between the standard and the FEM model, especially when the corresponding sheath thickness is small, the sheath thermal conductivity is high, and the power core is large. This issue is particularly important for OFW projects, as the cables involved are expected to grow larger and larger.
Further Research into Cable Designs
In addition to studying inductive and capacitive coupling and thermal effects, the Hellenic Cables team evaluated other aspects of cable system designs, including losses, thermal resistance of surrounding soil, and grounding resistance, using FEM and COMSOL Multiphysics. “In general, COMSOL Multiphysics is much more user friendly and efficient, such as when introducing temperature-dependent losses in the cable, or when presenting semi-infinite soil and infinite element domains. We found several ways to verify what we already know about cables, their thermal performance, and loss calculation,” says Chatzipetros.
The conductor size of a subsea or terrestrial cable affects the cost of the cable system. This is often a crucial aspect of an offshore wind farm project. To optimize the conductor size, designers need to be able to accurately determine the cable’s losses, and those losses are closely tied to temperature: currents induced in a cable’s magnetic sheaths yield extra losses, which contribute to the temperature rise of the conductor.
When calculating cable losses, the current IEC standard does not consider proximity effects in sheath losses. If cable cores are in close proximity (say, for a wind farm 3C cable), the accuracy of the loss calculation is reduced. Using FEM, the Hellenic Cables team was able to study how conductor proximity effects influence losses generated in sheaths in submarine cables with lead-sheathed cores and a nonmagnetic armor. They then compared the IEC standard with the results from the finite element analysis, which showed better agreement with measured values from an experimental setup (Figure 8). This research was discussed in the paper “Induced Losses in Non-Magnetically Armoured HVAC Windfarm Export Cables” (Ref. 5).
Thermal Resistance of Soil
Different soil types have different thermal insulating characteristics, which can severely limit the amount of heat dissipated from the cable, thereby reducing its current-carrying capacity. This means that larger conductor sizes are needed to transmit the same amount of power in areas with more thermally adverse soil, causing the cable’s cost to increase.
In the paper “Rigorous calculation of external thermal resistance in non-uniform soils” (Ref. 6), the Hellenic Cables team used FEM to calculate the effective soil thermal resistance for different cable types and cable installation scenarios (Figure 9). First, they solved for the heat transfer problem under steady-state conditions with arbitrary temperatures at the cable and soil surfaces. They then evaluated the effective thermal resistance based on the heat dissipated by the cable surface into the surrounding soil.
Simulations were performed for two types of cables: a typical SL-type submarine cable with 87/150 kV, a 1000 mm2 cross section, and copper conductors, as well as a typical terrestrial cable with 87/150 kV, a 1200 mm2 cross section, and aluminum conductors. The team analyzed three different cable installation scenarios (Figure 10).
The first scenario is when a cable is installed beneath a horizontal layer, such as when sand waves are expected to gradually add to the seafloor’s initial level after installation. The second is when a cable is installed within a horizontal layer, which occurs when the installation takes place in a region with horizontal directional drilling (HDD). The third scenario is when a cable is installed within a backfilled trench, typical for regions with unfavorable thermal behavior, in order to reduce the impact of the soil on the temperature rise of the cable. The numerical modeling results prove that FEM can be applied to any material or shape of multilayer or backfilled soil, and that the method is compatible with the current rating methodology in IEC Standard 60287.
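For the simplest reference case, a single cable buried in uniform soil, IEC 60287 gives a closed-form external thermal resistance, which is the baseline these FEM studies refine for layered and backfilled soils. A sketch with illustrative burial depth, diameter, and soil resistivity:

```python
import math

# External thermal resistance T4 of a single buried cable in uniform
# soil, per the IEC 60287 closed-form expression. The burial depth,
# cable diameter, and soil resistivity below are illustrative.
def external_thermal_resistance(rho_soil, depth, d_ext):
    """rho_soil: soil thermal resistivity (K*m/W); depth: burial depth
    to the cable axis (m); d_ext: external cable diameter (m)."""
    u = 2 * depth / d_ext
    return rho_soil / (2 * math.pi) * math.log(u + math.sqrt(u * u - 1))

t4 = external_thermal_resistance(1.0, 1.0, 0.1)  # typical soil, 1 m burial
```

Because T4 scales directly with soil resistivity, thermally adverse soil can dominate the whole thermal ladder, which is why the non-uniform-soil corrections in Ref. 6 matter for rating.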
The evaluation of grounding resistance is important to ensure the integrity and secure operation of cable sheath voltage limiters (SVLs) when subject to earth potential rise (EPR). In order to calculate grounding resistance, engineers need to know the soil resistivity for the problem at hand and have a robust calculation method, like FEM.
The Hellenic Cables team used FEM to analyze soil resistivity for two sites: one in northern Germany and one in southern Greece. As described in the paper “Evaluation of Grounding Resistance and Its Effect on Underground Cable Systems” (Ref. 7), they found that the apparent resistivity of the soil is a monotonic function of distance, and that a two-layer soil model is sufficient for their modeling problem (Figure 11). After finding the resistivity, the team calculated the grounding resistance for a single-rod scenario (as a means of validation). After that, they proceeded with a complex grid, which is typical of cable joint pits found in offshore wind farms. For both scenarios, they found the EPR at the substations and transition joint pit, as well as the maximum voltage between the cable sheath and local earth (Figure 12). The results demonstrate that FEM is a highly accurate calculation method for grounding resistance, as they show good agreement with both numerical data from measurements and electromagnetic transient software calculations (Figure 13).
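For the single-rod validation case, the classical closed-form expression (Dwight's formula for a driven rod) provides the benchmark a FEM model should reproduce. The soil resistivity and rod dimensions below are illustrative, not values from the paper's two sites:

```python
import math

# Grounding resistance of a single driven rod (Dwight's formula).
# Soil resistivity and rod dimensions are illustrative examples only.
def rod_resistance(rho, length, radius):
    """rho: soil resistivity (ohm*m); length and radius of the rod (m)."""
    return rho / (2 * math.pi * length) * (math.log(4 * length / radius) - 1)

r = rod_resistance(100.0, 3.0, 0.008)  # a few tens of ohms in 100 ohm-m soil
```

A single rod in moderate soil yields tens of ohms, which is why joint-pit grounding grids with many interconnected conductors are needed to keep EPR within the SVL ratings.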
A Bright and Windy Future
The Hellenic Cables team plans to continue the important work of further improving all of the cable models they have developed. The team has also performed research into HVDC cables, which involve XLPE insulation and voltage source converter (VSC) technology. HVDC cables can be more cost efficient for systems installed over long distances.
Like the wind used to power offshore wind farms, electrical cable systems are all around us. Even though we cannot always see them, they are working hard to ensure we have access to a high-powered and well-connected world. Optimizing the designs of subsea and terrestrial cables is an important part of building a sustainable future.
M. Hatlo, E. Olsen, R. Stølan, J. Karlstrand, “Accurate analytic formula for calculation of losses in three-core submarine cables,” Jicable, 2015.
S. Sturm, A. Küchler, J. Paulus, R. Stølan, F. Berger, “3D-FEM modelling of losses in armoured submarine power cables and comparison with measurements,” CIGRE Session 48, 2020.
A.I. Chrysochos et al., “Capacitive and Inductive Coupling in Cable Systems – Comparative Study between Calculation Methods”, 10th International Conference on Insulated Power Cables, Jicable, 2019.
D. Chatzipetros and J.A. Pilgrim, “Review of the Accuracy of Single Core Equivalent Thermal Model for Offshore Wind Farm Cables”, IEEE Transactions on Power Delivery, Vol. 33, No. 4, pp. 1913–1921, 2018.
D. Chatzipetros and J.A. Pilgrim, “Induced Losses in Non-Magnetically Armoured HVAC Windfarm Export Cables”, IEEE International Conference on High Voltage Engineering and Application (ICHVE), 2018.
A.I. Chrysochos et al., “Rigorous calculation of external thermal resistance in non-uniform soils”, Cigré Session 48, 2020.
A.I. Chrysochos et al., “Evaluation of Grounding Resistance and Its Effect on Underground Cable Systems”, Mediterranean Conference on Power Generation, Transmission, Distribution and Energy Conversion, 2020.
The IEEE Board of Directors has nominated Life Fellow Roger Fujii and Senior Member Kathleen Kramer as candidates for IEEE president-elect.
The winner of this year’s election will serve as IEEE president in 2025. For more information about the election, president-elect candidates, and petition process, visit the IEEE election website.
Life Fellow Roger Fujii
Nominated by the IEEE Board of Directors
Fujii is president of Fujii Systems of Rancho Palos Verdes, Calif., which designs critical systems. Before starting his company, Fujii was vice president at Northrop Grumman’s engineering division in San Diego.
His area of expertise is certifying critical systems. He has been a guest lecturer at California State University, the University of California, and Xiamen University.
An active IEEE volunteer, Fujii most recently chaired the IEEE financial transparency reporting committee and the IEEE ad hoc committee on IEEE in 2050. The ad hoc committee envisioned scenarios to gain a global perspective of what the world might look like in 2050 and beyond and what potential futures might mean for IEEE.
He was 2016 president of the IEEE Computer Society, 2021 vice president of the IEEE Technical Activities Board, and 2012–2014 director of Division VIII.
Fujii received the 2020 Richard E. Merwin Award, the IEEE Computer Society’s highest-level volunteer service award.
Senior Member Kathleen Kramer
Nominated by the IEEE Board of Directors
Kramer is a professor of electrical engineering at the University of San Diego, where she served as chair of the EE department and director of engineering from 2004 to 2013. As director she provided academic leadership for engineering programs and developed new programs.
Her areas of interest include multisensor data fusion, intelligent systems, and cybersecurity in aerospace systems.
She has written or coauthored more than 100 publications.
Kramer has worked for several companies including Bell Communications Research, Hewlett-Packard, and Viasat.
She is a distinguished lecturer for the IEEE Aerospace and Electronic Systems Society and has given talks on signal processing, multisensor data fusion, and neural systems. She leads the society’s technical panel on cybersecurity.
Kramer earned bachelor’s degrees in electrical engineering and physics in 1986 from Loyola Marymount University, in Los Angeles. She earned master’s and doctoral degrees in EE in 1991 from Caltech.
This is the tenth in a series of articles exploring the major technological and social challenges that must be addressed as we move from vehicles with internal-combustion engines to electric vehicles at scale. In reviewing each article, readers should bear in mind Nobel Prize–winning physicist Richard Feynman’s admonition: “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.”
Perhaps, but getting the vast majority of the 111 million US households that own one or more light-duty internal-combustion vehicles to switch to EVs is going to take time. Even if interest in purchasing an EV is increasing, close to 70 percent of Americans are still leaning toward buying an ICE vehicle as their next purchase. In the UK, only 14 percent of drivers plan to purchase an EV as their next car.
Even when there is an expressed interest in purchasing a battery electric or hybrid vehicle, it often does not turn into an actual purchase. A 2022 CarGurus survey found that 35 percent of new car buyers expressed an interest in purchasing a hybrid, but only 13 percent eventually did. Similarly, 22 percent expressed interest in a battery electric vehicle (BEV), but only 5 percent bought one.
Each potential EV buyer assesses their individual needs against the benefits and risks an EV offers. However, until mainstream public confidence reaches the point where the perceived combination of risks of a battery electric vehicle purchase (range, affordability, reliability and behavioral changes) match that of an ICE vehicle, then EV purchases are going to be the exception rather than the norm.
Arguments over how much range is needed are contentious. There are some who argue that because 95 percent of American car trips are 30 miles or less, a battery range of 250 miles or less is all that is needed. They also point out that this would reduce the price of the EV, since batteries account for about 30 percent of an EV’s total cost. In addition, using smaller batteries would allow more EVs to be built, and potentially relieve pressure on the battery supply chain. If longer trips are needed, well, “bring some patience and enjoy the charging experience” seems to be the general advice.
While perhaps logical, these arguments are not going to influence typical buying decisions much. The first question potential EV buyers are going to ask themselves is, “Am I going to be paying more for a compromised version of mobility?” says Alexander Edwards, president of Strategic Vision, a research-based consultancy that aims to understand human behavior and decision-making.
Driver’s side view of 2024 Chevrolet Equinox EV 3LT.Chevrolet
Edwards explains potential customers do not have range anxiety per se: If they believe they require a vehicle that must go 400 miles before stopping, “even if once a month, once a quarter, or once a year,” all vehicles that cannot meet that criterion will be excluded from their buying decision. Range anxiety, therefore, is more a concern for EV owners. Edwards points out that, regarding range, most BEV owners own at least one ICE vehicle to meet their long-distance driving needs.
What exactly is the “range” of a BEV is itself becoming a heated point of contention. While ICE vehicles’ driving ranges are affected by weather and driving conditions, the effects are well understood after decades of experience. This experience is lacking among non-EV owners. Extreme heat and cold negatively affect EV battery ranges and charging times, as do driving speeds and terrain.
Peter Rawlinson serves as the CEO and CTO of Lucid.Lucid
Some automakers are reticent to say how much range is affected under differing conditions. Others, like Ford CEO Jim Farley, freely admit: “If you’re pulling 10,000 pounds, an electric truck is not the right solution. And 95 percent of our customers tow more than 10,000 pounds.” GM, though, is promising it will meet heavier towing requirements with its 2024 Chevrolet Silverado EV. However, Lucid Group CEO Peter Rawlinson, in a none-too-subtle dig at both Ford and GM, said, “The correct solution for an affordable pickup truck today is the internal combustion engine.”
Ford’s Farley foresees that the heavy-duty truck segment will be sticking with ICE trucks for a while, as “it will probably go hydrogen fuel cell before it goes pure electric.” Many in the auto industry are warning that realistic BEV range numbers under varying conditions need to be widely published, or risk creating a backlash against EVs in general.
Price is another EV purchase risk, comparable to range. Buying a new car is the second-most-expensive purchase a consumer makes, behind buying a house. Spending nearly 100 percent of the annual US median household income on an unfamiliar technology is not a minor financial ask.
That is one reason why legacy automakers and EV start-ups are attempting to follow Tesla’s success in the luxury vehicle segment, spending much of their effort producing vehicles that are “above the median average annual US household income, let alone buyer in new car market,” Strategic Vision’s Edwards says. On top of the twenty or so luxury EVs already or soon to be on the market, Sony and Honda recently announced that they would be introducing yet another luxury EV in 2026.
It is true that there are some EVs that will soon appear in the competitive price range of ICE vehicles, like the low-end Chevrolet Equinox EV SUV, presently priced around $30,000 with a 280-mile range. How long GM will be able to keep that price in the face of battery cost increases and inflationary pressure is anyone’s guess. It has already started to increase the cost of its Chevrolet Bolt EVs, which it had slashed last year, “due to ongoing industry-related pricing pressures.”
The Lucid Air’s price ranges from $90,000 to $200,000 depending on options. Lucid
Analysts believe Tesla intends to
spark an EV price war before its competitors are ready for one. This could benefit consumers in the short-term, but could also have long-term downside consequences for the EV industry as a whole. Tesla fired its first shot over its competitors’ bows with a recently announced price cut from $65,990 to $52,990 for its basic Model Y, with a range of 330 miles. That makes the Model Y cost-competitive with Hyundai’s $45,500 IONIQ 5 e-SUV with 304 miles of range.
Tesla’s pricing power could be hard to counter, at least in the short term. Ford’s cheapest F-150 Lightning Pro is now $57,869 compared to $41,769 a year ago due to what Ford
says are “ongoing supply chain constraints, rising material costs and other market factors.” The entry level F-150 XL with an internal combustion engine has risen in the past year from about $29,990 to $33,695 currently.
Carlos Tavares, CEO of Stellantis. Stellantis
Automakers like Stellantis freely acknowledge that EVs are too expensive for most buyers, with
Stellantis CEO Carlos Tavares even warning that if average consumers can’t afford EVs as ICE vehicle sales are banned, “There is potential for social unrest.” However, other automakers like BMW are quite unabashed about going after the luxury market which it terms “white hot.” BMW’s CEO Oliver Zipse does say the company will not leave the “lower market segment,” which includes the battery electric iX1 xDrive30 that retails for A$82,900 in Australia and slightly lower elsewhere. It is not available in the United States.
The fact that luxury EVs are more profitable no doubt helps keep automakers focused on that market. Ford’s very popular Mustang Mach-E, for instance, has had trouble maintaining profitability, forcing Ford to raise its base price from $43,895 to $46,895. Even in the Chinese market, where smaller EV sales are booming, profits are not. Strains on profitability for automakers and their suppliers may increase further as battery metal prices increase, warns data analysis company S&P Global Mobility.
Jim Rowan, Volvo Cars’ CEO and President. Volvo Cars
Interestingly, a 2019
Massachusetts Institute of Technology (MIT) study predicted that as EVs became more widespread, battery prices would climb because the demand for lithium and other battery metals would rise sharply. As a result, the study indicated EV/ICE price parity was likely closer to 2030 with the expectation that new battery chemistries would be introduced by then.
Many argue, however, that total cost of ownership (TCO) should be the EV purchase decision criterion rather than sticker price. The total cost of ownership of an EV is generally less than that of an ICE vehicle over its expected life, since EVs have lower maintenance costs and electricity is less expensive per mile than gasoline; tax incentives and rebates help a lot as well.
However, how long it takes to hit the break-even point depends on many factors: the price differential with a comparable ICE vehicle, depreciation, taxes, insurance costs, the cost of electricity and petrol in a region, whether charging takes place at home, and so on. And TCO rapidly loses its selling-point appeal if electricity prices go up, as is happening in the UK and in Germany.
Even if the total cost of ownership is lower for an EV, a potential EV customer may not be interested if meeting today’s monthly auto payments is difficult. Extra costs, like installing a fast charger at home, which can add several thousand dollars, or higher insurance premiums, which could add an extra $500 to $600 a year, may also be seen as buying impediments and can change the TCO equation.
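The break-even reasoning above can be sketched as a toy model. All numbers in the example are illustrative assumptions, not figures from this article:

```python
def breakeven_miles(price_premium, fuel_cost_per_mile_ice, energy_cost_per_mile_ev,
                    maintenance_saving_per_mile=0.03, incentives=0.0):
    """Miles driven before an EV's running-cost savings offset its higher
    purchase price. Purely illustrative; all inputs are assumptions."""
    per_mile_saving = (fuel_cost_per_mile_ice - energy_cost_per_mile_ev
                       + maintenance_saving_per_mile)
    if per_mile_saving <= 0:
        # Rising electricity prices can erase the advantage entirely.
        return float("inf")
    return (price_premium - incentives) / per_mile_saving

# Hypothetical numbers: $12,000 price premium, $7,500 tax credit,
# 15 cents/mile on gasoline vs. 5 cents/mile on home charging.
miles = breakeven_miles(12_000, 0.15, 0.05, incentives=7_500)
print(f"Break-even after about {miles:,.0f} miles")
```

With these assumed inputs the break-even point falls around 35,000 miles; double the electricity price and it recedes sharply, which is the TCO fragility described above.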
Reliability and other major tech risks
To perhaps distract wary EV buyers from range and affordability issues, the automakers have focused their efforts on highlighting EV performance.
Raymond Roth, a director at financial advisory firm Stout Risius Ross, observes among automakers, “There’s this arms race right now of best in class performance” being the dominant selling point.
This “wow” experience is being pursued by every EV automaker.
Mercedes CEO Kallenius, for example, says that to convince its current luxury vehicle owners to switch to an EV, “the experience for the customer in terms of the torque, the performance, everything [must be] fantastic.” Nissan, which seeks a more mass-market buyer, runs commercials exclaiming, “Don’t get an EV for the ‘E’, but because it will pin you in your seat, sparks your imagination and takes your breath away.”
EV reliability issues may also take one’s breath away. Reliability is “extremely important” to new-car buyers,
according to a 2022 report from Consumer Reports (CR). Currently, EV reliability is nothing to brag about. CR’s report says that “On average, EVs have significantly higher problem rates than internal combustion engine (ICE) vehicles across model years 2019 and 2020.” BEVs dwell at the bottom of the rankings.
Reliability may prove to be an Achilles’ heel for automakers like GM and Ford. GM CEO Mary Barra has very publicly promised that GM would no longer build “crappy cars.” The ongoing problems with the Chevy Bolt undercut that promise, and if its new Equinox EV has issues, it could hurt sales. Ford has reliability problems of its own, paying $4 billion in warranty costs last year alone. Its Mustang Mach-E has been subject to several recalls over the past year. Even perceived quality leader Toyota has been embarrassed by wheels falling off weeks after the introduction of its electric bZ4X SUV, the first in a new series of “bZ” (beyond zero) electric vehicles.
A Tesla caught up in a mudslide in Silverado Canyon, Calif., on March 10, 2021. Jae C. Hong/AP Photo
Another reliability-related issue is getting an EV repaired when something goes awry or there is an accident. Right now, there is a dearth of EV-certified mechanics and repair shops. According to the UK Institute of the Motor Industry (IMI), the country will need 90,000 EV-trained technicians by 2030. The IMI estimates that less than 7 percent of the country’s automotive service workforce of 200,000 vehicle technicians is EV qualified. In the US, the situation is no better. The National Institute for Automotive Service Excellence (ASE), which certifies auto repair technicians, says the US has 229,000 ASE-certified technicians. However, only some 3,100 are certified for electric vehicles. With many automakers moving to reduce their dealership networks, resolving problems that over-the-air (OTA) software updates cannot fix might be troublesome.
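A quick arithmetic check of the workforce figures quoted above:

```python
# Shares of EV-qualified technicians implied by the figures above.
uk_workforce = 200_000            # UK vehicle technicians
uk_ev_share_cap = 0.07            # "less than 7 percent" are EV qualified
us_certified = 229_000            # ASE-certified technicians in the US
us_ev_certified = 3_100           # of which EV-certified

print(f"UK: at most {uk_ev_share_cap * uk_workforce:,.0f} EV-qualified technicians")
print(f"US: {us_ev_certified / us_certified:.1%} of ASE-certified technicians")
```

That is at most 14,000 EV-qualified technicians in the UK against a projected need of 90,000, and an EV certification rate of about 1.4 percent in the US.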
Furthermore, the costs and time needed to repair an EV are higher than for ICE vehicles,
according to the data analytics company CCC. Reasons include a greater need to use original equipment manufacturer (OEM) parts and the cost of scans and recalibration of advanced driver assistance systems, which has been rising for ICE vehicles as well. Technicians also need to ensure battery integrity to prevent potential fires.
And some batteries, along with their battery management systems, need work. Two examples: recalls involving the GM Bolt and the Hyundai Kona, with fixes likely to cost GM $1.8 billion and Hyundai $800 million, according to
Stout’s 2021 Automotive Defect and Recall Report. Furthermore, the battery defect data compiled by Stout indicates “incident rates are rising as production is increasing and incidents commonly occur across global platforms,” with both design and manufacturing defects starting to appear.
CCC data indicate that battery packs damaged in a crash do need replacement, and more than 50 percent of such vehicles were deemed a total loss by insurance companies. EVs also return to the repair center more often after they’ve been repaired than ICE vehicles do, hinting at the increased difficulty of repairing them. Additionally, EV tire tread wear needs closer inspection than on ICE vehicles. Lastly, as auto repair centers need to invest in new equipment to handle EVs, these costs will be passed along to customers for some time.
Cybersecurity is another risk, one that has reached the attention of the US Office of the National Cyber Director, which recently held a forum of government, automaker, supplier, and EV-charging-manufacturer representatives focusing on “cybersecurity issues in the electric vehicle (EV) and electric vehicle supply equipment (EVSE) ecosystem.” The concern is that EV uptake could falter if EV charging networks are not perceived as being secure.
A sleeper risk that may explode into a massive problem is EV owners’ right to repair their vehicles. In 2020, Massachusetts passed a law that allows vehicle owners to take their cars to whatever repair shop they wish and gives independent repair shops the right to access real-time vehicle data for diagnostic purposes. Auto dealers have sued to overturn the law, and some automakers like Subaru and Kia have
disabled the advanced telematic systems in cars sold in Massachusetts, often without telling new customers about it. GM and Stellantis have also said they cannot comply with the Massachusetts law, and are not planning to do so because it would compromise their vehicles’ safety and cybersecurity. The Federal Trade Commission is looking into the right-to-repair issue, and President Biden has come out in support of it.
You expect me to do what, exactly?
Failure to change consumer behavior poses another major risk to the EV transition. Take charging. It requires a new consumer behavior in terms of
understanding how and when to charge, and what to do to keep an EV battery healthy. The information on the care and feeding of a battery as well as how to maximize vehicle range can resemble a manual for owning a new, exotic pet. It does not help when an automaker like Ford tells its F-150 Lightning owners they can extend their driving range by relying on the heated seats to stay warm instead of the vehicle’s climate control system.
Keeping in mind such issues, and how one might work around them, increases a driver’s cognitive load—things that must be remembered in case they must be acted on. “Automakers spent decades reducing cognitive load with dash lights instead of gauges, or automatic instead of manual transmissions,” says
University of Michigan professor emeritus John Leslie King, who has long studied human interactions with machines.
King notes, “In the early days of automobiles, drivers and chauffeurs had to monitor and be able to fix their vehicles. They were like engineers. For a time in New York City, one had to be a licensed engineer to drive a steam-powered auto. In some aspects, EV drivers return to these roots. This might change over time, but for now it is a serious issue.”
The first-ever BMW iX1 xDrive30, Mineral White metallic, 20“ BMW Individual Styling 869i BMW AG
This cognitive load keeps changing as well. For instance, “common knowledge” about when EV owners should charge is not set in concrete. The long-standing mantra has been to charge EV batteries at home at night, when electricity rates and stress on the electric grid are low. Recent research from Stanford University says this is wrong, at least for Western states. The research shows that electricity rates should instead encourage EV charging during the day, at work or at public chargers, to prevent evening grid peak-demand problems, which could increase by as much as 25 percent in a decade. The Wall Street Journal quotes the study’s lead author, Siobhan Powell, as saying that if everyone were charging their EVs at night all at once, “it would cause really big problems.”
Asking EV owners to refrain from charging their vehicles at home during the night is going to be difficult, since EVs are being sold on the convenience of charging at home.
Transportation Secretary Pete Buttigieg emphasized this very point when describing how great EVs are to own: “And the main charging infrastructure that we count on is just a plug in the wall.”
Another behavior change risk relates to automakers’ desired EV owner post-purchase buying behavior. Automakers see EV (and ICE vehicle) advanced software and connectivity as a gateway to a
software-as-a-service model to generate new, recurring revenue streams across the life of the vehicle. Automakers seem to view EVs as razors through which they can sell software as the razor blades. Monetizing vehicle data and subscriptions could generate $1.5 trillion by 2030, according to McKinsey.
VW thinks that it will generate “triple-digit-millions” in future sales through selling customized subscription services, like offering autonomous driving on a pay-per-use basis. It envisions customers would be willing to
pay 7 euros per hour for the capability. Ford believes it will earn $20 billion, Stellantis some $22.5 billion and GM $20 to $25 billion from paid software-enabled vehicle features by 2030.
Already for ICE vehicles, BMW is reportedly offering an $18-a-month subscription (or $415 for “unlimited” access) for heated front seats in multiple countries, though not yet in the US. GM has started charging $1,500 for a three-year “optional” OnStar subscription on all Buick and GMC vehicles as well as the Cadillac Escalade SUV, whether the owner uses it or not. And Sony and Honda have announced that their luxury EV will be subscription-based, although they have not defined exactly what this means in terms of standard versus paid-for features. It would not be surprising to see them follow Mercedes’ lead: the automaker will increase the acceleration of its EQ series if an owner pays a $1,200-a-year subscription fee.
Essentially, automakers are trying to normalize paying for what used to be offered as standard or even an upgrade option. Whether they will be successful is debatable, especially in the U.S. “No one is going to pay for subscriptions,” says Strategic Vision’s Edwards, who points out that
microtransactions are absolutely hated in the gaming community. Automakers risk a major consumer backlash by using them.
To get to EVs at scale, each of the EV-related range, affordability, reliability, and behavior-change risks will need to be addressed by automakers and policy makers alike. With dozens of new battery electric vehicles becoming available for sale in the next two years, potential EV buyers now have a much greater range of options than before. The automakers that manage EV risks best, along with offering compelling overall platform performance, will be the ones to start clawing back some of their hefty EV investments.
No single risk may be a deal breaker for an early EV adopter, but for skeptical ICE vehicle owners, each risk is another reason not to buy, regardless of perceived benefits offered. If EV-only families are going to be the norm, the benefits of purchasing EVs will need to be above—and the risks associated with owning will need to match or be below—those of today’s and future ICE vehicles.
In the next articles of this series, we’ll explore the changes that may be necessary to personal lifestyles to achieve 2050 climate goals.
This sponsored article is brought to you by COMSOL.
The 1985 action-adventure TV series MacGyver showcased the life of Angus MacGyver, a secret agent who solved problems using items he had on hand. For example, in one episode, he made a heat shield out of used refrigerator parts. In another, he made a fishing lure with a candy wrapper. More than three decades later, the show still has relevance. The verb MacGyver, to design something in a makeshift or creative way, was added to the Oxford English Dictionary in 2015.
Try putting your MacGyver skills to the test: If you were handed some CDs, what would you make out of them? Reflective wall art, mosaic ornaments, or a wind chime, perhaps? What about a miniaturized water treatment plant?
This is what a team of engineers and researchers are doing at Eden Tech, a company based in Paris, France, that specializes in the development of microfluidics technology. Within their R&D department, Eden Cleantech, they are developing a compact, energy-saving water treatment system to help tackle the growing presence of micropollutants in wastewater. To analyze the performance of their AKVO system (named after the Latin word for water, aqua), which is made from CDs, Eden Tech turned to multiphysics simulation.
Contaminants of Emerging Concern
“There are many ways micropollutants make it into wastewater,” says Wei Zhao, a senior chemical engineer and chief product officer at Eden Tech. The rise of these microscopic chemicals in wastewater worldwide is a result of daily human activities. For instance, when we wash our hands with soap, wipe down our sinks with cleaning supplies, or flush medications out of our bodies, various chemicals are washed down the drain and end up in sewage systems. Some of these chemicals are classified as micropollutants, or contaminants of emerging concern (CECs). In addition to domestic waste, agricultural pollution and industrial waste are also to blame for the rise of micropollutants in our waterways.
Micropollutants are added to the world’s lakes, rivers, and streams every day. Many conventional wastewater treatment plants are not equipped to remove these potentially hazardous chemical residues from wastewater.
Unfortunately, many conventional wastewater treatment plants (WWTPs; Figure 1) are not designed to remove these contaminants. Therefore, the contaminants are often reintroduced into various bodies of water, including rivers, streams, lakes, and even drinking water. Although the risk they pose to human and environmental health is not fully understood, the increasing amount of pollution found in the world’s bodies of water is cause for concern.
With this growing problem in mind, Eden Tech got to work on developing a solution, and thus AKVO was born. Each AKVO CD core is designed to have a diameter of 15 cm and a thickness of 2 mm. One AKVO cartridge is composed of varying numbers of stacked CDs, combined to create a miniaturized factory. One AKVO core treats 0.5 to 2 m³ of water per day, which means that an AKVO system composed of 10,000 CDs can meet average municipal needs. This raises the question: How can a device made from CDs decontaminate water?
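The municipal-scale claim follows from simple arithmetic. A back-of-the-envelope check (the per-person wastewater figure below is an assumption, not from the article):

```python
# Back-of-the-envelope throughput for a stacked-CD AKVO system,
# using the per-core figure quoted above (0.5 to 2 m³/day per CD).
cds = 10_000
low, high = 0.5, 2.0              # m³ of water treated per CD per day

print(f"System throughput: {cds * low:,.0f} to {cds * high:,.0f} m³/day")

# Assumed figure: roughly 0.2 m³ of wastewater per person per day.
per_capita = 0.2
print(f"Roughly a town of {cds * low / per_capita:,.0f} "
      f"to {cds * high / per_capita:,.0f} people")
```

At 5,000 to 20,000 m³ per day, a 10,000-CD system would plausibly cover a town of a few tens of thousands of people under these assumptions.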
A Sustainable Wastewater Treatment Method
A single AKVO system (Figure 2) consists of a customizable cartridge filled with stacked CDs that each have a microchannel network inscribed on them. It removes undesirable elements in wastewater, like micropollutants, by circulating the water in its microchannel networks. These networks are energy savvy because they only require a small pump to circulate and clean large volumes of water. The AKVO system’s cartridges can easily be replaced, with Eden Tech taking care of their recycling.
AKVO’s revolutionary design combines photocatalysis and microfluidics into one compact system. Photocatalysis, a type of advanced oxidation process (AOP), is a fast and effective way to remove micropollutants from wastewater. Compared to other AOPs, it is considered safer and more sustainable because it is powered by a light source. During photocatalysis, light is absorbed by photocatalysts that create electron-hole pairs, which generate free hydroxyl radicals that react with target pollutants and degrade them. The combination of photocatalysis and microfluidics for the treatment of wastewater has never been done before. “It is a very ambitious project,” said Zhao. “We wanted to develop an innovative method in order to provide an environmentally friendly, efficient way to treat wastewater.” AKVO’s current design did not come easily, as Zhao and his team faced several design challenges along the way.
Overcoming Design Challenges
When in use, a chemical agent (catalyst) and wastewater are dispersed through AKVO’s microchannel walls. The purpose of the catalyst, titanium dioxide in this case, is to react with the micropollutants and help remove them in the process. However, AKVO’s fast flow rate complicates this action. “The big problem is that [AKVO] has microchannels with fast flow rates, and sometimes when we put the chemical agent inside one of the channels’ walls, the micropollutants in the wastewater cannot react efficiently with the agent,” said Zhao. In order to increase the opportunity of contact between the micropollutants and the immobilized chemical agent, Zhao and his team opted to use a staggered herringbone micromixer (SHM) design for AKVO’s microchannel networks (Figure 3).
To analyze the performance of the SHM design to support chemical reactions for micropollutant degradation, Zhao used the COMSOL Multiphysics software.
Simulating Chemical Reactions for Micropollutant Degradation
In his work, Zhao built two different models in COMSOL Multiphysics (Figure 4), named the Explicit Surface Adsorption (ESA) model and the Converted Surface Concentration (CSC) model. Both of these models account for chemical and fluid phenomena.
In both models, Zhao found that AKVO’s SHM structure creates vortices in the flow moving through it, which gives the micropollutants and the chemical agent a longer reaction period and enhances the mass transfer between fluid layers. However, the results of the ESA model showed that the design purified only about 50 percent of the micropollutants under treatment, less than Zhao expected.
Unlike the ESA model (Figure 5), in the CSC model, it is assumed that there is no adsorption limitation. Therefore, as long as a micropollutant arrives at the surface of a catalyst, a reaction happens, which has been discussed in existing literature (Ref. 1). In this model, Zhao analyzed how the design performed for the degradation of six different micropollutants, including gemfibrozil, ciprofloxacin, carbamazepine, clofibric acid, bisphenol A, and acetaminophen (Figure 6). The results of this model were in line with what Zhao expected, with more than 95 percent of the micropollutants being treated.
“We are really satisfied with the results of COMSOL Multiphysics. My next steps will be focused on laboratory testing [of the AKVO prototype]. We are expecting to have our first prototype ready by the beginning of 2022,” said Zhao. The prototype will eventually be tested at hospitals and water treatment stations in the south of France.
Using simulation for this project has helped the Eden Tech team save time and money. Developing a prototype of a microfluidic system, like AKVO, is costly. To imprint microchannel networks on each of AKVO’s CDs, a microchannel photomask is needed. According to Zhao, to fabricate one photomask would cost about €3000 (3500 USD). Therefore, it is very important that they are confident that their system works well prior to its fabrication. “COMSOL Multiphysics has really helped us validate our models and our designs,” said Zhao.
Pioneer in the Treatment of Micropollutants
In 2016, Switzerland introduced legislation mandating that wastewater treatment plants remove micropollutants from wastewater. Their goal? Filter out over 80 percent of micropollutants at more than 100 Swiss WWTPs. Following their lead, many other countries are currently thinking of how they want to handle the growing presence of these contaminants in their waterways. AKVO has the potential to provide a compact, environmentally friendly way to help slow this ongoing problem.
The next time you go to throw out an old CD, or any other household item for that matter, ask yourself: What would MacGyver do? Or, better yet: What would Eden Tech do? You might be holding the building blocks for their next innovative design.
C. S. Turchi, D. F. Ollis, “Photocatalytic degradation of organic water contaminants: Mechanisms involving hydroxyl radical attack,” Journal of Catalysis, Vol. 122, p. 178, 1990.
MacGyver is a registered trademark of CBS Studios Inc. COMSOL AB and its subsidiaries and products are not affiliated with, endorsed by, sponsored by, or supported by CBS Studios Inc.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
With the historic Kunming-Montreal Agreement of 18 December 2022, more than 200 countries agreed to halt and reverse biodiversity loss. But becoming nature-positive is an ambitious goal, also held back by the lack of efficient and accurate tools to capture snapshots of global biodiversity. This is a task where robots, in combination with environmental DNA (eDNA) technologies, can make a difference.
Our recent findings show a new way to sample surface eDNA with a drone, which could be helpful in monitoring biodiversity in terrestrial ecosystems. The eDrone can land on branches and collect eDNA from the bark using a sticky surface. The eDrone collected surface eDNA from the bark of seven different trees, and by sequencing the collected eDNA we were able to identify 21 taxa, including insects, mammals, and birds.
How can we bring limbed robots into real-world environments to complete challenging tasks? Dr. Dimitrios Kanoulas and the team at UCL Computer Science’s Robot Perception and Learning Lab are exploring how we can use autonomous and semi-autonomous robots to work in environments that humans cannot.
When it rains, it pours—and we’re designing the Waymo Driver to handle it. See how shower tests, thermal chambers, and rugged tracks at our closed-course facilities ensure our system can navigate safely, no matter the forecast.
This sponsored article is brought to you by COMSOL.
Over 80 million magnetic resonance imaging (MRI) scans are conducted worldwide every year. MRI systems come in many different shapes and sizes, and are identified by their magnetic field strength. These scanners can range from below 0.55 tesla (T) to 3 T and beyond, where tesla is the unit for the static magnetic field strength. For patients with implanted metallic medical devices, the strong magnetic fields generated by MRI systems can pose several safety concerns.
For instance, high-powered magnets generate forces and torques that can cause the implant to migrate and potentially harm the patient. In addition, the gradient coils in MRI systems, used for spatial localization, can cause gradient-induced heating, vibrations, stimulation of the tissue, and device malfunction. Lastly, the large radiofrequency (RF) coil in MRI systems can cause the electrically conductive implant to electromagnetically resonate (called the “antenna effect”), resulting in RF-induced heating that can potentially burn the patient (Ref. 1).
MED Institute, a full-service contract research organization (CRO) for the medical device industry, is using multiphysics simulation to better understand the effects of RF-induced heating of medically implanted devices for patients that need MRI scans (Ref. 2).
Standardized Test Methods for Medical Devices
MED Institute provides support throughout the entire product development cycle. Its MRI Safety team helps manufacturers evaluate and perform physical testing of their medical devices for safety and compliance in the MRI environment (Figure 1). The team works closely with the Food and Drug Administration (FDA), which oversees the development of medical products to ensure safe and effective use. Furthermore, the team complies with the standards of the American Society for Testing and Materials (ASTM) and International Organization for Standardization (ISO). Specifically, it follows the ASTM F2182 standard to measure RF-induced heating of a medical implant within a gel phantom (Figure 2) and follows ISO/TS 10974 to evaluate electrically active implantable medical devices (AIMD) during MRI.
The gel phantom used for testing is a rectangular acrylic container filled with a conductive gel that approximates the thermal and electrical properties of average human tissue (Ref. 3). The phantom is placed on the patient table inside the RF coil of an MRI scanner and fiber optic temperature probes (1 mm in diameter) are attached to the device before submerging it into the gel. The probes measure the temperature changes experienced by the device during the MRI scan. This type of physical experiment is used often, but it poses some potential problems. For instance, movement within the phantom can introduce uncertainty into the experiment, and inaccurate probe placement can lead to invalid results. In addition, depending on the materials of construction and their magnetic susceptibility, magnetic force could also be an issue (Ref. 4).
To help address these issues, the team at MED Institute uses computational modeling and simulation as an alternative to physical testing. David Gross, PhD, PE, Director of MRI Safety Evaluations and Engineering Simulations, leads a team of analysts that use simulation to gain a better understanding of physics-based problems. He says, “The simulation provides us with 3D temperature contours anywhere within a volume of interest; we are not limited to discrete point-probe measurements, and we do not have to worry about the inaccuracies of the equipment or uncertainty of probe placement from the experiment.”
The team has experience conducting these simulations for closed-bore MRI systems, in which a patient is contained in a compact tube. The team is now using simulation to perform these same analyses for open-bore systems (Figure 3), which have wider physical access, making them beneficial for “imaging pediatric, bariatric, geriatric and claustrophobic patients”, as is explained on the MED Institute website (Ref. 5).
Multiphysics Simulation for RF-Induced Heating
With COMSOL Multiphysics, MED Institute is able to evaluate the RF-induced temperature rise of implants and compare the results of various sizes and constructs of a device within a product family to determine a worst-case configuration. The analysts at MED can import a CAD file of a client’s device using the CAD Import Module, an add-on to COMSOL Multiphysics. In terms of RF-induced heating, the team uses the RF Module and Heat Transfer Module add-on products to combine the physics of electromagnetics with transient heat transfer. For analyzing electromagnetics, the RF Module enables the use of Maxwell’s equations to solve for the wave equation at every point within the model that is impacted by electromagnetic fields. This is done in a steady-state frequency domain, which is then sequentially coupled to the transient heat transfer. With the Heat Transfer Module, the team is also able to solve heat conduction equations.
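The sequential coupling described here (a steady-state frequency-domain electromagnetic solve whose losses feed a transient heat transfer solve) can be illustrated with a minimal 1D finite-difference sketch. This is not the COMSOL workflow itself; the geometry, material properties, and the Gaussian heat source standing in for the EM solution are all simplified assumptions:

```python
import numpy as np

# Step 1 (stand-in for the frequency-domain EM solve): assume a known
# volumetric heat source q(x), concentrated near an "implant tip".
n, L = 101, 0.1                    # grid points, domain length (m)
x = np.linspace(0, L, n)
dx = x[1] - x[0]
q = 5e4 * np.exp(-((x - L / 2) ** 2) / (2 * 0.005**2))  # W/m³, assumed profile

# Step 2: explicit transient heat conduction, rho*c*dT/dt = k*d²T/dx² + q,
# with gel-phantom-like properties (assumed values).
k, rho, c = 0.5, 1000.0, 4200.0    # W/(m·K), kg/m³, J/(kg·K)
alpha = k / (rho * c)
dt = 0.4 * dx**2 / alpha           # below the explicit stability limit
T = np.zeros(n)                    # temperature rise above baseline (K)
t_end, t = 300.0, 0.0              # simulate a 5-minute exposure
while t < t_end:
    lap = np.zeros(n)
    lap[1:-1] = (T[:-2] - 2 * T[1:-1] + T[2:]) / dx**2
    T[1:-1] += dt * (alpha * lap[1:-1] + q[1:-1] / (rho * c))
    t += dt                        # boundaries held at zero temperature rise

print(f"Peak temperature rise after {t_end:.0f} s: {T.max():.2f} K")
```

The peak rise appears at the source location, mirroring how the simulated hot spot tells the team where to place temperature probes in the physical test.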
In the example below, MED Institute imported a CAD file of a knee implant into the COMSOL Multiphysics software. The geometry of the implant included a stem extension, tibial tray, femoral tray, and other components. All of these components can have various sizes and can be assembled in various ways, and patients with the implant can be scanned in various MRI systems that create different electromagnetic fields. With the overwhelming amount of permutations that these variables can produce, it is often not clear which configuration would result in the worst-case RF-induced heating.
“With our Medical Device Development Tool (MDDT), we can not only augment physical testing but even replace it with simulation in some cases. The immediate, positive results are that our clients are able to have their products evaluated quicker and at less cost because we are able to rely on the simulation.”
—David Gross, MED Institute Director of MRI Safety Evaluations and Engineering Simulations
“This is where the use of simulation comes in; you focus your efforts on the primary factors that can change the resonance of a particular implant,” Gross says. By using the COMSOL software, the organization is able to better understand the relative bounds of where it would expect to see resonance and how the device behaves under different electromagnetic fields. This helps with performing sensitivity analyses, where the team can test what causes the change in resonance, such as modifying the diameter of the stem or other components of the implant. For this particular case, the team ran hundreds of simulations to determine the worst-case device size and worst-case RF frequency.
Using worst-case analysis is crucial in the verification process because it allows manufacturers to test different factors for a wide range of devices — such as determining which size brings the most complications — rather than conducting physical testing for every variant of one product (Ref. 6). “Performing multiple physical experiments becomes very expensive and time-consuming, especially when you account for the hourly cost of using a physical MRI scanner,” says Gross.
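The screening logic behind such a sweep can be sketched in a few lines. The heating model below is a made-up surrogate standing in for a full multiphysics solve per configuration, and the sizes and resonance rule are invented for illustration; only the overall pattern — enumerate sizes and frequencies, solve each, keep the worst case — reflects the process described above.

```python
import itertools

stem_diameters_mm = [9, 11, 13, 15]
stem_lengths_mm = [30, 60, 100, 150]
rf_frequencies_mhz = [64, 128]            # 1.5 T and 3 T Larmor frequencies

def predicted_rise(diam, length, freq):
    """Hypothetical surrogate: heating peaks when the stem length is near
    a resonant fraction of the RF wavelength in tissue."""
    wavelength_mm = 3.0e5 / freq / 9.0    # crude in-tissue wavelength guess
    resonance = 1.0 / (1.0 + abs(length - wavelength_mm / 4) / 25.0)
    return resonance * (10.0 / diam)      # thinner stems heat more here

worst = max(
    itertools.product(stem_diameters_mm, stem_lengths_mm, rf_frequencies_mhz),
    key=lambda cfg: predicted_rise(*cfg),
)
print("worst-case (diameter mm, length mm, frequency MHz):", worst)
```

In practice each call to the surrogate would be a full simulation run, which is why the team's sweep totaled hundreds of cases.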
As shown in Figure 4, the electric field in the gel phantom of a 1.2 T open-bore system (upper left) is very different from a 1.5 T closed-bore system (upper right). The knee implant was simulated in both systems, where the results show a different resonance and maximum temperature rise at the end of the stem (lower images).
Using COMSOL allowed the team to better understand how a device behaves under electromagnetic fields. With these results, the team was then able to determine where they should place temperature probes while physically testing the device in an actual MRI system to obtain temperature rise results.
FDA Qualification of MED Institute’s Virtual MRI Safety Evaluations
MED Institute’s experience with using simulation to test RF-induced heating of medical devices has inspired development of a promising new simulation tool that accelerates the product development cycle. The MED Institute team submitted this simulation tool to the FDA’s Medical Device Development Tool (MDDT) program, which allows the FDA to evaluate new tools with the purpose of furthering medical products and studies. As stated on the FDA website, “The MDDT program is a way for the FDA to qualify tools that medical device sponsors can choose to use in the development and evaluation of medical devices.” (Ref. 7) Once qualified, the FDA recognizes the tool as an official MDDT.
In November 2021, MED Institute was granted FDA qualification of its MDDT, “Virtual MRI Safety Evaluations of Medical Devices”. This is an evaluation process that involves using multiphysics modeling and simulation to test the interactions of medical devices in an MRI environment. The tool is used for modeling an RF coil of an MRI system, ASTM gel phantom, and a medical device placed within the gel. Simulation is then used to analyze the electromagnetics and the heat that generates around the device (Ref. 8).
After testing is complete, the device labeling is determined per ASTM F2503 or, for electrically active implants, per ISO/TS 10974 testing. The labeling is placed on the device packaging and inside the instructions for use (IFU) so that an MRI technologist or radiologist can see the relevant information for a patient with an implanted device.
“With our MDDT, we can not only augment physical testing but even replace it with simulation in some cases,” says Gross.
Modeling and Simulation Support from the FDA
Over the years, MED Institute has evaluated many medical devices for MRI safety with COMSOL Multiphysics simulations. It has found that COMSOL is a powerful and efficient platform for solving complex multiphysics problems. “The immediate, positive results are that our clients are able to have their products evaluated quicker and at less cost because we are able to rely on the simulation. It does not require them to send us the actual product to test for RF-induced heating,” says Gross.
The FDA has been supportive of computational modeling and is willing to evaluate and accept data from simulation in lieu of physical testing. “It is important for medical device sponsors to know that they have the encouragement and support of the Agency,” Gross says. MED Institute has had the privilege of working alongside the FDA for many years for the benefit of patients. “It goes to show that they are invested and believe in the power of modeling and simulation,” Gross adds.
Almost 95 percent of the leaders said incorporating technologies that would help their organization become more sustainable and energy efficient was a top priority.
The executives said they thought telecommunications, transportation, energy, and financial services would be the areas most affected by technology this year.
They also shared what areas would benefit from 5G implementation.
The impact of 5G
Almost all of the tech leaders agreed that 5G is likely to impact vehicle connectivity and automation the most. They said areas that will benefit from 5G include remote learning and education; telemedicine; live streaming of sports and other entertainment programs; day-to-day communications; and transportation and traffic control.
About 95 percent said satellites that are used to provide connectivity in rural areas will enable devices with 5G to connect from anywhere at any time. In an interview with IEEE Transmitter about the results, IEEE Senior Member Eleanor Watson predicted that these satellites will be game-changers because they “enable leapfrogging off the need to build very expensive terrestrial infrastructure. They’re also the ultimate virtual private network—VPN—for extrajurisdictional content access.”
Automation through AI and digital twins
Nearly all the tech leaders—98 percent—said routine tasks and processes such as data analysis will be automated thanks to AI-powered autonomous collaborative software and mobile robots, allowing workers to be more efficient and effective.
The same percentage agreed that digital twin technology and virtual simulations, which make designing, developing, and testing prototypes and manufacturing processes more efficient, will become more important. A digital twin is a virtual model of a real-world object, machine, or system that can be used to assess how its real-world counterpart is performing.
Meetings in the metaverse
The leaders are considering ways to use the metaverse in their operations. Ninety-one percent said they plan to use the technology for corporate training sessions, conferences, and hybrid meetings. They said that 5G and ubiquitous connectivity, virtual reality headsets, and augmented reality glasses will be important for advancing the development of the metaverse.
Companies are looking to the metaverse to help them with their sustainable development goals. IEEE Senior Member Daozhuang Lin told IEEE Transmitter that “metaverse-related technology will be a major contributor to reducing carbon emissions because it allows technologists and engineers to perform simulations, rather than relying on real-world demonstrations that run on traditional energy.” But for the technology to really take off, the respondents said, more innovations are needed in 5G and ubiquitous connectivity, virtual-reality headsets, augmented-reality glasses, and haptic devices.
Read more about IEEE members’ insight on the survey results on IEEE Transmitter.
A powerful trillion-watt laser shot at the sky can generate lightning rods in the air that can guide lightning strikes to keep them from causing havoc, a new study finds.
To date, the most common and effective form of protection against lightning is the lightning rod invented by Benjamin Franklin in 1752. These pointed electrically conductive metal rods intercept lightning strikes and guide their electric current safely to the ground.
However, a key drawback of a conventional lightning rod is that the radius of its area of protection is roughly equal to its height. Since there are practical limits to how tall one can build a lightning rod, this means they may not prove useful at protecting large areas, including sensitive infrastructure such as airports, rocket launchpads and nuclear power plants, says study senior author Jean-Pierre Wolf, a physicist at the University of Geneva.
“This is the first demonstration that lightning can be controlled by a laser.” —Jean-Pierre Wolf, University of Geneva
Scientists first suggested using lasers to generate lightning rods in the air nearly 50 years ago. “The idea is to create a very long lightning rod with the laser,” Wolf says.
In the new study, researchers conducted experiments during the summer of 2021 at the top of Mount Säntis, which at 2,502 meters above sea level, is the highest mountain in the Alpstein massif of northeastern Switzerland. The laser was activated every time storms were forecast between June and September, with air traffic closed over the area during these tests.
Wolf and his colleagues sought to protect a 124-meter transmitter tower equipped with a traditional lightning rod at the summit belonging to telecommunications provider Swisscom. This tower is struck by lightning about 100 times a year, and scientists had previously equipped it with multiple sensors to analyze these strikes.
Near the tower, the researchers installed a near-infrared laser the size of a large car. It fired pulses, each packing about a half-joule of energy and lasting a picosecond (a trillionth of a second), roughly a thousand times a second, with a peak power of a terawatt (trillion watts). (It also shot a visible green beam to help show the laser’s path.)
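A quick back-of-envelope check, using only the figures quoted above, shows how a half-joule pulse yields terawatt-class peak power while the average power stays modest:

```python
# Back-of-envelope check of the laser figures quoted above.
pulse_energy_j = 0.5          # about half a joule per pulse
pulse_duration_s = 1e-12      # one picosecond
repetition_rate_hz = 1000     # roughly a thousand pulses per second

peak_power_w = pulse_energy_j / pulse_duration_s      # energy / duration
average_power_w = pulse_energy_j * repetition_rate_hz # energy x rate

print(f"peak power:    {peak_power_w:.1e} W (about half a terawatt)")
print(f"average power: {average_power_w:.0f} W")
```

The enormous gap between peak and average power is what lets a machine drawing household-scale electricity momentarily ionize air.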
“Imagine transporting a 10-ton laser to 2,500-meter altitude on a mountain with helicopters, making it run in very harsh conditions, tracking lightning in extreme weather like winds up to 200 kilometers per hour, heavy rain, hail, temperatures varying from -10 degrees to 20 degrees Celsius in the same day, and then, when it works, you get a massive lightning bolt some tens of meters next to you—and you’re so happy,” Wolf says.
The laser pulses can alter the refractive index of the air—the property of a material that determines how quickly light travels within it. This can make the air behave like a series of lenses.
After crossing this lensing air, the intense, short laser pulses can rapidly ionize and heat air molecules, expelling them from the path of the beam at supersonic speeds. This leaves behind a channel of low-density air for roughly a millisecond. These “filaments” possess high electrical conductivity, can thus serve as lightning rods, and can reach up to 100 meters in length. The researchers could adjust the laser to create filaments that appear up to a kilometer from the machine.
In experiments, the scientists created filaments above, but near, the tip of the tower’s lightning rod. This essentially boosted the rod’s height by at least 30 meters, extending its area of protection so that lightning would not strike parts of the tower otherwise outside the rod’s shelter, says study lead author Aurélien Houard, a research scientist at ENSTA Paris (École nationale supérieure de techniques avancées).
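Since a conventional rod's protection radius is roughly equal to its height, the protected ground area scales with the square of the height. Taking the 124-meter tower and the 30-meter laser extension from above, a short calculation shows why even a modest extension is worthwhile:

```python
import math

# Protection radius ~ height, so protected area scales as height squared.
tower_height_m = 124.0
laser_extension_m = 30.0

area_before = math.pi * tower_height_m ** 2
area_after = math.pi * (tower_height_m + laser_extension_m) ** 2

print(f"protected area gain: {area_after / area_before:.2f}x")
```

A 30-meter boost thus enlarges the nominally protected area by roughly half again, which is why the researchers' long-term goal of 500-meter extensions would be transformative.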
The laser operated for more than six hours during thunderstorms happening within three kilometers of the tower. The tower was hit by at least 16 lightning flashes, all of which streaked upward.
Four of these flashes occurred while the laser was operating. High-speed camera footage and radio and X-ray detectors showed the laser helped guide the course of these discharges. One of these guided strikes was recorded on camera and revealed it followed the laser path for nearly 60 meters.
During tests carried out on the summit of Mt. Säntis by Jean-Pierre Wolf and Aurélien Houard’s team, the scientists noted that lightning discharges followed laser beams for several dozen meters before reaching the Swisscom telecommunications tower (in red and white). Xavier Ravinet/UNIGE
“This is the first demonstration that lightning can be controlled by a laser,” Wolf says.
Although lab experiments had suggested that lasers could help guide lightning strikes, previous field attempts over the past 20 or so years had failed. Wolf, Houard, and their colleagues suggest their new work may have succeeded because the pulse rate of their laser was hundreds of times greater than in prior attempts. The more pulses are used, the greater the chance one might successfully intercept all of the activity leading up to a lightning flash. In addition, higher pulse rates are likely better at keeping filaments electrically conductive, they added.
Wolf noted their work is not geoengineering research. “We are not modifying the climate,” he says. “We deflect lightning to protect areas.”
In the long term, the scientists would like to use lasers to extend lightning rods by 500 meters. In addition, they would like to run experiments at sites such as airports and rocket launchpads, Wolf notes.
The researchers detailed their findings 16 January in the journal Nature Photonics.
All the Data Apple Collects About You—and How to Limit It (Mon, 16 Jan 2023): Cupertino puts privacy first in a lot of its products. But the company still gathers a bunch of your information.
This sponsored article is brought to you by COMSOL.
History teaches that the Industrial Revolution began in England in the mid-18th century. While that era of sooty foundries and mills is long past, manufacturing remains essential — and challenging. One promising way to meet modern industrial challenges is by using additive manufacturing (AM) processes, such as powder bed fusion and other emerging techniques. To fulfill its promise of rapid, precise, and customizable production, AM demands more than just a retooling of factory equipment; it also calls for new approaches to factory operation and management.
That is why Britain’s Manufacturing Technology Centre (MTC) has enhanced its in-house metal powder bed fusion AM facility with a simulation model and app to help factory staff make informed decisions about its operation. The app, built using the Application Builder in the COMSOL Multiphysics software, shows the potential for pairing a full-scale AM factory with a so-called “digital twin” of itself.
“The model helps predict how heat and humidity inside a powder bed fusion factory may affect product quality and worker safety,” says Adam Holloway, a technology manager within the MTC’s modeling team. “When combined with data feeds from our facility, the app helps us integrate predictive modeling into day-to-day decision-making.” The MTC project demonstrates the benefits of placing simulation directly into the hands of today’s industrial workforce and shows how simulation could help shape the future of manufacturing.
“We’re trying to present the findings of some very complex calculations in a simple-to-understand way. By creating an app from our model, we can empower staff to run predictive simulations on laptops during their daily shifts.” —Adam Holloway, MTC Technology Manager
Additive Manufacturing for Aerospace With DRAMA
To help modern British factories keep pace with the world, the MTC promotes high-value manufacturing throughout the United Kingdom. The MTC is based in the historic English industrial city of Coventry (Figure 2), but its focus is solely on the future. That is why the team has committed significant human and technical resources to its National Centre for Additive Manufacturing (NCAM).
“Adopting AM is not just about installing new equipment. Our clients are also seeking help with implementing the digital infrastructure that supports AM factory operations,” says Holloway. “Along with enterprise software and data connectivity, we’re exploring how to embed simulation within their systems as well.”
The NCAM’s Digital Reconfigurable Additive Manufacturing for Aerospace (DRAMA) project provides a valuable venue for this exploration. Developed in concert with numerous manufacturers, the DRAMA initiative includes the new powder bed fusion AM facility mentioned previously. With that mini factory as DRAMA’s stage, Holloway and his fellow simulation specialists play important roles in making its production of AM aerospace components a success.
Making Soft Material Add Up to Solid Objects
What makes a manufacturing process “additive”, and why are so many industries exploring AM methods? In the broadest sense, an additive process is one where objects are created by adding material layer by layer, rather than removing it or molding it. A reductive or subtractive process for producing a part may, for example, begin with a solid block of metal that is then cut, drilled, and ground into shape. An additive method for making the same part, by contrast, begins with empty space! Loose or soft material is then added to that space (under carefully controlled conditions) until it forms the desired shape. That pliable material must then be solidified into a durable finished part.
Different materials demand different methods for generating and solidifying additive forms. For example, common 3D printers sold to consumers produce objects by unspooling warm plastic filament, which bonds to itself and becomes harder as it cools. By contrast, the metal powder bed fusion process (Ref. 1) begins with, as its name suggests, a powdered metal which is then melted by applied heat and re-solidified when it cools. A part produced via the metal powder bed fusion process can be seen in Figure 3.
How Heat and Humidity Affect Metal Powder Bed Fusion
“The market opportunities for AM methods have been understood for a long time, but there have been many obstacles to large-scale adoption,” Holloway says. “Some of these obstacles can be overcome during the design phase of products and AM facilities. Other issues, such as the impact of environmental conditions on AM production, must be addressed while the facility is operating.”
For instance, maintaining careful control of heat and humidity is an essential task for the DRAMA team. “The metal powder used for the powder bed fusion process (Figure 4) is highly sensitive to external conditions,” says Holloway. “This means it can begin to oxidize and pick up ambient moisture even while it sits in storage, and those processes will continue as it moves through the facility. Exposure to heat and moisture will change how it flows, how it melts, how it picks up an electric charge, and how it solidifies,” he says. “All of these factors can affect the resulting quality of the parts you’re producing.”
Careless handling of powdered metal is not just a threat to product quality. It can threaten the health and safety of workers as well. “The metal powder used for AM processes is flammable and toxic, and as it dries out, it becomes even more flammable,” Holloway says. “We need to continuously measure and manage humidity levels, as well as how loose powder propagates throughout the facility.”
To maintain proper atmospheric conditions, a manufacturer could augment its factory’s ventilation with a full climate control system, but that could be prohibitively expensive. The NCAM estimated that it would cost nearly half a million pounds to add climate control to its relatively modest facility. But what if they could adequately manage heat and humidity without adding such a complicated system?
Responsive Process Management with Multiphysics Modeling
Perhaps using multiphysics simulation for careful process management could provide a cost-effective alternative. “As part of the DRAMA program, we created a model of our facility using the computational fluid dynamics (CFD) capabilities of the COMSOL software. Our model (Figure 5) uses the finite element method to solve partial differential equations describing heat transfer and fluid flow across the air domain in our facility,” says Holloway. “This enabled us to study how environmental conditions would be affected by multiple variables, from the weather outside, to the number of machines operating, to the way machines were positioned inside the shop. A model that accounts for those variables helps factory staff adjust ventilation and production schedules to optimize conditions,” he explains.
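To give a feel for the kind of question such a model answers — though the MTC's actual model is a full 3D CFD solve, not this — here is a zero-dimensional stand-in that treats the shop as a single well-mixed air volume and asks how absolute humidity responds to the moisture load and the ventilation rate. All numbers are invented for illustration.

```python
def humidity_history(volume_m3, vent_m3_s, source_g_s, outside_g_m3,
                     t_end_s=7200.0, dt_s=1.0):
    """Euler integration of the well-mixed balance V*dC/dt = S + Q*(C_out - C)."""
    c = outside_g_m3                      # start at outdoor humidity
    history = [c]
    for _ in range(int(t_end_s / dt_s)):
        c += dt_s * (source_g_s + vent_m3_s * (outside_g_m3 - c)) / volume_m3
        history.append(c)
    return history

# Illustrative numbers (not MTC's facility): a 2000 m^3 shop, 2 m^3/s of
# ventilation, and machines releasing 1 g/s of water vapour.
h = humidity_history(volume_m3=2000.0, vent_m3_s=2.0,
                     source_g_s=1.0, outside_g_m3=8.0)
print(f"steady-state humidity: {h[-1]:.2f} g/m^3")  # settles at C_out + S/Q
```

Even this crude balance captures the operational trade-off in the text: doubling the ventilation rate halves the humidity excess over outdoor air, which is the sort of lever the full model lets staff evaluate before acting.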
A Simulation App that Empowers Factory Staff
The DRAMA team made their model more accessible by building a simulation app of it with the Application Builder in COMSOL Multiphysics (Figure 6). “We’re trying to present the findings of some very complex calculations in a simple-to-understand way,” Holloway explains. “By creating an app from our model, we can empower staff to run predictive simulations on laptops during their daily shifts.”
The app user can define relevant boundary conditions for the beginning of a factory shift and then make ongoing adjustments. Over the course of a shift, heat and humidity levels will inevitably fluctuate. Perhaps factory staff should alter the production schedule to maintain part quality, or maybe they just need to open doors and windows to improve ventilation. Users can change settings in the app to test the possible effects of actions like these. For example, Figure 8 presents isothermal surface plots that show the effect that opening the AM machines’ build chambers has on air temperature, while Figure 9 shows how airflow is affected by opening the facility doors.
A Step Toward a “Factory-Level Digital Twin”
While the current app is an important step forward, it does still require workers to manually input relevant data. Looking ahead, the DRAMA team envisions something more integral, and therefore, more powerful: a “digital twin” for its AM facility. A digital twin, as described by Ed Fontes in a 2019 post on the COMSOL Blog (Ref. 2), is “a dynamic, continuously updated representation of a real physical product, device, or process.” It is important to note that even the most detailed model of a system is not necessarily its digital twin.
“To make our factory environment model a digital twin, we’d first provide it with ongoing live data from the actual factory,” Holloway explains. “Once our factory model was running in the background, it could adjust its forecasts in response to its data feeds and suggest specific actions based on those forecasts.”
“We want to integrate our predictive model into a feedback loop that includes the actual factory and its staff. The goal is to have a holistic system that responds to current factory conditions, uses simulation to make predictions about future conditions, and seamlessly makes self-optimizing adjustments based on those predictions,” Holloway says. “Then we could truly say we’ve built a digital twin for our factory.”
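The feedback loop Holloway describes can be schematized as read sensors, forecast, then recommend. Every function name and the toy "model" below are invented; in the real system the forecast step would call out to the CFD solver and the sensor step to live factory feeds.

```python
import random

def read_sensors():
    """Stand-in for the factory's live data feed."""
    return {"temp_c": 22.0 + random.uniform(-1, 1),
            "humidity_pct": 40.0 + random.uniform(-5, 5)}

def forecast_humidity(state, vent_open):
    """Toy predictive model: ventilation pulls humidity toward 35%."""
    target = 35.0 if vent_open else state["humidity_pct"] + 2.0
    return 0.5 * state["humidity_pct"] + 0.5 * target

def recommend_action(state, threshold_pct=45.0):
    """Compare the no-action forecast against a threshold, pick an action."""
    if forecast_humidity(state, vent_open=False) > threshold_pct:
        return "open ventilation"
    return "no action"

random.seed(0)
state = read_sensors()
print(state, "->", recommend_action(state))
```

Closing the loop — letting the recommendation actuate ventilation automatically and feeding the result back into the next sensor read — is what would distinguish a true digital twin from a model consulted offline.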
Simulation at Work on the Factory Floor
As an intermediate step toward building a full factory-level digital twin, the DRAMA simulation app has already proven its worth. “Our manufacturing partners may already see how modeling can help with planning an AM facility, but not really understand how it can help with operation,” Holloway says. “We’re showing the value of enabling a line worker to open up the app, enter in a few readings or import sensor data, and then quickly get a meaningful forecast of how a batch of powder will behave that day.”
Beyond its practical insights for manufacturers, the overall project may offer a broader lesson as well: By pairing its production line with a dynamic simulation model, the DRAMA project has made the entire operation safer, more productive, and more efficient. The DRAMA team has achieved this by deploying the model where it can do the most good — into the hands of the people working on the factory floor.
Each January, the editors of IEEE Spectrum offer up some predictions about technical developments we expect to be in the news over the coming year. You’ll find a couple dozen of those described in the following special report. Of course, the number of things we could have written about is far higher, so we had to be selective in picking which projects to feature. And we’re not ashamed to admit, gee-whiz appeal often shaped our choices.
The technical challenge of missile defense has been compared with that of hitting a bullet with a bullet. Then there is the still tougher economic challenge of using an expensive interceptor to kill a cheaper target—like hitting a lead bullet with a golden one.
Maybe trouble and money could be saved by shooting down such targets with a laser. Once the system was designed, built, and paid for, the cost per shot would be low. Such considerations led planners at the Pentagon to seek a solution from Lockheed Martin, which has just delivered a 300-kilowatt laser to the U.S. Army. The new weapon combines the output of a large bundle of fiber lasers of varying frequencies to form a single beam of white light. This laser has been undergoing tests in the lab, and it should see its first field trials sometime in 2023. General Atomics, a military contractor in San Diego, is also developing a laser of this power for the Army based on what’s known as the distributed-gain design, which has a single aperture.
Both systems offer the prospect of being inexpensive to use. The electric bill itself would range “from US $5 to $10” for a pulse lasting a few seconds, says Michael Perry, the vice president in charge of laser systems for General Atomics.
Why are we getting ray guns only now, more than a century after H.G. Wells imagined them in his sci-fi novel The War of the Worlds? Put it down partly to the rising demand for cheap antimissile defense, but it’s mainly the result of technical advances in high-energy lasers.
The old standby for powerful lasers employed chemical reactions in flowing gas. That method was clumsy, heavy, and dangerous, and the laser itself became a flammable target for enemies to attack. The advantage was that these chemical lasers could be made immensely powerful, a far cry from the puny pulsed ruby lasers that wowed observers back in the 1960s by punching holes in razor blades (at power levels jocularly measured in “gillettes”).
“With lasers, if you can see it, you can kill it.” —Robert Afzal, Lockheed Martin
By 2014, fiber lasers had reached the point where they could be considered for weapons, and one 30-kW model was installed on the USS Ponce, where it demonstrated the ability to shoot down speedboats and small drones at relatively close range. The 300-kW fiber lasers being employed now in the two Army projects emit about 100 kW in optical power, enough to burn through much heftier targets (not to mention quite a few gillettes) at considerable distances.
“A laser of that class can be effective against a wide variety of targets, including cruise missiles, mortars, UAVs, and aircraft,” says Perry. “But not reentry vehicles [launched by ballistic missiles].” Those are the warheads, and to ward them off, he says, you’d probably have to hit the rocket when it’s still in the boost phase, which would mean placing your laser in orbit. Laser tech is still far from performing such a feat.
Even so, these futuristic weapons will no doubt find plenty of applications in today’s world. Israel made news in April by field-testing an airborne antimissile laser called Iron Beam, a play on the name Iron Dome, the missile system it has used to down rockets fired from Gaza. The laser system, reportedly rated at about 100 kW, is still not in service and hasn’t seen combat, but one day it may be able to replace some, if not all, of Iron Dome’s missiles with photons. Other countries have similar capabilities, or say they do. In May, Russia said it had used a laser to incinerate a Ukrainian drone from 5 kilometers away, a claim that Ukraine’s president, Volodymyr Zelenskyy, derided.
Not all ray guns must be lasers, though. In March, Taiwan News reported that Chinese researchers had built a microwave weapon that in principle could be placed in orbit from where its 5-megawatt pulses could fry the electronic heart of an enemy satellite. But making such a machine in the lab is quite different from operating it in the field, not to mention in outer space, where supplying power and removing waste heat constitute major problems.
Because laser performance falls off in bad weather, lasers can’t be relied on by themselves to defend critically important targets. They must instead be paired with kinetic weapons—missiles or bullets—to create a layered defense system.
“With lasers, if you can see it, you can kill it; typically rain and snow are not big deterrents,” says Robert Afzal, an expert on lasers at Lockheed Martin. “But a thundercloud—that’s hard.”
Afzal says that the higher up a laser is placed, the less interference it will face, but there is a trade-off. “With an airplane you have the least amount of resources—least volume, least weight—that is available to you. On a ship, you have a lot more resources available, but you’re in the maritime atmosphere, which is pretty hazy, so you may need a lot more power to get to the target. And the Army is in between: It deals with closer threats, like rockets and mortars, and they need a deep magazine, because they deal with a lot more targets.”
In every case, the point is to use expensive antimissile missiles only when you must. Israel opted to pursue laser weapons in part because its Iron Dome missiles cost so much more than the unguided, largely homemade rockets they defend against. Some of the military drones that Russia and Ukraine are now flying wouldn’t break the budget of the better-heeled sort of hobbyist. And it would be a Pyrrhic victory indeed to shoot them from the sky with projectiles so costly that you went broke.
This article appears in the January 2023 print issue as “Economics Drives a Ray-Gun Resurgence .”
A dozen more tech milestones to watch for in 2023.
Dow, S&P 500 finish lower Thursday, kick off final month of a brutal year on a down note (Thu, 01 Dec 2022): U.S. stocks finished mostly lower on Thursday, kicking off the final month of a brutal year for investors on a downbeat note. The Dow Jones Industrial Average fell about 194 points, or 0.6%, ending near 34,395. The S&P 500 index fell 0.1%, while the Nasdaq Composite Index rose 0.1%, according to FactSet. Stocks rallied sharply on Wednesday after Federal Reserve Chairman Jerome Powell indicated the central bank may soon downsize its pace of rate hikes after a series of jumbo increases of 75 basis points to the Fed's policy rate. That has brought the benchmark rate to a range of 3.75% to 4%, its highest level in 15 years. But signs that U.S. inflation may be falling after being stuck near a 40-year high have encouraged Fed officials and investors, with the 10-year Treasury rate falling to 3.6% Thursday, its lowest yield in about two months, according to Dow Jones Market Data. The next big economic item for investors will be the release on Friday of jobs data, which could help determine the size of the Fed's next rate hike during its Dec. 13-14 Federal Open Market Committee meeting. The odds currently favor a 50 basis point increase.
Optical fiber has long since replaced copper wiring in core information networks. But that’s not the case for free-space optical (FSO) communications using optical lasers to transmit data through the air. Despite FSO having the potential to provide orders of magnitude more data capacity compared with that of the traditional radio-frequency communications space missions currently rely on, the technology has been stuck on the launch pad because of atmospheric interference that can absorb and scatter the signals, as well as the strict acquisition and tracking requirements for communicating between ground stations and orbiting satellites.
“We’ve been able to maintain a robust single-mode fiber coupling resulting in an uninterrupted 100-gigabits-per-second optical-data link,” says Shane Walsh, team leader of the project. “We do this by tracking the drone at angular rates up to 1.5 degrees a second—the equivalent of tracking a satellite in low Earth orbit (LEO).”
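The equivalence Walsh draws can be sanity-checked from first principles: for a satellite in a circular orbit, the peak angular rate seen by a ground observer at zenith is the orbital speed divided by the altitude. A back-of-envelope sketch (the altitudes below are illustrative choices, not figures from the article):

```python
import math

def leo_angular_rate_deg_s(altitude_m: float) -> float:
    """Peak angular rate (deg/s) seen by a ground observer as a satellite
    in a circular orbit passes directly overhead."""
    MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6     # mean Earth radius, m
    v = math.sqrt(MU / (R_EARTH + altitude_m))  # orbital speed, m/s
    return math.degrees(v / altitude_m)         # rad/s -> deg/s at zenith

rate_400 = leo_angular_rate_deg_s(400e3)  # ~1.1 deg/s at 400 km
rate_300 = leo_angular_rate_deg_s(300e3)  # ~1.5 deg/s for a very low orbit
```

Tracking at up to 1.5 degrees per second therefore covers overhead passes of even quite low LEO satellites.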
With the greater data capacity of coherent communications and its compatibility with standard fiber optics, Walsh says the way is now open to developing terabits-per-second communications between LEO satellites and suitably equipped ground stations. “You can think of it as taking ground-to-space communications from dial-up speeds to superfast broadband speeds,” he adds.
“This multidisciplinary approach by the researchers and the test results are impressive,” says Alan Willner, a professor of electrical engineering specializing in optical communications at the University of Southern California. “They appear to have mitigated some of the key issues with free-space optical communications such as communicating through turbulence, and in pointing and tracking at speeds needed to communicate with low-orbiting satellites.”
Benjamin Dix-Matthews, who is researching the optics for the project, describes the setup used. A PlaneWave Instruments L-350 direct-drive mount is employed to enable tracking of the target. Attached to it is an optical breadboard housing the tracking and acquisition systems. These include a GPS module for initial tracking, a closed-loop machine-vision (MV) system that provides intermediate acquisition and tracking, and a tip-tilt adaptive optics (AO) system consisting of a 2-inch-diameter mirror connected to a commercial piezo tip/tilt platform.
“We’re reaching the limits in what we can do, at least not without a lot of pain, in communicating using radio frequencies. So we will likely adopt new optical technologies. And I don’t see any obvious showstoppers to further advances using the researchers’ approach.” —Alan Willner
“The tip/tilt AO system operates at 200 hertz,” says Dix-Matthews. “It plays a dual role of correcting beam wander of the outgoing beam to maintain pointing accuracy, and it also corrects the angle of arrival of the return beam to maintain fiber-coupling efficiency.”
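The correction Dix-Matthews describes is, at its core, a fast closed feedback loop. The toy sketch below is a bare proportional controller, not the team's actual 200-hertz AO software, and the gain and disturbance values are invented for illustration:

```python
def tip_tilt_step(measured_offset, gain=0.5):
    """One control cycle: command a mirror move that cancels a fraction
    of the measured beam offset (simple proportional control)."""
    return -gain * measured_offset

# Toy simulation: a constant beam-wander disturbance plus the
# accumulated mirror correction, iterated at the loop rate.
disturbance = 1.0      # arbitrary angular units, made up for illustration
correction = 0.0
for _ in range(20):    # 20 cycles = 100 ms of a 200 Hz loop
    offset = disturbance + correction   # what the sensor would measure
    correction += tip_tilt_step(offset)
# the residual offset shrinks by the loop gain on every cycle
```

Even this crude loop drives the residual pointing error toward zero within a fraction of a second, which is why a modest 200-hertz update rate suffices against slowly varying beam wander.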
Given the challenges “of tracking and dealing with turbulence, coupling light into a single-mode fiber is no easy matter,” says Willner. “That the researchers are able to do so is noteworthy.”
To test the technology, Walsh says they set up the terminal on the roof of the Institute’s physics building, 34 meters above sea level. To simulate the angular motion of a LEO satellite, they employed a drone outfitted with a gimbal-mounted corner cube retroreflector that returns the 1,550-nm signal, four green beacon LEDs for machine-vision tracking, and a camera for orienting the gimbal. Also included are a GPS receiver and a barometric altimeter to relay initial coordinates to the optical terminal.
The drone flew at an altitude of 120 meters at a line-of-sight distance of up to 700 meters for a laser-beam-folded link length of 1.4 kilometers. Initially, the gimbal was adjusted manually by the pilot so that the beacons were oriented toward the mount. At the next stage, the GPS module transmitted the drone’s position to the terminal computer, which enabled software to point the terminal at the target. With the target located, the MV loop closed and the mount-pointing was adjusted to track the drone beacons. The tip/tilt loop was then closed to provide fine-scale tracking. The MV and tip/tilt loops were run concurrently to maintain tracking and to correct for beam wander and wind buffeting.
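The staged handover described above, from GPS coarse pointing to machine-vision tracking to the fine tip/tilt loop, can be sketched as a small state machine. The stage names and handover logic here are hypothetical; the team's actual control software is not public:

```python
# Stages in the order the article describes: GPS coarse pointing,
# machine-vision tracking, then MV plus the fine tip/tilt loop.
STAGES = ["gps_pointing", "mv_tracking", "mv_plus_tip_tilt"]

def next_stage(stage: str, locked: bool) -> str:
    """Advance the acquisition chain once the current loop reports lock;
    hold the current stage otherwise."""
    if not locked:
        return stage
    i = STAGES.index(stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]

stage = "gps_pointing"
for locked in [True, True, True]:   # each loop acquires in turn
    stage = next_stage(stage, locked)
# final stage: MV and tip/tilt loops run concurrently to hold the link
```

The key design point is that each loop only needs to be accurate enough to hand the target into the capture range of the next, finer loop.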
“We conducted some 30 test flights, flying the drone in passes simulating the tracking rates needed for free-space optical links to satellites in LEO,” says Walsh. “And in spite of atmospheric turbulence and macroscopic motion, we were able to sustain a 100-gigabit-per-second link.”
“There’s a good reason why space agencies and major corporations are interested in free-space optical communications,” says Willner. “We’re reaching the limits in what we can do, at least not without a lot of pain, in communicating using radio frequencies. So we will likely adopt new optical technologies. And I don’t see any obvious showstoppers to further advances using the researchers’ approach.”
The terminal needs to be optimized further, and the MV system will require changes for satellite use. Walsh says the next step is to test the technology with an aircraft flying at higher altitudes and after that, testing with a satellite would begin.
In addition, the researchers are developing a purpose-built optical-communications ground station that they believe will lead to the commercialization of the technology. To this end, they are working with three space-related companies in Australia, says Walsh. “So we anticipate receiving our first LEO downlink sometime in 2023, and hope to provide commercial high-data-rate coherent optical communications to and from space in the next few years.”
Match ID: 190 Score: 3.57 source: spectrum.ieee.org age: 63 days qualifiers: 3.57 mit
Neuberger wins clearance to manage assets in China for Chinese residents Mon, 28 Nov 2022 12:39:44 -0500 Neuberger Berman said Monday it became the second global institution to receive final approval from the China Securities Regulatory Commission (CSRC) to launch a wholly owned, newly established mutual fund business in China. Neuberger Berman will now be allowed to manage local assets for local clients, which has not been allowed previously. BlackRock Inc. was the first to receive approval. Patrick Liu, CEO of Neuberger Berman Fund Management (China) (FMC), said the country's commitment to opening up to high-quality financial services "will bring significant opportunities for local investors." Michelle Wei will become chief investment officer - equities of the FMC. Match ID: 191 Score: 3.57 source: www.marketwatch.com age: 66 days qualifiers: 3.57 mit
Are you looking for a way to create content that is both effective and efficient? If so, then you should consider using an AI content generator. AI content generators are a great way to create content that is both engaging and relevant to your audience.
There are a number of different AI content generator tools available on the market, and it can be difficult to know which one is right for you. To help you make the best decision, we have compiled a list of the top 10 AI content generator tools that you should use in 2022.
1. Jasper — AI Content Writing Tool Trusted by 50,000+ Marketers

Jasper is a content writing and content generation tool that uses artificial intelligence to identify the best words and sentences for your writing style and medium in the most efficient, quick, and accessible way.
It's trusted by 50,000+ marketers for creating engaging marketing campaigns, ad copy, blog posts, and articles within minutes which would traditionally take hours or days. Special Features:
Blog posts are optimized for search engines and can rank high on Google and other search engines. This is a huge plus for online businesses that want to drive traffic to their websites through content marketing.
99.9% Original Content – Jasper guarantees that all the content it generates is original, so businesses can focus on their online reputation rather than worrying about Google penalties for duplicate content.
Long-Form Article Writing – Jasper.ai is also useful for long-form writing, allowing users to create articles of up to 10,000 words without any difficulty. This is ideal for businesses that want to produce in-depth content that will capture their audience’s attention.
Generates a wide variety of content types
Guarantees 100% unique, plagiarism-free content
2. Copy.ai — Best for Short-Form Marketing Copy

Copy.ai is a content writing tool that enables its users to create marketing copy, social media posts, Facebook Ads, and many more formats by using more than 90 templates such as Bullet Points to Blogs, General Ads, Hook Text, etc.
The service is best suited to short-form business content such as product descriptions, website copy, marketing copy, and sales reports.
Provides a large set of templates where you input your data and the AI generates around 10 or more options, making it easy for the user to choose.
Smooth and efficient user experience with a Chrome extension that lets you transfer text from Copy.ai to a content management system, Google Docs, etc. without having to switch tabs.
Generates content in 25 languages; your input and output languages can differ, which helps if you are not a native English speaker.
The best option for short-length content generation such as marketing copy, sales reports, blogs, etc.
Facebook community and email support for users to understand the AI better and to interact with other users.
Beginner-friendly user experience with various templates to help the process of content generation.
Free plan and no credit card required.
The free plan from Copy AI is a welcome sight; however, it is only suitable for testing the software.
Free Trial – 7 days with 24/7 email support and 100 runs per day.
Pro Plan: $49/month; billed yearly it costs $420, i.e. $35 per month.
Wait! I've got a pretty sweet deal for you. Sign up through the link below, and you'll get (7,000 Free Words Plus 40% OFF) if you upgrade to the paid plan within four days.
3. Frase — AI Tool for Content Research, Creation, and SEO Optimization

Just like Outranking, Frase is an AI that helps you research, create, and optimize your content to make it high quality within seconds. Frase works on SEO optimization, shaping content to the liking of search engines by optimizing keywords.
Generate full-length, optimized content briefs in seconds and review the main keywords, headers, and concepts in your SEO competitors’ content in one intuitive research panel.
Write high-converting, SEO-optimized copy and make writer’s block a thing of the past with automated outlines, blog introductions, product descriptions, FAQs, and more.
An intuitive text editor that uses a topic model to score your content Optimization against your competitors.
A dashboard that automatically identifies and categorizes your best content opportunities. Frase uses your Google Search Console data to serve up actionable insights about what you should work on next.
Unlike Outranking's, Frase's interface is very user-friendly and accessible.
Content writers who need to do research gain a lot of time to write and ideate instead of juggling from one website to another, since data for researching a topic can be accessed easily within Frase.
Keyword analysis and SEO optimization have been made easier with Frase's Content Optimization.
Reports on competitors' websites help in optimizing your own articles and websites.
Content briefs make research very easy and efficient.
The paid plans are a bit pricey because they include many tools for content optimization.
Frase provides three plans for all users and a customizable plan for an enterprise or business.
Solo Plan: $14.99/Month and $12/Month if billed yearly with 4 Document Credits for 1 user seat.
Basic Plan: $44.99/month and $39.99/month if billed yearly with 30 Document Credits for 1 user seat.
Team Plan: $114.99/month and $99.99/month if billed yearly for unlimited document credits for 3 users.
*SEO Add-ons and other premium features for $35/month irrespective of the plan.
4. Article Forge — Popular Blog Writing Software for Efficiency and Affordability
Article Forge is another content generator that operates quite differently from the others on this list. Unlike Jasper.ai, which requires you to provide a brief and some information on what you want it to write, this tool only asks for a keyword. From there, it’ll generate a complete article for you.
Article Forge integrates with several other software, including WordAi, RankerX, SEnuke TNG, and SEO Autopilot.
The software takes information from high-ranking websites and then creates more credible articles to rank well in search engines.
If you want to generate content regularly, Article Forge can help. You can set it up to automatically generate articles based on your specific keyword or topic. Or, if you need a lot of content quickly, you can use the bulk content feature to get many articles in a short period.
Excellent for engaging with readers on multiple CMS platforms
No spinner content. Create multiple unique articles
Extremely quick and efficient
One of the cheapest options online
You need to pay attention to the content since it’s not always on point
Only ideal for decent-quality articles – if you’re lucky
What’s excellent about Article Forge is that they provide a 30-day money-back guarantee. You can choose between a monthly or yearly subscription. However, they offer only a free trial and no free plan:
Basic Plan: $27/Month
This plan allows users to produce up to 25k words each month. This is excellent for smaller blogs or those who are just starting.
Standard Plan: $57/month
This plan allows users to produce up to 250k words each month.
Unlimited Plan: $117/month
If you’re looking for an unlimited amount of content, this is the plan for you. You can create as many articles as you want, and there’s no word limit.
It’s important to note that Article Forge guarantees that all content generated through the platform passes Copyscape.
5. Rytr.me — Free AI Content Generator for Small Businesses, Bloggers, and Students

Rytr.me is a free AI content generator perfect for small businesses, bloggers, and students. The software is easy to use and can generate SEO-friendly blog posts, articles, and school papers in minutes.
Rytr can be used for various purposes, from writing blog posts to creating school papers. You can also generate captions for social media, product descriptions, and meta descriptions.
Rytr supports writing for over 30 languages, so you can easily create content in your native language.
The AI helps you write content in over 30 tones to find the perfect tone for your brand or project.
Rytr has a built-in plagiarism checker that ensures all your content is original and plagiarism free.
Easy to use
Creates unique content
It supports over 30 languages
Multi-tone writing capabilities
It can be slow at times
Grammar and flow could use improvement
Rytr offers a free plan that comes with limited features. It covers up to 5,000 characters generated each month and has access to the built-in plagiarism checker. If you want to use all the features of the software, you can purchase one of the following plans:
Saver Plan: $9/month, $90/year
Generate 100k characters per month
Access 40+ use-cases
Write in 30+ languages
Access 20+ tones
Built-in plagiarism checker
Generate up to 20 images per month with AI
Access to premium community
Create your own custom use-case
Unlimited Plan: $29/month, $290/year
Generate UNLIMITED* characters per month
Access 40+ use-cases
Write in 30+ languages
Access 20+ tones
Built-in plagiarism checker
Generate up to 100 images per month with AI
Access to premium community
Create your own custom use-case
Dedicated account manager
Priority email & chat support
6. Writesonic — Best AI Article Writing Software with a Grammar and Plagiarism Checker
Writesonic is a free, easy-to-use AI content generator. The software is designed to help you create copy for marketing content, websites, and blogs. It's also helpful for small businesses or solopreneurs who need to produce content on a budget.
The tone checker is a great feature that helps you ensure that your content is consistent with your brand’s voice. This is excellent for crafting cohesive and on-brand content.
The grammar checker is another valuable tool that helps you produce error-free content.
The plagiarism checker is a great way to ensure that your content is original.
Writesonic is free with limited features. The free plan is more like a free trial, providing ten credits. After that, you’d need to upgrade to a paid plan. Here are your options:
Access to all the short-form content templates like Facebook ads, product descriptions, paragraphs, and more.
Awesome tools to help you write short and long-form content like blog posts, ebooks, and more.
7. CopySmith — Produces Quality Content in Seconds
CopySmith is an AI content generator that can be used to create personal and professional documents, blogs, and presentations. It offers a wide range of features including the ability to easily create documents and presentations.
CopySmith also has several templates that you can use to get started quickly.
This software allows you to create product descriptions, landing pages, and more in minutes.
Offers rewritten content that is both unique and plagiarism free.
This feature helps you create product descriptions for your Shopify store that are SEO-friendly and attractive to customers.
This is an excellent tool for new content ideas.
Excellent for generating eCommerce-ready content
No credit card is required for the free trial
The blog content isn’t the best
Better suited for short copy
CopySmith offers a free trial with no credit card required. After the free trial, the paid plans are as follows:
Starter Plan: $19/month
Get 50 credits monthly with up to 20 plagiarism checks.
Professional Plan: $59/month
Upgrade to 400 credits per month with up to 100 plagiarism checks.
Enterprise – Create a custom-tailored plan by contacting the sales team.
8. Hypotenuse.ai — Best AI Writing Software for E-Commerce and Product Descriptions
Hypotenuse.ai is a free online tool that can help you create AI content. It's great for beginners because it allows you to create videos, articles, and infographics with ease. The software has a simple, easy-to-use interface, making it perfect for newcomers to AI content generation.
You can create custom-tailored copy specific to your audience’s needs. This is impressive since most free AI content generators do not offer this feature.
Hypotenuse takes data from social media sites, websites, and more sources to provide accurate information for your content.
If you’re selling a product online, you can use Hypotenuse to create automated product descriptions that are of high quality and will help you sell more products.
Excellent research capabilities
Automated product descriptions
No free plan
Hypotenuse doesn’t offer a free plan. Instead, it offers a free trial period where you can take the software for a run before deciding whether it’s the right choice for you or not. Other than that, here are its paid options:
Starter Plan: $29/month
This plan comes with 100 credits/month with 25k Words with one user seat. It’s an excellent option for individuals or small businesses.
Growth Plan: $59/month
This plan comes with 350 credits/month with 87.5k words and 1 user seat. It’s perfect for larger businesses or agencies.
Enterprise – pricing is custom, so don’t hesitate to contact the company for more information.
9. Kafkai — Leading AI Writing Tool for SEOs and Marketers
Kafkai is an AI content generator and writing software that produces niche-specific content on a wide variety of topics. It offers a user-friendly interface, as well as a high degree of personalization.
Kafkai offers a host of features that make it SEO-ready, including the ability to add keywords and tags to your content.
Kafkai is designed explicitly for creating niche-specific content, which can be a significant advantage for businesses or bloggers looking to target a specific audience.
Kafkai produces high-quality content, a significant advantage for businesses or bloggers looking to set themselves apart from the competition.
Kafkai offers a unique feature that allows you to seed content from other sources, which can be a significant time-saver when creating content.
Quick results with high efficiency
You can add seed content and phrases
It can be used to craft complete articles
Its long-form-content generator isn’t very high quality
Kafkai comes with a free trial to help you understand whether it’s the right choice for you or not. Additionally, you can also take a look at its paid plans:
Writer Plan: $29/month – Create 100 articles per month at $0.29/article.
Newsroom Plan: $49/month – Generate 250 articles a month at $0.20 per article.
Printing Press Plan: $129/month – Create up to 1,000 articles a month at roughly $0.13/article.
Industrial Printer Plan: $199/month – Generate 2,500 articles each month at $0.08/article.
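The per-article figures quoted in these plans follow directly from dividing each monthly price by its article quota, which a few lines of Python can confirm:

```python
# (plan name, monthly price in USD, articles per month) as quoted above
plans = [
    ("Writer", 29, 100),
    ("Newsroom", 49, 250),
    ("Printing Press", 129, 1000),
    ("Industrial Printer", 199, 2500),
]

# Cost per article for each plan, rounded to the cent
per_article = {name: round(price / count, 2) for name, price, count in plans}
# {'Writer': 0.29, 'Newsroom': 0.2, 'Printing Press': 0.13, 'Industrial Printer': 0.08}
```

As usual with tiered pricing, the marginal cost per article drops steeply as the monthly volume rises.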
10. Peppertype.ai — Powerful Copy and Content Writing Tool for Small Businesses

Peppertype.ai is an online AI content generator that's easy to use and best suited to small business owners looking for a powerful copy and content writing tool to generate various kinds of content for many purposes.
You can choose from various pre-trained templates to create your content. This can save you a lot of time since you don’t have to spend time designing your templates or starting entirely from scratch.
Peppertype offers various copywriting frameworks to help you write better content.
Peppertype is lightweight and easy to use. This makes it perfect for beginners who want to get started with AI content generation.
Peppertype’s autocorrect feature automatically corrects your grammar and spelling mistakes as you type. This ensures that your content is free of errors.
Peppertype tracks user engagement data to help you create content that resonates with your audience.
It doesn’t have a steep learning curve
It helps users to create entirely original content
The basic plan comes with access to all of their frameworks and templates
Built-in style editor
More hits than misses on content generated
Tons of typos and grammatical errors
Unfortunately, Peppertype.ai isn’t free. However, it does have a free trial to try out the software before deciding whether it’s the right choice for you. Here are its paid plans:
50,000 words included
40+ content types
Notes and Text Editor
Access to templates
Active customer support
Team Plan: $199/month
Everything included in the Personal
Collaborate & share results
Request custom content types
Enterprise – pricing is custom, so please contact the company for more information.
It is no longer a secret that humans are getting overwhelmed with the daily task of creating content. Our lives are busy, and writing blog posts, video scripts, or other types of content is not our day job. By comparison, AI writers are not only cheaper to hire but also perform tasks at a high level of excellence. This article explored 10 writing tools that use AI to create better content; choose the one that meets your requirements and budget. In my opinion, Jasper AI is one of the best tools for making high-quality content.
If you have any questions, ask in the comments section.
Note: Don't post links in your comments
Note: This article contains affiliate links which means we make a small commission if you buy any premium plan from our link.
Match ID: 192 Score: 3.57 source: www.crunchhype.com age: 79 days qualifiers: 3.57 mit
Collisions with birds are a serious problem for commercial aircraft, costing the industry billions of dollars and killing thousands of animals every year. New research shows that a robotic imitation of a peregrine falcon could be an effective way to keep them out of flight paths.
Worldwide, so-called birdstrikes are estimated to cost the civil aviation industry almost US $1.4 billion annually. Nearby habitats are often deliberately made unattractive to birds, but airports also rely on a variety of deterrents designed to scare them away, such as loud pyrotechnics or speakers that play distress calls from common species.
However, the effectiveness of these approaches tends to decrease over time, as the birds get desensitized by repeated exposure, says Charlotte Hemelrijk, a professor on the faculty of science and engineering at the University of Groningen, in the Netherlands. Live hawks or blinding lasers are also sometimes used to disperse flocks, she says, but this is controversial as it can harm the animals, and keeping and training falcons is not cheap.
“The birds don’t distinguish [RobotFalcon] from a real falcon, it seems.” —Charlotte Hemelrijk, University of Groningen
In an effort to find a more practical and lasting solution, Hemelrijk and colleagues designed a robotic peregrine falcon that can be used to chase flocks away from airports. The device is the same size and shape as a real hawk, and its fiberglass and carbon-fiber body has been painted to mimic the markings of its real-life counterpart.
Rather than flapping like a bird, the RobotFalcon relies on two small battery-powered propellers on its wings, which allows it to travel at around 30 miles per hour for up to 15 minutes at a time. A human operator controls the machine remotely from a hawk’s-eye perspective via a camera perched above the robot’s head.
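Those two specs bound how far the robot can fly on a charge; a quick unit conversion (an illustrative calculation, not a figure from the paper) gives the flight-path budget:

```python
MPH_TO_MS = 0.44704              # exact miles-per-hour to meters-per-second
speed_ms = 30 * MPH_TO_MS        # top speed, ~13.4 m/s
endurance_s = 15 * 60            # 15-minute battery, in seconds
range_km = speed_ms * endurance_s / 1000   # ~12 km of flight path per charge
```

That is ample for repeated chase runs over the fields near a runway before the battery needs swapping.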
To see how effective the RobotFalcon was at scaring away birds, the researchers tested it against a conventional quadcopter drone over three months of field testing, near the Dutch city of Workum. They also compared their results to 15 years of data collected by the Royal Netherlands Air Force that assessed the effectiveness of conventional deterrence methods such as pyrotechnics and distress calls.
In a paper published in the Journal of the Royal Society Interface, the team showed that the RobotFalcon cleared fields of birds faster and more effectively than the drone. It also kept birds away from fields longer than distress calls, the most effective of the conventional approaches.
There was no evidence of birds getting habituated to the RobotFalcon over three months of testing, says Hemelrijk, and the researchers also found that the birds exhibited behavior patterns associated with escaping from predators much more frequently with the robot than with the drone. “The way of reacting to the RobotFalcon is very similar to the real falcon,” says Hemelrijk. “The birds don’t distinguish it from a real falcon, it seems.”
Other attempts to use hawk-imitating robots to disperse birds have had less promising results, though. Morgan Drabik-Hamshare, a research wildlife biologist at the U.S. Department of Agriculture, and her colleagues published a paper in Scientific Reports last year that described how they pitted a robotic peregrine falcon with flapping wings against a quadcopter and a fixed-wing remote-controlled aircraft.
They found the robotic falcon was the least effective of the three at scaring away turkey vultures, with the quadcopter scaring the most birds off and the remote-controlled plane eliciting the quickest response. “Despite the predator silhouette, the vultures did not perceive the predator UAS [unmanned aircraft system] as a threat,” Drabik-Hamshare wrote in an email.
Zihao Wang, an associate lecturer at the University of Sydney, in Australia, who develops UAS for bird deterrence, says the RobotFalcon does seem to be effective at dispersing flocks. But he points out that its wingspan is nearly twice the diagonal length of the quadcopter it was compared with, which means it creates a much larger silhouette when viewed from the birds’ perspective. This means the birds could be reacting more to its size than its shape, and he would like to see the RobotFalcon compared with a similar size drone in the future.
The unique design also means the robot requires an experienced and specially trained operator, Wang adds, which could make it difficult to roll out widely. A potential solution could be to make the system autonomous, he says, but it’s unclear how easy this would be.
Hemelrijk says automating the RobotFalcon is probably not feasible, both due to strict regulations around the use of autonomous drones near airports as well as the sheer technical complexity. Their current operator is a falconer with significant experience in how hawks target their prey, she says, and creating an autonomous system that could recognize and target bird flocks in a similar way would be highly challenging.
But while the need for skilled operators is a limitation, Hemelrijk points out that most airports already have full-time staff dedicated to bird deterrence, who could be trained. And given the apparent lack of habituation and the ability to chase birds in a specific direction—so that they head away from runways—she thinks the robotic falcon could be a useful addition to their arsenal.
This article appears in the February 2023 print issue as “Robotic Falcon Is the Scarecrow of the Skies.”
Match ID: 194 Score: 3.57 source: spectrum.ieee.org age: 88 days qualifiers: 3.57 mit
Each contender is taking a different approach to space-based cellular service. The Apple offering uses the existing satellite bandwidth Globalstar once used for messaging offerings, but without the need for a satellite-specific handset. The AST project and another company, Lynk Global, would use a dedicated network of satellites with larger-than-normal antennas to produce a 4G, 5G, and someday 6G cellular signal compatible with any existing 4G-compatible phone (as detailed in other recent IEEE Spectrum coverage of space-based 5G offerings). Assuming regulatory approval is forthcoming, the technology would work first in equatorial regions and then across more of the planet as these providers expand their satellite constellations. T-Mobile and Starlink’s offering would work in the former PCS band in the United States. SpaceX, like AST and Lynk, would need to negotiate access to spectrum on a country-by-country basis.
Apple’s competitors are unlikely to see commercial operations before 2024.
“Regulators have not decided on the power limits from space, what concerns there are about interference, especially across national borders. There’s a whole bunch of regulatory issues that simply haven’t been thought about to date.” —Tim Farrar, telecommunications consultant
The T-Mobile–Starlink announcement is “in some ways an endorsement” of AST and Lynk’s proposition, and “in other ways a great threat,” says telecommunications consultant Tim Farrar of Tim Farrar Associates in Menlo Park, Calif. AST and Lynk have so far told investors they expect their national mobile network operator partners to charge per use or per day, but T-Mobile announced that they plan to include satellite messaging in the 1,900-megahertz range in their existing services. Apple said their Emergency SOS via Satellite service would be free the first two years for U.S. and Canadian iPhone 14 buyers, but did not say what it would cost after that. For now, the Globalstar satellites it is using cannot offer the kind of broadband bandwidth AST has promised, but Globalstar has reported to investors orders for new satellites that might offer new capabilities, including new gateways.
Even under the best conditions—a clear view of the sky—users will need 15 seconds to send a message via Apple’s service. They will also have to follow onscreen guidance to keep the device pointed at the satellites they are using. Light foliage can cause the same message to take more than a minute to send. Ashley Williams, a satellite engineer at Apple who recorded the service’s announcement, also mentioned a data-compression algorithm and a series of rescue-related suggested auto-replies intended to minimize the amount of data that users would need to send during a rescue.
Meanwhile, AST SpaceMobile says it aims to launch an experimental satellite Saturday, 10 September, to test its cellular broadband offering.
Last month’s T-Mobile-SpaceX announcement “helped the world focus attention on the huge market opportunity for SpaceMobile, the only planned space-based cellular broadband network. BlueWalker 3, which has a 693 sq ft array, is scheduled for launch within weeks!” tweeted AST SpaceMobile CEO Abel Avellan on 25 August. The size of the array matters because AST SpaceMobile has so far indicated in its applications for experimental satellite licenses that it intends to use lower radio frequencies (700–900 MHz) with less propagation loss but that require antennas much larger than conventional satellites carry.
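The trade-off behind AST's band choice can be quantified with the standard free-space path-loss formula: lower frequencies lose less signal over the same distance, but the antenna size needed for a given gain grows with wavelength. A sketch with illustrative numbers (the 500-km slant range and the 2-GHz comparison band are assumptions, not figures from the article):

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB (standard Friis-derived formula
    with distance in km and frequency in GHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

loss_850mhz = fspl_db(500, 0.85)   # AST-style low band over a 500 km slant range
loss_2ghz   = fspl_db(500, 2.0)    # a higher cellular band for comparison
advantage_db = loss_2ghz - loss_850mhz   # ~7.4 dB in favor of the low band
```

A roughly 7-dB link-budget advantage is one reason the 700–900 MHz range is attractive for reaching unmodified handsets, despite demanding the huge satellite arrays described above.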
So far government agencies have issued licenses for thousands of low-Earth-orbiting satellites, which have the biggest impact on astronomers. Even with the constellations starting to form, satellite-cellular telecommunications companies are still open to big regulatory risks. “Regulators have not decided on the power limits from space, what concerns there are about interference, especially across national borders. There’s a whole bunch of regulatory issues that simply haven’t been thought about to date,” Farrar says.
The orbiting Tiangong-2 space lab has transmitted quantum-encryption keys to four ground stations, researchers reported on 18 August. The same network of ground stations is also able to receive quantum keys from the orbiting Micius satellite, which is in a much higher orbit, using the space station as a repeater. The report comes just after the late-July launch of Jinan 1, China’s second quantum-encryption satellite, by the University of Science and Technology of China. USTC told the Xinhua News Agency that the new satellite is one-sixth the mass of its 2016 predecessor.
“The launch is significant,” says physicist Paul Kwiat of the University of Illinois in Urbana-Champaign, because it means the team is starting to build, not just plan, a quantum network. USTC researchers did not reply to IEEE Spectrum’s request for comment.
In quantum-key distribution (QKD), the quantum states of a single photon, such as polarization, encode and distribute random information that can be used to encrypt a classical message. Because it is impossible to copy the quantum state without changing it, senders and recipients can verify that their transmission got through without tampering or reading by third parties. In some scenarios it involves sending just one well-described photon at a time, but single photons are difficult to produce, and in this case, researchers used an attenuated laser to send small pulses that might also come out a couple of photons at a time, or not at all.
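The sifting-and-verification logic described above can be illustrated with a toy simulation of BB84, the standard prepare-and-measure QKD protocol. This is a minimal sketch, not a physical model: the quantum measurement is reduced to classical random sampling, and the function name and parameters are my own invention. It shows why an intercept-resend eavesdropper necessarily introduces errors that the legitimate parties can detect by comparing a sample of their sifted bits.

```python
import random

def bb84_sift(n_pulses=2000, eavesdrop=False, seed=1):
    """Toy BB84 simulation. Returns the error rate of the sifted key.

    Each pulse carries one bit encoded in one of two polarization bases.
    The receiver measures in a randomly chosen basis; only pulses where
    sender and receiver happen to pick the same basis are kept (sifting).
    """
    rng = random.Random(seed)
    key_a, key_b = [], []
    for _ in range(n_pulses):
        s = rng.randint(0, 1)              # sender's raw bit
        a = rng.randint(0, 1)              # sender's basis (0=rectilinear, 1=diagonal)
        if eavesdrop:
            # Intercept-resend attack: Eve measures in a random basis,
            # getting the right bit only when her basis matches, then
            # resends a photon prepared in *her* basis.
            e_basis = rng.randint(0, 1)
            e_bit = s if e_basis == a else rng.randint(0, 1)
            carried_bit, carried_basis = e_bit, e_basis
        else:
            carried_bit, carried_basis = s, a
        b = rng.randint(0, 1)              # receiver's basis
        # Measuring in the wrong basis yields a random result.
        r = carried_bit if b == carried_basis else rng.randint(0, 1)
        if a == b:                         # sifting: keep matching-basis pulses
            key_a.append(s)
            key_b.append(r)
    errors = sum(x != y for x, y in zip(key_a, key_b))
    return errors / len(key_a)
```

Without an eavesdropper the sifted keys agree perfectly; with an intercept-resend attack, roughly a quarter of the sifted bits disagree, because Eve guesses the wrong basis half the time and each wrong guess randomizes the receiver's result half the time. That measurable error rate is the practical face of the no-cloning guarantee.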
The USTC research team, led by Jian-Wei Pan, had already established quantum-key distribution from Micius to a single ground station in 2017, not long after the 2016 launch of the satellite. The work that Pan and colleagues reported this month, but which took place in 2018 and 2019, is a necessary step for building a constellation of quantum-encryption-compatible satellites across a range of orbits, to ensure more secure long-distance communications.
Several other research groups have transmitted quantum keys, and others are now building microsatellites for the same purpose. However, the U.S. National Security Agency’s site about QKD lists several technical limitations, such as requiring an initial verification of the counterparty’s identity, the need for special equipment, the cost, and the risk of hardware-based security vulnerabilities. In the absence of fixes, the NSA does not anticipate approving QKD for national security communications.
“A quantum network with entangled nodes is the thing that would be really interesting, enabling distributed quantum computing and sensing, but that’s a hard thing to make. Being able to do QKD is a necessary but not sufficient first step,” Kwiat says. The USTC experiments are a chance to establish many technical abilities, such as the precise control of the pulse duration and direction of the lasers involved, or the ability to accurately transfer and measure the quantum signals to the standard necessary for a more complex quantum network.
That is a step ahead of the many other QKD efforts made so far on laboratory benchtops, over ground-to-ground cables, or aboard balloons or aircraft. “You have to do things very differently if you’re not allowed to fiddle with something once it’s launched into space,” Kwiat says.
The marketing industry is turning to artificial intelligence (AI) as a way to save time and execute smarter, more personalized campaigns: 61 percent of marketers say AI software is the most important aspect of their data strategy.
If you’re late to the AI party, don’t worry. It’s easier than you think to start leveraging artificial intelligence tools in your marketing strategy. Here are 11 AI marketing tools every marketer should start using today.
Jasper is an AI-powered content writing and generation tool that identifies the best words and sentences for your writing style and medium quickly, efficiently, and accessibly.
It's trusted by more than 50,000 marketers for creating engaging marketing campaigns, ad copy, blog posts, and articles in minutes, work that would traditionally take hours or days. Special Features:
SEO-Optimized Blog Posts – Content Jasper generates is optimized for search engines, so it can rank well on Google and other search engines. This is a huge plus for online businesses that want to generate traffic to their website through content marketing.
99.9% Original Content – Jasper guarantees that all content it generates is original, so businesses can focus on their online reputation rather than worrying about penalties from Google for duplicate content.
Long-Form Article Writing – Jasper.ai is also useful for long-form writing, allowing users to create articles of up to 10,000 words without any difficulty. This is ideal for businesses that want to produce in-depth content that will capture their audience’s attention.
Wait! I've got a pretty sweet deal for you. Sign up through the link below, and you'll get 10,000 free credits.
Personalize is an AI-powered technology that helps you identify and produce highly targeted sales and marketing campaigns by tracking the products and services your contacts are most interested in at any given time. The platform uses an algorithm to identify each contact’s top three interests, which are updated in real-time based on recent site activity.
Identifies top three interests based on metrics like time on page, recency, and frequency of each contact
Works with every ESP and CRM
Easy to get up and running in days
Enterprise-grade technology at a low cost for SMBs
3. Seventh Sense
Seventh Sense provides behavioral analytics that help you win attention in your customers’ overcrowded email inboxes. Choosing the best day and time to send an email is always a gamble. And while some days of the week generally get higher open rates than others, you’ll never be able to nail down a time that’s best for every customer. Seventh Sense eases the stress of figuring out the perfect send time and day for your email campaigns. The AI-based platform determines the best timing and email frequency for each contact based on when they’re opening emails. The tool is primarily geared toward HubSpot and Marketo customers.
AI determines the best send-time and email frequency for each contact
Connects with HubSpot and Marketo
Phrasee uses artificial intelligence to help you write more effective subject lines. With its AI-based Natural Language Generation system, Phrasee uses data-driven insights to generate millions of natural-sounding copy variants that match your brand voice. The model is end-to-end, meaning when you feed the results back to Phrasee, the prediction model rebuilds so it can continuously learn from your audience.
Instantly generates millions of human-sounding, brand-compliant copy variants
Creates tailored language models for every customer
Learns what your audience responds to and rebuilds the prediction model every time
5. HubSpot SEO
HubSpot Search Engine Optimization (SEO) is an integral tool for the Human Content team. It uses machine learning to determine how search engines understand and categorize your content. HubSpot SEO helps you improve your search engine rankings and outrank your competitors. Search engines reward websites that organize their content around core subjects, or topic clusters. HubSpot SEO helps you discover and rank for the topics that matter to your business and customers.
Helps you discover and rank topics that people are searching for
Provides suggestions for your topic clusters and related subjects
Integrates with all other HubSpot content tools to help you create a well-rounded content strategy
When you’re limited to testing two variables against each other at a time, it can take months to get the results you’re looking for. Evolv AI lets you test all your ideas at once. It uses advanced algorithms to identify the top-performing concepts, combine them with each other, and repeat the process to achieve the best site experience.
Figures out which content provides the best performance
Lets you test multiple ideas in a single experiment instead of having to perform many individual tests over a long period
Lets you try all your ideas across multiple pages for full-funnel optimization
Offers visual and code editors
Acrolinx is a content alignment platform that helps brands scale and improve the quality of their content. It’s geared toward enterprises – its major customers include big brands like Google, Adobe, and Amazon – to help them scale their writing efforts. Instead of spending time chasing down and fixing typos in multiple places throughout an article or blog post, you can use Acrolinx to do it all in one place. You start by setting your preferences for style, grammar, tone of voice, and company-specific word usage. Then, Acrolinx checks and scores your existing content to find what’s working and suggests areas for improvement. The platform provides real-time guidance and suggestions to make writing better and strengthen weak pages.
Reviews and scores existing content to ensure it meets your brand guidelines
Finds opportunities to improve your content and use automation to shorten your editorial process.
Integrates with more than 50 tools and platforms, including Google Docs, Microsoft Word, WordPress, and most web browsers.
MarketMuse uses an algorithm to help marketers build content strategies. The tool shows you where to target keywords to rank in specific topic categories, and recommends keywords you should go after if you want to own particular topics. It also identifies gaps and opportunities for new content and prioritizes them by their probable impact on your rankings. The algorithm compares your content with thousands of articles related to the same topic to uncover what’s missing from your site.
The built-in editor shows how in-depth your topic is covered and what needs improvement
Finds gaps and opportunities for new content creation, prioritized by their probable impact and your chance of ranking
Copilot is a suite of tools that helps eCommerce businesses maintain real-time communication with customers around the clock at every stage of the funnel. Promote products, recover abandoned shopping carts, and send updates or reminders directly through Messenger.
Integrate Facebook Messenger directly with your website, including chat history and recent interactions for a fluid customer service experience
Run drip messenger campaigns to keep customers engaged with your brand
Send abandoned-cart, out-of-stock, restock, preorder, order status, and shipment notifications to contacts
Send branded images, promotional content, or coupon codes to those who opt in
Collect post-purchase feedback, reviews, and customer insight
Demonstrate social proof on your website with a widget, or push automatic Facebook posts sharing recent purchases
Display a promotional banner on your website to capture contacts instantly
Yotpo’s deep learning technology evaluates your customers’ product reviews to help you make better business decisions. It identifies key topics that customers mention related to your products—and their feelings toward them. The AI engine extracts relevant reviews from past buyers and presents them in smart displays to convert new shoppers. Yotpo also saves you time moderating reviews. The AI-powered moderation tool automatically assigns a score to each review and flags reviews with negative sentiment so you can focus on quality control instead of manually reviewing every post.
Makes it easy for shoppers to filter reviews and find the exact information they’re looking for
Analyzes customer feedback and sentiments to help you improve your products
Integrates with most leading eCommerce platforms, including BigCommerce, Magento, and Shopify.
11. Albert AI
Albert is self-learning software that automates the creation of marketing campaigns for your brand. It analyzes vast amounts of data to run optimized campaigns autonomously: you feed in your own creative content and target markets, and Albert uses data from its database to determine the key characteristics of a serious buyer. It then identifies potential customers that match those traits and runs trial campaigns on a small group of customers, refining the results itself, before launching at a larger scale.
Albert plugs into your existing marketing technology stack, so you still have access to your accounts, ads, search, social media, and more. Albert maps tracking and attribution to your source of truth so you can determine which channels are driving your business.
Breaks down large amounts of data to help you customize campaigns
Plugs into your marketing technology stack and can be used across diverse media outlets, including email, content, paid media, and mobile
There are many tools and companies out there that offer AI tools, but this is a small list of resources that we have found to be helpful. If you have any other suggestions, feel free to share them in the comments below this article. As marketing evolves at such a rapid pace, new marketing strategies will be invented that we haven't even dreamed of yet. But for now, this list should give you a good starting point on your way to implementing AI into your marketing mix.
Note: This article contains affiliate links, meaning we make a small commission if you buy any premium plan from our link.
If you want to pay online, you need to register an account and provide credit card information. If you don't have a credit card, you can pay by bank transfer. With the rise of cryptocurrencies, these methods may become outdated.
Imagine a world in which you can do transactions and many other things without having to give your personal information. A world in which you don’t need to rely on banks or governments anymore. Sounds amazing, right? That’s exactly what blockchain technology allows us to do.
Think of it like your computer’s hard drive: blockchain is a technology that lets you store data in digital blocks, which are connected together like links in a chain.
Blockchain technology was originally described in 1991 by two researchers, Stuart Haber and W. Scott Stornetta. They first proposed the system to ensure that document timestamps could not be tampered with.
A few years later, in 1998, software developer Nick Szabo proposed using a similar kind of technology to secure a digital payments system he called “Bit Gold.” However, the idea was not realized until 2008, when Satoshi Nakamoto described the first blockchain as the foundation of Bitcoin.
So, What is Blockchain?
A blockchain is a distributed database shared between the nodes of a computer network. It saves information in digital format. Many people first heard of blockchain technology when they started to look up information about bitcoin.
Blockchain is used in cryptocurrency systems to ensure secure, decentralized records of transactions.
Blockchain allowed people to guarantee the fidelity and security of a record of data without the need for a third party to ensure accuracy.
To understand how a blockchain works, consider these basic steps:
Blockchain collects information in “blocks”.
A block has a fixed storage capacity; once it is filled, it is closed and linked to the previously filled block.
Blocks form chains, which are called “Blockchains.”
New information is added to the most recent block until its capacity is full, at which point a new block is created and the process repeats.
Each block in the chain has an exact timestamp and can't be changed.
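The steps above can be sketched in a few lines of code. This is a deliberately minimal illustration, not how any production blockchain is implemented; the `make_block` helper and its fields are hypothetical, but the core idea is real: each block's hash commits to its contents, its timestamp, and the hash of the block before it.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose hash commits to its data, its
    timestamp, and the hash of the previous block."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "prev_hash": prev_hash,
    }
    # Serialize deterministically, then fingerprint with SHA-256.
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Build a three-block chain, starting from a "genesis" block
# whose prev_hash is a placeholder of all zeros.
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("tx: alice -> bob 5", chain[-1]["hash"]))
chain.append(make_block("tx: bob -> carol 2", chain[-1]["hash"]))
```

Because every block embeds the previous block's hash, the blocks form exactly the chain described above: changing anything in an earlier block changes its hash, which breaks the link stored in every block that follows.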
Let’s get to know more about the blockchain.
How does blockchain work?
Blockchain records digital information and distributes it across the network without changing it. The information is distributed among many users and stored in an immutable, permanent ledger that can't be changed or destroyed. That's why blockchain is also called "Distributed Ledger Technology" or DLT.
Here’s how it works:
Someone requests a transaction.
The transaction is broadcast throughout the network.
A network of computers validates the transaction.
Once confirmed, the transaction is added to a block.
The blocks are linked together to create a history.
And that’s the beauty of it! The process may seem complicated, but it’s done in minutes with modern technology. And because technology is advancing rapidly, I expect things to move even more quickly than ever.
A new transaction is added to the system and relayed to a network of computers located around the world, which then solve cryptographic puzzles to confirm the validity of the transaction.
Once a transaction is confirmed, it is placed in a block after the confirmation. All of the blocks are chained together to create a permanent history of every transaction.
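This is what makes the history "permanent": any node can re-verify the whole chain and spot tampering. The sketch below is illustrative only (the helper names are my own), assuming the simplest possible block layout, but it shows the verification rule every real blockchain node applies in some form.

```python
import hashlib
import json

def block_hash(block):
    # Recompute the fingerprint from the block's contents.
    payload = {"data": block["data"], "prev_hash": block["prev_hash"]}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    block = {"data": data,
             "prev_hash": chain[-1]["hash"] if chain else "0" * 64}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain):
    """A chain is valid if every block's stored hash matches its
    contents and points at the previous block's hash."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # link to history is broken
    return True

ledger = []
append_block(ledger, "alice pays bob 5")
append_block(ledger, "bob pays carol 2")
assert is_valid(ledger)

ledger[0]["data"] = "alice pays bob 500"      # tamper with history
assert not is_valid(ledger)
```

Note that even if an attacker recomputed the tampered block's hash, the next block's `prev_hash` would no longer match, so the forgery is still detected; rewriting history means rewriting every subsequent block on a majority of nodes at once.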
How are Blockchains used?
Even though blockchain is integral to cryptocurrency, it has other applications. For example, blockchain can be used for storing reliable data about transactions. Many people confuse blockchain with cryptocurrencies like bitcoin and ethereum.
Blockchain is already being adopted by some big-name companies, such as Walmart, AIG, Siemens, Pfizer, and Unilever. For example, IBM's Food Trust uses blockchain to track food's journey before it reaches its final destination.
Although some of you may consider this practice excessive, food suppliers and manufacturers adhere to the policy of tracing their products because bacteria such as E. coli and Salmonella have been found in packaged foods. In addition, there have been isolated cases where dangerous allergens such as peanuts have accidentally been introduced into certain products.
Tracing and identifying the sources of an outbreak is a challenging task that can take months or years. Thanks to the Blockchain, however, companies now know exactly where their food has been—so they can trace its location and prevent future outbreaks.
Blockchain technology allows systems to react much faster in the event of a hazard. It also has many other uses in the modern world.
What is Blockchain Decentralization?
Blockchain technology is secure even though it is public: anyone with an internet connection can access it.
Have you ever been in a situation where you had all your data stored at one place and that one secure place got compromised? Wouldn't it be great if there was a way to prevent your data from leaking out even when the security of your storage systems is compromised?
Blockchain technology provides a way of avoiding this situation by using multiple computers at different locations to store information about transactions. If one computer experiences problems with a transaction, it will not affect the other nodes.
Instead, the other nodes use the correct information to cross-reference and correct the faulty node. This is called “decentralization”: the information is stored in multiple places.
Blockchain guarantees your data's authenticity—not just its accuracy, but also its irreversibility. It can also be used to store data that are difficult to register, like legal contracts, state identifications, or a company's product inventory.
Pros and Cons of Blockchain
Blockchain has many advantages and disadvantages.

Pros:
Accuracy is increased because there is no human involvement in the verification process.
One of the great things about decentralization is that it makes information harder to tamper with.
Safe, private, and easy transactions
Provides a banking alternative and safe storage of personal information
Cons:

Data storage has limits.
The regulations are always changing, as they differ from place to place.
It has a risk of being used for illicit activities
Frequently Asked Questions About Blockchain
I’ll answer the most frequently asked questions about blockchain in this section.
Is Blockchain a cryptocurrency?
Blockchain is not a cryptocurrency but a technology that makes cryptocurrencies possible. It's a digital ledger that records every transaction seamlessly.
Is it possible for Blockchain to be hacked?
Yes, a blockchain can theoretically be hacked, but doing so is extremely difficult. A network of users constantly reviews it, which makes tampering with the blockchain impractical.
What is the most prominent blockchain company?
Coinbase Global is currently the biggest blockchain company in the world. The company runs a commendable infrastructure, services, and technology for the digital currency economy.
Who owns Blockchain?
Blockchain is a decentralized technology: a chain of distributed ledgers connected by nodes, where each node can be any electronic device. Thus, no one owns the blockchain.
What is the difference between Bitcoin and Blockchain technology?
Bitcoin is a cryptocurrency powered by blockchain technology, while blockchain is the distributed ledger that records cryptocurrency transactions.
What is the difference between Blockchain and a Database?
Generally, a database is a collection of data that can be stored and organized using a database management system. People who have access to the database can view or edit the information stored there, and databases are typically implemented with a client-server architecture. A blockchain, by contrast, is a growing list of records, called blocks, stored in a distributed system. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data. By design, data cannot be modified once recorded, which enables decentralized control and eliminates the risk of data modification by other parties.
Blockchain has a wide spectrum of applications, and over the next 5-10 years we will likely see it integrated into all sorts of industries. From finance to healthcare, blockchain could revolutionize the way we store and share data. Although there is some hesitation to adopt blockchain systems today, that is likely to fade as people become more comfortable with the technology and understand how it can work for them; owners, CEOs, and entrepreneurs alike will be quick to leverage blockchain technology for their own gain. I hope you liked this article. If you have any questions, let me know in the comments section.
ProWritingAid vs. Grammarly: When it comes to English grammar checkers, there are two big players that everyone knows of: Grammarly and ProWritingAid. If you're wondering which one to choose, this detailed article will help you pick the best one for you. Let's start.
What is Grammarly?
Grammarly is a tool that checks for grammatical errors, spelling, and punctuation. It gives you comprehensive feedback on your writing. You can use this tool to proofread and edit articles, blog posts, emails, and more.
Grammarly also detects all types of mistakes, including sentence structure issues and misused words, and gives you real-time suggestions on style, punctuation, spelling, and grammar. The free version covers the basics, like identifying grammar and spelling mistakes, whereas the Premium version offers a lot more functionality: it detects plagiarism in your content, suggests better word choices, and adds fluency to your writing.
Features of Grammarly
Spelling and Word Suggestions: Grammarly detects basic to advanced grammatical errors, explains why something is an error, and suggests how you can fix it.
Create a Personal Dictionary: The Grammarly app allows you to add words to your personal dictionary so that the same mistake isn't highlighted every time you run Grammarly.
Different English Styles: Checks spelling for American, British, Canadian, and Australian English.
Plagiarism: This feature helps you detect if a text has been plagiarized by comparing it with over eight billion web pages.
Wordiness: This tool will help you check your writing for long and hard-to-read sentences. It also shows you how to shorten sentences so that they are more concise.
Passive Voice: The program also notifies users when passive voice is used too frequently in a document.
Punctuations: This feature flags all incorrect and missing punctuation.
Repetition: The tool provides recommendations for replacing the repeated word.
Prepositions: Grammarly identifies misplaced and confused prepositions.
Plugins: It offers Microsoft Word, Microsoft Outlook, and Google Chrome plugins.
What is ProWritingAid?
ProWritingAid is a style and grammar checker for content creators and writers. It helps to optimize word choice, punctuation errors, and common grammar mistakes, providing detailed reports to help you improve your writing.
ProWritingAid can be used as an add-on to WordPress, Gmail, and Google Docs. The software also offers helpful articles, videos, quizzes, and explanations to help improve your writing.
Features of ProWriting Aid
Here are some key features of ProWriting Aid:
Grammar checker and spell checker: This tool helps you to find all grammatical and spelling errors.
Find repeated words: The tool also allows you to search for repeated words and phrases in your content.
Context-sensitive style suggestions: The tool detects the style of writing you intend and suggests changes where the text doesn't flow well.
Check the readability of your content: ProWritingAid helps you identify the strengths and weaknesses of your article by pointing out difficult sentences and paragraphs.
Sentence Length: It also indicates the length of your sentences.
Check Grammatical error: It also checks your work for any grammatical errors or typos, as well.
Overused words: As a writer, you might find yourself using the same word repeatedly. ProWritingAid's overused words checker helps you avoid this lazy writing mistake.
Consistency: Check your work for inconsistent usage of open and closed quotation marks.
Echoes: Flags words and phrases that repeat in close proximity in your writing.
Difference between Grammarly and Pro-Writing Aid
Grammarly and ProWritingAid are well-known grammar-checking software. However, if you're like most people who can't decide which to use, here are some different points that may be helpful in your decision.
Grammarly vs ProWritingAid
Grammarly is a writing enhancement tool that offers suggestions for grammar, vocabulary, and syntax whereas ProWritingAid offers world-class grammar and style checking, as well as advanced reports to help you strengthen your writing.
Grammarly provides Android and iOS apps, whereas ProWritingAid doesn't have a mobile app.
Grammarly offers focused suggestions about the mistakes you've made, whereas ProWritingAid shows more suggestions than Grammarly, but not all of its recommendations are accurate.
Grammarly has a more friendly UI/UX, whereas ProWritingAid's interface is not as friendly.
Grammarly is an accurate grammar checker for non-fiction writing whereas ProWritingAid is an accurate grammar checker for fiction writers.
Grammarly finds grammar and punctuation mistakes, whereas ProWritingAid identifies run-on sentences and fragments.
Grammarly provides 24/7 support via tickets and email. ProWritingAid’s support team is available via email, though the response time is approximately 48 hours.
Grammarly offers many features in its free plan, whereas ProWritingAid offers some basic features in the free plan.
Grammarly does not offer much feedback on big picture writing; ProWritingAid offers complete feedback on big picture writing.
Grammarly is a better option for accuracy, whereas ProWritingAid is better for handling fragmented sentences and dialogue. It can be quite useful for fiction writers.
ProWritingAid VS Grammarly: Pricing Difference
ProWritingAid comes with three pricing structures. The full-year cost of ProWritingAid is $79, while its lifetime plans cost $339. You also can opt for a monthly plan of $20.
Grammarly offers a Premium subscription for $30/month on a monthly plan, $20/month billed quarterly, and $12/month billed annually.
The Business plan costs $12.50 per month for each member of your company.
ProWritingAid vs Grammarly – Pros and Cons
Grammarly Pros:

It allows you to fix common mistakes like grammar and spelling.
Offers most features in the free plan
Allows you to edit a document without affecting the formatting.
Active and passive voice checker
Plagiarism checker (paid version)
Proofread your writing and correct all punctuation, grammar, and spelling errors.
Allows you to make changes to a document without altering its formatting.
Helps users improve vocabulary
Browser extensions and MS word add-ons
Available on all major devices and platforms
Grammarly will also offer suggestions to improve your style.
Enhance the readability of your sentence
Free mobile apps
Offers free version
Grammarly Cons:

Supports only English
Customer support only via email
Limited to 150,000 words
Subscription plans can be a bit pricey
Plagiarism checker is only available in a premium plan
Doesn’t offer a free trial
No refund policy
The free version is ideal for basic spelling and grammatical mistakes, but it does not correct advanced writing issues.
Some features are not available for Mac.
ProWritingAid Pros:

It offers more than 20 different reports to help you improve your writing.
Less expensive than other grammar checkers.
This tool helps you strengthen your writing style as it offers big-picture feedback.
ProWritingAid has a life plan with no further payments required.
Compatible with Google Docs!
Prowritingaid works on both Windows and Mac.
They offer more integrations than most tools.
ProWritingAid Cons:

Editing can be a little more time-consuming when you add larger passages of text.
ProWritingAid currently offers no mobile app for Android or iOS devices.
Plagiarism checker is only available in premium plans.
Not all recommendations are accurate
Summarizing ProWritingAid vs. Grammarly: My Recommendation
As both writing assistants are great in their own way, you need to choose the one that suits you best.
For example, go for Grammarly if you are a non-fiction writer.
Go for ProWritingAid if you are a fiction writer.
ProWritingAid is better at catching errors found in long-form content. However, Grammarly is more suited to short blog posts and other similar tasks.
ProWritingAid helps you clean up your writing by checking for style, structure, and content while Grammarly focuses on grammar and punctuation.
Grammarly has a more friendly UI/UX, whereas ProWritingAid offers complete feedback on big-picture writing.
Both ProWritingAid and Grammarly are awesome writing tools, without a doubt. But in my experience, Grammarly is the winner here because it helps you review and edit your content efficiently: it highlights all the mistakes in your writing within seconds of copying and pasting the content into its editor, or you can use the software's native integrations in other text editors.
Not only does it identify tiny grammatical and spelling errors, it tells you when you overlook punctuation where it is needed. And beyond its plagiarism-checking capabilities, Grammarly helps you proofread your content. Even better, the software offers a free plan that gives you access to some of its features.
Are you searching for an eCommerce platform to help you build an online store and sell products?
In this Sellfy review, we'll talk about how this eCommerce platform can let you sell digital products while keeping full control of your marketing.
And the best part? Starting your business can be done in just five minutes.
Let us then talk about the Sellfy platform and all the benefits it can bring to your business.
What is Sellfy?
Sellfy is an eCommerce solution that allows digital content creators, including writers, illustrators, designers, musicians, and filmmakers, to sell their products online. Sellfy provides a customizable storefront where users can display their digital products and embed "Buy Now" buttons on their website or blog. Sellfy product pages enable users to showcase their products from different angles with multiple images and previews from Soundcloud, Vimeo, and YouTube. Files of up to 2GB can be uploaded to Sellfy, and the company offers unlimited bandwidth and secure file storage. Users can also embed their entire store or individual project widgets in their site, with the ability to preview how widgets will appear before they are displayed.
Sellfy is a powerful e-commerce platform that helps you personalize your online storefront. You can add your logo, change colors, revise navigation, and edit the layout of your store. Sellfy also allows you to create a full shopping cart so customers can purchase multiple items. And Sellfy gives you the ability to set your language or let customers see a translated version of your store based on their location.
Sellfy gives you the option to host your store directly on its platform, add a custom domain to your store, and use it as an embedded storefront on your website. Sellfy also optimizes its store offerings for mobile devices, allowing for a seamless checkout experience.
Sellfy allows creators to host all their products and sell all of their digital products on one platform. Sellfy also does not place storage limits on your store but recommends that files be no larger than 5GB. Creators can sell both standard and subscription-based products in any file format that is supported by the online marketplace. Customers can access their products instantly after making a purchase – there is no waiting period.
You can organize your store by creating your product categories, sorting by any characteristic you choose. Your title, description, and image will be included on each product page. In this way, customers can immediately evaluate all of your products. You can offer different pricing options for all of your products, including "pay what you want," in which the price is entirely up to the customer. This option allows you to give customers control over the cost of individual items (without a minimum price) or to set pricing minimums—a good option if you're in a competitive market or when you have higher-end products. You can also offer set prices per product as well as free products to help build your store's popularity.
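The "pay what you want" option described above boils down to a simple rule: accept any non-negative offer unless the store has set a minimum. Here is a minimal sketch of that logic in Python; the function name and details are my own illustration, not Sellfy's actual code.

```python
def validate_pwyw_price(offered: float, minimum: float = 0.0) -> float:
    """Return the accepted price, rejecting offers below the store's minimum."""
    if offered < 0:
        raise ValueError("price cannot be negative")
    if offered < minimum:
        raise ValueError(f"offer {offered:.2f} is below the minimum {minimum:.2f}")
    return round(offered, 2)

# A store with no minimum accepts any non-negative offer, including 0 (a free copy).
print(validate_pwyw_price(0.0))        # 0.0
# A store with a $5 minimum accepts $7.50 but would reject $3.00.
print(validate_pwyw_price(7.50, 5.0))  # 7.5
```

Setting `minimum=0.0` models the "no minimum price" variant, while a positive minimum models the pricing floor mentioned above.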
Sellfy is ideal for selling digital content, such as ebooks. But it does not allow you to sell copyrighted material (that you don't have the rights to distribute).
Sellfy offers several ways to share your store, enabling you to promote your business on different platforms. Sellfy lets you integrate it with your existing website using "buy now" buttons, embed your entire storefront, or embed certain products so you can reach more people. Sellfy also enables you to connect with your Facebook page and YouTube channel, maximizing your visibility.
Payments and security
Sellfy is a simple online platform that allows customers to buy your products directly through your store. Sellfy has two payment processing options: PayPal and Stripe. You will receive instant payments with both of these processors, and your customer data is protected by Sellfy's secure (PCI-compliant) payment security measures. In addition to payment security, Sellfy provides anti-fraud tools to help protect your products including PDF stamping, unique download links, and limited download attempts.
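The unique download links and limited download attempts mentioned above can be pictured as a signed, expiring token per purchase. The scheme below is purely illustrative (the secret key, token format, and limits are assumptions, not Sellfy's implementation):

```python
import hashlib
import hmac
import time

SECRET = b"store-secret-key"  # hypothetical; a real store keeps this private

def sign_download(product_id: str, expires_at: int) -> str:
    """Produce a tamper-evident token binding a product to an expiry time."""
    payload = f"{product_id}:{expires_at}".encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:16]

def download_allowed(product_id: str, expires_at: int, token: str,
                     attempts_used: int, max_attempts: int = 3,
                     now=None) -> bool:
    """Reject forged tokens, expired links, and exhausted download attempts."""
    now = int(time.time()) if now is None else now
    expected = sign_download(product_id, expires_at)
    return (hmac.compare_digest(token, expected)
            and now < expires_at
            and attempts_used < max_attempts)

token = sign_download("ebook-42", expires_at=2_000_000_000)
print(download_allowed("ebook-42", 2_000_000_000, token, attempts_used=0))  # True
print(download_allowed("ebook-42", 2_000_000_000, "forged-token-0000", 0))  # False
```

The timing-safe comparison (`hmac.compare_digest`) matters here: a plain `==` on secrets can leak information through timing differences.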
Marketing and analytics tools
The Sellfy platform includes marketing and analytics tools to help you manage your online store. You can send email product updates and collect newsletter subscribers through the platform. With Sellfy, you can also offer discount codes and product upsells, as well as create and track Facebook and Twitter ads for your store. The software's analytics dashboard will help you track your best-performing products, generated revenue, traffic channels, top locations, and overall store performance.
To expand functionality and make your e-commerce store run more efficiently, Sellfy offers several integrations. Google Analytics and Webhooks, as well as integrations with Patreon and Facebook Live Chat, are just a few of the options available. Sellfy allows you to connect to Zapier, which gives you access to hundreds of third-party apps, including tools like Mailchimp, Trello, Salesforce, and more.
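Webhook integrations like the ones above deliver store events to your server as JSON payloads. The payload shape below is a hypothetical example for illustration, not Sellfy's documented schema:

```python
import json

# Hypothetical order-notification payload; real webhook schemas will differ.
raw = json.dumps({
    "event": "order.completed",
    "order": {"id": "ord_123", "total_cents": 1999, "currency": "USD",
              "items": [{"name": "Ebook", "qty": 1}]},
})

def summarize_order(body: str) -> str:
    """Parse a webhook body and build a one-line summary of the order."""
    data = json.loads(body)
    order = data["order"]
    return (f'{data["event"]}: {order["id"]} '
            f'for {order["total_cents"] / 100:.2f} {order["currency"]}')

print(summarize_order(raw))  # order.completed: ord_123 for 19.99 USD
```

In practice a handler like this would sit behind an HTTP endpoint registered as the webhook target; the parsing step is the same either way.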
Sellfy has its benefits and downsides, but fortunately, the pros outweigh the cons.
It takes only a few minutes to set up an online store and begin selling products.
You can sell your products on a single storefront, even if you are selling multiple product types.
Sellfy supports selling a variety of product types, including physical items, digital goods, subscriptions, and print-on-demand products.
Sellfy offers a free plan for those who want to test out the features before committing to a paid plan.
You get paid the same day you make a sale. Sellfy doesn't delay your funds as some other payment processors do.
Print-on-demand services are available directly from your store, so you can sell merchandise to fans without setting up an integration.
You can conduct all store-related activities via the mobile app and all online stores have mobile responsive designs.
Everything you need to make your website is included: custom domain hosting, security for your files, and the ability to customize your store.
The file security features can help you protect your digital property by allowing you to add PDF stamps, set download limits, and use SSL encryption.
Sellfy provides unlimited support.
Sellfy provides simple and intuitive tax and VAT configuration settings.
Marketing strategies include coupons, email marketing, upselling, tracking pixels, and cart abandonment.
Although the free plan is helpful, it limits you to only 10 products.
Payment plans often require an upgrade if you exceed a certain sales amount per year.
The storefront designs are clean, but they're not unique templates for creating a completely different brand image.
Sellfy's branding is removed from your hosted product when you upgrade to the $49 per month Business plan.
The free plan does not allow for selling digital or subscription products.
In this article, we have taken a look at some of the biggest benefits of using Sellfy for eCommerce. Once you compare these benefits to what you get with other platforms such as Shopify, you should find it worth your time to consider Sellfy for your business. This article should answer most of your questions, but if you still have some, let me know in the comment section below; I will be happy to answer them.
Note: This article contains affiliate links, which means we make a small commission if you buy a Sellfy premium plan from our link.
Content creation is one of the biggest struggles for many marketers and business owners. It often requires both time and financial resources, especially if you plan to hire a writer. Today, we have a fantastic opportunity to use other people's products by purchasing Private Label Rights.
To find a good PLR website, first, determine the type of products you want to acquire. One way to do this is to choose among membership sites or PLR product stores. Following are 10 great sites that offer products in both categories.
What are PLR websites?
Private Label Rights (PLR) products are digital products that can be in the form of an ebook, software, online course videos, value-packed articles, etc. You can use these products with some adjustments to sell as your own under your own brand and keep all the money and profit yourself without wasting your time on product creation. The truth is that locating the best website for PLR materials can be a time-consuming and expensive exercise. That’s why we have researched, analyzed, and ranked the best 10 websites:
PLR.me is one of the best places to get PLR content in 2021-2022. It offers a content marketing system that comes with courses, brandable tools, and more. It is the most trusted PLR website among other PLR sites, featuring digital coaching PLR tools for health and wellness professionals. The PLR.me platform, which was built on advanced caching technology, has been well received by big brands such as the Toronto Sun and Entrepreneur. The best thing about this website is its content marketing automation tools.
Pay-as-you-go Plan – $22
100 Monthly Plan – $99/month
400 Annual Plan – $379/year
800 Annual Plan – $579/year
2500 Annual Plan – $990/year
Access 15,940+ ready-to-use PLR coaching resources.
Content marketing and sliding tools are provided by the site.
You can create courses, products, webinars, emails, and nearly anything else you can dream of.
You can cancel your subscription anytime.
Compared to other top PLR sites, this one is a bit more expensive.
InDigitalWorks is a leading private label rights membership website established in 2008. To date, more than 100,000 members from around the globe have joined the platform. The site offers thousands of ready-to-be-sold digital products for online businesses in every niche imaginable. InDigitalWorks features hundreds of electronic books, software applications, templates, graphics, and videos that you can sell right away.
3 Months Plan – $39
1 Year Plan – $69
Lifetime Plan – $79
IndigitalWorks promotes new authors by providing them with 200 free products for download.
Largest and most reputable private label rights membership site.
20000+ digital products
137 training videos provided by experts to help beginners set up and grow their online presence for free.
10 GB of web hosting will be available on a reliable server.
Some users complain about not getting the help they need from support.
BuyQualityPLR’s website is a top PLR site of 2021-2022! It's a source for major Internet Marketing products and resources. Whether you’re an affiliate marketer, product creator, or course seller, BuyQualityPLR can point you in the right direction. You will find several eBooks and digital products related to the Health and Fitness niche, along with a series of security-based products. If you are searching for digital products, Resell Rights products, Private Label Rights products, or Internet Marketing products, BuyQualityPLR is among the best websites for your needs.
Free PLR articles packs, ebooks, and other digital products are available
Prices range from $3.99 to $99.90
Everything on this site is written by professionals
The quick download features available
Doesn't provide membership.
Offers thousands of PLR content items in many niches
Valuable courses available
You can't buy all content because it doesn't provide membership
The IDPLR website has helped thousands of internet marketers since 2008. This website follows a membership approach and allows you to gain access to thousands of PLR products in different niches. The best thing about this site is the quality of the products, which is extremely impressive. This is the best PLR website of 2021-2022, offering over 200k+ high-quality articles. It also gives you graphics, templates, ebooks, and audio.
3 Months ACCESS: $39
1 YEAR ACCESS: $69
LIFETIME ACCESS: $79
You will have access to over 12,590 PLR products.
You will get access to training tutorials and Courses in a Gold membership.
10 GB of web hosting will be available on a reliable server.
You will receive 3D eCover Software
It offers an unlimited download limit
Most important, you will get a 30 day money-back guarantee
A few products are available for free membership.
PLRmines is a leading digital product library for private label rights products. The site provides useful information on products that you can use to grow your business, as well as licenses for reselling the content. You can either purchase a membership or get access through a free trial, and you can find unlimited high-quality resources via the site's paid or free membership. Overall, the site is an excellent resource for finding outstanding private label rights content.
Lifetime membership: $97
4000+ ebooks from top categories
Members have access to more than 660 instructional videos covering all kinds of topics in a membership area.
You will receive outstanding graphics that are ready to use.
They also offer a variety of helpful resources and tools, such as PLR blogs, WordPress themes, and plugins
The free membership won't give you much value.
Super-Resell is another remarkable provider of PLR material. The platform was established in 2009 and offers valuable PLR content to users. Currently, the platform offers standard lifetime memberships and monthly plans at an affordable price. Interested users can purchase up to 10,000 products with digital rights or rights of re-sale. Super-Resell offers a wide range of products such as readymade websites, article packs, videos, ebooks, software, templates, and graphics, etc.
6 Months Membership: $49.90
Lifetime membership: $129
It offers you products that come with sales pages and those without sales pages.
You'll find thousands of digital products that will help your business grow.
Daily News update
The company has set up an automatic renewal system, which can result in charges even when you are not using the service.
7. Unstoppable PLR
UnStoppablePLR was launched in 2006 by Aurelius Tjin, an internet marketer. Over the last 15 years, UnStoppablePLR has provided massive value to users by offering high-quality PLR content. The site is one of the best PLR sites because of its affordability and flexibility.
Regular Price: $29/Month
You’ll get 30 PLR articles in various niches for free.
100% money-back guarantee.
Members get access to community
It gives you access to professionally designed graphics and much more.
People often complain that not enough PLR products are released each month.
8. Resell Rights Weekly
Resell Rights Weekly, a private label rights (PLR) website, provides exceptional PLR content. It is among the top free PLR websites that provide free membership. You will get 728+ PLR products completely free, plus new products every single week. Resell Rights Weekly gives you free instant access to all products, letting you download the ones you require.
Gold Membership: $19.95/Month
Lots of products available free of cost
Free access to the members forum
The products at this PLR site are of lower quality than the same items sold on other websites.
MasterResellRights was established in 2006, and it has helped many successful entrepreneurs. Once you join MasterResellRights, you will get access to more than 10,000 products and services from other members. It is one of the top PLR sites that provide high-quality PLR products to members across the globe. You will be able to access a lot of other membership privileges at no extra price. The website also provides PLR, MRR, and RR license products.
Access more than 10,000 high-quality PLR articles in different niches.
Get daily fresh updates.
Users get 8 GB of hosting space.
You can pay using PayPal.
Only members have access to the features of this site.
BigProductStore is a popular private label rights website that offers tens of thousands of digital products. These include software, videos, video courses, eBooks, and many others that you can resell, use as you want, or sell and keep 100% of the profit. The PLR website updates its product list daily. It currently offers over 10,000 products. The site offers original content for almost every niche and when you register as a member, you can access the exclusive products section where you can download a variety of high-quality, unique, and exclusive products.
Monthly Plan: $19.90/Month 27% off
One-Time-Payment: $98.50 50% off
Monthly Ultimate: $29.90/Month 36% off
One-Time-Payment Ultimate: $198.50 50% off
You can use PLR products to generate profits, give them as bonuses for your affiliate promotion campaign, or rebrand them and create new unique products.
Lifetime memberships for PLR products can save you money if you’re looking for a long-term solution to bulk goods.
The website is updated regularly with fresh, quality content.
Product descriptions may not provide much detail, so it can be difficult to know just what you’re downloading.
Some product categories such as WP Themes and articles are outdated.
Are you looking for a new graphic design tool? Would you like to read a detailed review of Canva? It's one of the tools I love using. I am also writing my first ebook using Canva; I'll publish it soon on my site, and you'll be able to download it for free. Let's start the review.
Canva is a free graphic design web application that allows you to create invitations, business cards, flyers, lesson plans, banners, and more using professionally designed templates. You can upload your own photos from your computer or from Google Drive, and add them to Canva's templates using a simple drag-and-drop interface. It's like having a basic version of Photoshop that doesn't require graphic design knowledge to use. It’s best for non-designers.
Who is Canva best suited for?
Canva is a great tool for small business owners, online entrepreneurs, and marketers who don’t have the time and want to edit quickly.
To create sophisticated graphics, a tool such as Photoshop is ideal. To use it, you’ll need to learn its hundreds of features, get familiar with the software, and it’s best to have a good background in design, too.
Also, to run the latest version of Photoshop, you need a high-end computer.
This is where Canva comes in: it lets you do all that with a drag-and-drop interface. It’s also easier to use, and there is a free plan; the paid version is available for $12.99 per month.
Free vs Pro vs Enterprise Pricing plan
The product is available in three plans: Free, Pro ($12.99/month per user or $119.99/year for up to 5 people), and Enterprise ($30 per user per month, minimum 25 people).
Free plan Features
250,000+ free templates
100+ design types (social media posts, presentations, letters, and more)
Pro Plan Features
100+ million premium and stock photos, videos, audio, and graphics
610,000+ premium and free templates with new designs daily
Access to Background Remover and Magic Resize
Create a library of your brand or campaign's colors, logos, and fonts with up to 100 Brand Kits
Remove image backgrounds instantly with background remover
Resize designs infinitely with Magic Resize
Save designs as templates for your team to use
100GB of cloud storage
Schedule social media content to 8 platforms
Enterprise Plan Features
Everything Pro has plus:
Establish your brand's visual identity with logos, colors and fonts across multiple Brand Kits
Control your team's access to apps, graphics, logos, colors and fonts with brand controls
Built-in workflows to get approval on your designs
Set which elements your team can edit and stay on brand with template locking
Log in with single-sign on (SSO) and have access to 24/7 Enterprise-level support.
How to Use Canva?
To get started on Canva, you will need to create an account by providing your email address, Google, Facebook or Apple credentials. You will then choose your account type between student, teacher, small business, large company, non-profit, or personal. Based on your choice of account type, templates will be recommended to you.
You can sign up for a free trial of Canva Pro, or you can start with the free version to get a sense of whether it’s the right graphic design tool for your needs.
When you sign up for an account, Canva will suggest different post types to choose from. Based on the type of account you set up you'll be able to see templates categorized by the following categories: social media posts, documents, presentations, marketing, events, ads, launch your business, build your online brand, etc.
Start by choosing a template for your post or searching for something more specific. Search by social network name to see a list of post types on each network.
Next, you can choose a template. Choose from hundreds of templates that are ready to go, with customizable photos, text, and other elements.
You can start your design by choosing from a variety of ready-made templates, searching for a template matching your needs, or working with a blank template.
Canva has a lot to choose from, so start with a specific search. If you want to create a business card, just search for it and you will see a lot of templates to choose from.
Inside the Canva designer, the Elements tab gives you access to lines and shapes, graphics, photos, videos, audio, charts, photo frames, and photo grids. The search box on the Elements tab lets you search everything on Canva.
To begin with, Canva has a large library of elements to choose from. To find them, be specific in your search query. You may also want to search in the following tabs to see various elements separately:
The Photos tab lets you search for and choose from millions of professional stock photos for your templates.
You can replace the photos in our templates to create a new look. This can also make the template more suited to your industry.
You can find photos on other stock photography sites like Pexels, Pixabay, and many more, or simply upload your own photos.
When you choose an image, Canva’s photo editing features let you adjust the photo’s settings (brightness, contrast, saturation, etc.), crop, or animate it.
When you subscribe to Canva Pro, you get access to a number of premium features, including the Background Remover. This feature allows you to remove the background from any stock photo in the library or any image you upload.
The Text tab lets you add headings, normal text, and graphical text to your design.
When you click on text, you'll see options to adjust the font, font size, color, format, spacing, and text effects (like shadows).
Canva Pro subscribers can choose from a large library of fonts on the Brand Kit or the Styles tab. Enterprise-level controls ensure that visual content remains on-brand, no matter how many people are working on it.
Create an animated image or video by adding audio to capture user’s attention in social news feeds.
If you want to use audio from another stock site or your own audio tracks, you can upload them in the Uploads tab or from the more option.
Want to create your own videos? Choose from thousands of stock video clips. You’ll find videos that range up to 2 minutes in length.
You can upload your own videos as well as videos from other stock sites in the Uploads tab.
Once you have chosen a video, you can use the editing features in Canva to trim the video, flip it, and adjust its transparency.
On the Background tab, you’ll find free stock photos to serve as backgrounds on your designs. Change out the background on a template to give it a more personal touch.
The Styles tab lets you quickly change the look and feel of your template with just a click. And if you have a Canva Pro subscription, you can upload your brand’s custom colors and fonts to ensure designs stay on brand.
If you have a Canva Pro subscription, you’ll have a Logos tab. Here, you can upload variations of your brand logo to use throughout your designs.
With Canva, you can also create your own logos. Note that you cannot trademark a logo with stock content in it.
Publishing with Canva
With Canva, free users can download and share designs to multiple platforms including Instagram, Facebook, Twitter, LinkedIn, Pinterest, Slack and Tumblr.
Canva Pro subscribers can create multiple post formats from one design. For example, you can start by designing an Instagram post, and Canva's Magic Resizer can resize it for other networks, Stories, Reels, and other formats.
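The resize-for-every-network idea can be sketched as fitting one design into several target canvases at a uniform scale. The dimensions below are common social media sizes used as assumptions here; they are not values taken from Canva:

```python
# Assumed target canvas sizes (width, height) per network -- illustrative only.
TARGET_SIZES = {
    "instagram_post": (1080, 1080),
    "instagram_story": (1080, 1920),
    "twitter_post": (1600, 900),
}

def fit_scale(src_w: int, src_h: int, dst_w: int, dst_h: int) -> float:
    """Largest uniform scale at which the source still fits inside the target."""
    return min(dst_w / src_w, dst_h / src_h)

def resize_for(network: str, src_w: int, src_h: int):
    """Scale a design to fit the named network's canvas, preserving aspect ratio."""
    dst_w, dst_h = TARGET_SIZES[network]
    s = fit_scale(src_w, src_h, dst_w, dst_h)
    return round(src_w * s), round(src_h * s)

# A square 1080x1080 Instagram post scaled to fit a 1600x900 Twitter canvas:
print(resize_for("twitter_post", 1080, 1080))  # (900, 900)
```

Using `min` of the two axis ratios is what keeps the design undistorted: the tighter dimension determines the scale.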
Canva Pro subscribers can also use Canva’s Content Planner to post content on eight different accounts on Instagram, Facebook, Twitter, LinkedIn, Pinterest, Slack, and Tumblr.
Canva Pro allows you to work with your team on visual content. Designs can be created inside Canva, and then sent to your team members for approval. Everyone can make comments, edits, revisions, and keep track via the version history.
When it comes to printing your designs, Canva has you covered. With an extensive selection of printing options, they can turn your designs into anything from banners and wall art to mugs and t-shirts.
Canva Print is perfect for any business seeking to make a lasting impression. Create inspiring designs people will want to wear, keep, and share. Hand out custom business cards that leave a lasting impression on customers' minds.
The Canva app is available on the Apple App Store and Google Play. The Canva app has earned a 4.9 out of five star rating from over 946.3K Apple users and a 4.5 out of five star rating from over 6,996,708 Google users.
In addition to mobile apps, you can use Canva’s integration with other Internet services to add images and text from sources like Google Maps, Emojis, photos from Google Drive and Dropbox, YouTube videos, Flickr photos, Bitmojis, and other popular visual content elements.
Canva Pros and Cons
A user-friendly interface
Canva is a great tool for people who want to create professional graphics but don’t have graphic design skills.
Hundreds of templates, so you'll never have to start from scratch.
Wide variety of templates to fit multiple uses
Branding kits to keep your team consistent with the brand colors and fonts
Creating visual content on the go
You can find royalty free images, audio, and video without having to subscribe to another service.
Some professional templates are available for Pro user only
Advanced photo editing features like blurring or erasing a specific area are missing.
Some elements that fall outside of a design are tricky to retrieve.
Features (like Canva presentations) could use some improvement.
If you are a regular user of Adobe products, you might find Canva's features limited.
Not ideal for those who prefer to work with vectors, especially for logos.
Expensive enterprise pricing
In general, Canva is an excellent tool for those who need simple images for projects. If you are a graphic designer with experience, you will find Canva’s platform lacking in customization and advanced features – particularly vectors. But if you have little design experience, you will find Canva easier to use than advanced graphic design tools like Adobe Photoshop or Illustrator for most projects. If you have any queries let me know in the comments section.
If you are looking for the best WordPress plugins, you are in the right place. Here is a list of the best WordPress plugins you should use on your blog to boost SEO, strengthen your security, and track every aspect of your blog. Creating good content is one factor, but there are many WordPress plugins that perform different actions and add to your success. So let's start.
For users who are serious about SEO, Yoast SEO will do the work needed to reach their goals. All they need to do is select a keyword, and the plugin will then optimize the page for that keyword.
Yoast offers many popular SEO WordPress plugin functions. It gives you real-time page analysis to optimize your content, images, meta descriptions, titles, and keywords. Yoast also checks the length of your sentences and paragraphs, whether you’re using enough transition words or subheadings, how often you use passive voice, and so on. Yoast tells Google whether or not to index a page or a set of pages too.
Let me summarize these points in bullets:
Enhance the readability of your article to reduce bounce rate
Optimize your articles with targeted keywords
Let Google know who you are and what your site is about
Improve your on-page SEO with advanced, real-time guidance and advice on keyword usage, linking, and external linking.
Keep your focus keywords consistent to help rank better on Google.
Preview how your page would appear in the search engine results page (SERP)
Crawl your site daily to ensure Google indexes it as quickly as possible.
Rate your article, informing you of any mistakes you might have made so that you can fix them before publishing.
Stay up-to-date with Google’s latest algorithm changes and adapt your on-page SEO as needed with smart suggestions from the Yoast SEO plugin. This plugin is always up-to-date.
Free Version is available
Premium version: $89/year, which comes with extra functions, allowing you to optimize your content for up to five keywords, among other benefits.
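The SERP preview and readability checks above rest largely on simple length rules. As a sketch of the kind of check such a plugin performs, here is a minimal version; the limits (60 characters for titles, 155 for meta descriptions) are commonly cited guidelines, used as assumptions rather than Yoast's exact values:

```python
# Assumed snippet limits -- common SEO guidelines, not Yoast's internal values.
TITLE_LIMIT = 60
META_LIMIT = 155

def serp_check(title: str, meta_description: str) -> list:
    """Return a list of problems that could truncate the snippet in search results."""
    problems = []
    if len(title) > TITLE_LIMIT:
        problems.append(f"title is {len(title) - TITLE_LIMIT} chars over the limit")
    if len(meta_description) > META_LIMIT:
        problems.append("meta description may be truncated in search results")
    return problems

print(serp_check("Short title", "Short description"))  # []
print(serp_check("x" * 70, "y" * 200))  # two warnings
```

A real plugin layers many more heuristics (keyword placement, passive voice, transition words) on top, but they follow the same pattern: measure the text, compare against a threshold, report.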
2. WP Rocket
A website running WordPress can put a lot of strain on a server, which increases the chances that the website will crash and harm your business. To avoid such an unfortunate situation and ensure that all your pages load quickly, you need a caching plugin like WP Rocket.
The WP Rocket plugin is designed to increase your website speed. Instead of waiting for pages to be saved to cache, WP Rocket turns on desired caching settings, like page cache and GZIP compression. The plugin also activates other features, such as CDN support and lazy image loading, to enhance your site speed.
Features in bullets:
Preloading the cache of pages
Reducing the number of HTTP requests allows websites to load more quickly.
Decreasing bandwidth usage with GZIP compression
Apply optimal browser caching headers (expires)
Remove Unused CSS
Deferred loading of images (LazyLoad)
Critical Path CSS generation and deferred loading of CSS files
WordPress Heartbeat API control
Easy import/export of settings
Easy roll back to a previous version
Single License =$49/year for one website
Plus License =$99/year for 3 websites
Infinite License =$249/year for unlimited websites
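Among the optimizations listed above, GZIP compression is the easiest to demonstrate: repetitive markup like HTML compresses extremely well, which is why serving compressed pages cuts bandwidth so sharply. A quick stdlib illustration:

```python
import gzip

# Repetitive HTML-like markup, standing in for a typical rendered page.
html = b"<div class='post'>hello world</div>" * 200

compressed = gzip.compress(html)

# The compressed payload is a small fraction of the original size.
print(f"original: {len(html)} bytes, gzipped: {len(compressed)} bytes")
```

Browsers advertise support via the `Accept-Encoding: gzip` request header, and the server (or a plugin configuring it) responds with `Content-Encoding: gzip`; the decompression on the client is transparent.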
Wordfence Security is a WordPress firewall and security scanner that keeps your site safe from malicious hackers, spam, and other online threats. This plugin comes with a web application firewall (WAF) called the Threat Defense Feed, which helps prevent brute-force attacks by ensuring you set stronger passwords and by limiting login attempts. It searches for malware and compares code, theme, and plugin files with the records in the WordPress.org repository to verify their integrity, reporting any changes to you.
Wordfence security scanner provides you with actionable insights into your website's security status and will alert you to any potential threats, keeping it safe and secure. It also includes login security features that let you activate reCAPTCHA and two-factor authentication for your website.
Features in Bullets.
Scans your site for vulnerabilities.
Alerts you by email when new threats are detected.
Supports advanced login security measures.
IP addresses may be blocked automatically if suspicious activity is detected.
Premium Plan: $99/year, which comes with extra security features like the real-time IP blocklist and country-blocking options, as well as support from highly qualified experts.
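The limited-login-attempts protection described above can be sketched as a failure counter per source IP: once an address crosses the threshold, further attempts are refused. This is a minimal illustration of the idea, not Wordfence's implementation (a real system would also expire counters over time):

```python
from collections import defaultdict

MAX_ATTEMPTS = 5  # assumed threshold for this sketch

failed = defaultdict(int)  # failed-login count per source IP

def allow_login_attempt(ip: str) -> bool:
    """Permit a login attempt only while the IP is under the failure threshold."""
    return failed[ip] < MAX_ATTEMPTS

def record_failure(ip: str) -> None:
    """Count a failed login against the source IP."""
    failed[ip] += 1

for _ in range(5):
    record_failure("203.0.113.9")

print(allow_login_attempt("203.0.113.9"))   # False: threshold reached
print(allow_login_attempt("198.51.100.1"))  # True: unrelated IP is unaffected
```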
Akismet can help prevent spam from appearing on your site. Every day, it automatically checks every comment against a global database of spam to block malicious content. With Akismet, you also won’t have to worry about innocent comments being caught by the filter or false positives. You can simply tell Akismet about those and it will get better over time. It also checks your contact form submissions against its global spam database and weed out unnecessary fake information.
Features in Bullets:
The program automatically checks comments and filters out spam.
Hidden or misleading links are often revealed in the comment body.
Akismet tracks the status of each comment, allowing you to see which ones were caught by Akismet and which ones were cleared by a moderator.
A spam-blocking feature that saves disk space and makes your site run faster.
Moderators can view a list of comments approved by each user.
Free to use for personal blog
5. Contact Form 7
Contact Form 7 is a plug-in that allows you to create contact forms that make it easy for your users to send messages to your site. The plug-in was developed by Takayuki Miyoshi and lets you create multiple contact forms on the same site; it also integrates Akismet spam filtering and lets you customize the styling and fields that you want to use in the form. The plug-in provides CAPTCHA and Ajax submitting.
Features in bullets:
Create and manage multiple contact forms
Easily customize form fields
Use simple markup to alter mail content
Add lots of third-party extensions for additional functionality
Shortcode offers a way to insert content into pages or posts.
Akismet spam filtering, Ajax-powered submitting, and CAPTCHA are all features of this plugin.
Free to use
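As a concrete illustration of the shortcode mechanism mentioned above (the `id` and `title` values here are placeholders for this sketch, not taken from the article), a Contact Form 7 form is dropped into any page or post with a shortcode like:

```
[contact-form-7 id="123" title="Contact form 1"]
```

The plugin generates this shortcode for each form you create; pasting it into the page content is all that's needed to render the form there.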
6. Monster Insights
When you're looking for an easy way to manage your Google Analytics web-tracking services, MonsterInsights can help. You can add, customize, and integrate Google Analytics data with ease, so you'll be able to see how every webpage performs, which online campaigns bring in the most traffic, and which content readers engage with the most. In effect, it surfaces the same data as Google Analytics, right inside WordPress.
It is a powerful tool to keep track of your traffic stats. With it, you can view stats for your active sessions, conversions, and bounce rates. You’ll also be able to see your total revenue, the products you sell, and how your site is performing when it comes to referrals.
MonsterInsights offers a free plan that includes basic Google Analytics integration, data insights, and user activity metrics.
Features in bullets:
Demographics and interest reports
Anonymize visitor IPs
See how far visitors scroll down your pages
Track multiple links to the same page and see which links get more clicks
Track sessions of two related sites as a single session
Google AdSense tracking
Sends you a weekly analytics report of your blog, downloadable as a PDF
Premium plan: $99.50/year, which adds extra features like page and post tracking, AdSense tracking, and custom tracking and reports.
7. Pretty Links
Pretty Links is a powerful WordPress plugin that enables you to easily cloak affiliate links on your website. It even allows you to redirect visitors based on a specific request, including permanent 301 and temporary 302/307 redirects.
Pretty Links also helps you automatically shorten URLs for your posts and pages.
You can also enable the auto-linking feature to automatically add affiliate links for certain keywords.
Create clean, easy-to-remember URLs on your website (301, 302, and 307 redirects only)
Random-generator or custom URL slugs
Track the number of clicks
Easy to understand reports
View click details, including IP address, remote host, browser, operating system, and referring site
You can pass custom parameters to your scripts when using pretty permalinks, and still have full tracking capability.
Exclude IP Addresses from Stats
Cookie-based system to track your activity across clicks
Create nofollow/noindex links
Toggle tracking on / off on each link.
Pretty Link Bookmarklet
Update redirected links easily to new URLs!
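To make the feature list above concrete, here is a minimal sketch (plain Python, not Pretty Links' actual code; all class and method names are hypothetical) of what a link shortener does at its core: map a custom slug to a target URL with a redirect status (301 permanent, 302/307 temporary) and count clicks as links are resolved.

```python
class LinkShortener:
    """Toy model of a pretty-link table: slug -> target URL + redirect type."""

    def __init__(self):
        self.links = {}  # slug -> {"target": url, "status": code, "clicks": 0}

    def create(self, slug, target, status=301):
        # Pretty Links supports 301, 302, and 307 redirects.
        if status not in (301, 302, 307):
            raise ValueError("only 301, 302, and 307 redirects are supported")
        self.links[slug] = {"target": target, "status": status, "clicks": 0}

    def resolve(self, slug):
        """Return (status, target) for a visit and record the click."""
        link = self.links[slug]
        link["clicks"] += 1
        return link["status"], link["target"]


shortener = LinkShortener()
shortener.create("deal", "https://example.com/affiliate?id=123", status=302)
print(shortener.resolve("deal"))  # (302, 'https://example.com/affiliate?id=123')
print(shortener.links["deal"]["clicks"])  # 1
```

A real plugin adds persistence and reporting on top, but the slug-to-target mapping plus a per-link click counter is the essence of the tracking features listed above.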
Beginner plan: $79/year, for use on 1 site
Marketer plan: $99/year, for use on up to 2 sites
Super Affiliate plan: $149/year, for use on up to 5 sites
We hope you’ve found this article useful. We appreciate you reading and welcome your feedback if you have it.
(Source: www.crunchhype.com)
Ginger VS Grammarly: When it comes to grammar checkers, Ginger and Grammarly are two of the most popular choices on the market. This article aims to highlight the specifics of each one so that you can make a more informed decision about the one you'll use.
What is Grammarly?
If you are a writer, you have probably heard of Grammarly. With over 10 million users across the globe, it is without a doubt the most popular AI writing-enhancement tool.
But today we are going to compare Ginger and Grammarly, so let's define Grammarly here. Like Ginger, Grammarly is an AI writing assistant that checks for grammatical errors, spelling, and punctuation. The free version covers the basics, identifying grammar and spelling mistakes, while the Premium version offers a lot more functionality: it detects plagiarism in your content, suggests word choices, and adds fluency to your writing.
Features of Grammarly
Grammarly detects basic to advanced grammatical errors, explains why something is an error, and suggests how you can improve it.
Create a personal dictionary
Check spelling for American, British, Canadian, and Australian English.
Detect unclear structure.
Explore overuse of words and wordiness.
Get to know about the improper tones.
Discover insensitive language and check that your writing aligns with your intent, audience, style, emotion, and more.
What is Ginger?
Ginger is a writing enhancement tool that not only catches typos and grammatical mistakes but also suggests content improvements. As you type, it picks up on errors then shows you what’s wrong, and suggests a fix. It also provides you with synonyms and definitions of words and allows you to translate your text into dozens of languages.
Ginger Software: Features & Benefits
Ginger's software helps you identify and correct common grammatical mistakes, such as misused consecutive nouns, and offers contextual spelling correction.
The sentence rephrasing feature can help you convey your meaning perfectly.
Ginger acts like a personal coach that helps you practice certain exercises based on your mistakes.
The dictionary feature helps users understand the meanings of words.
In addition, the program provides a text reader, so you can gauge your writing’s conversational tone.
Ginger vs Grammarly
Grammarly and Ginger are two popular grammar checker software brands that help you to become a better writer. But if you’re undecided about which software to use, consider these differences:
Grammarly only supports the English language while Ginger supports 40+ languages.
Grammarly offers a wordiness feature while Ginger lacks a Wordiness feature.
Grammarly shows an accuracy score while Ginger lacks an accuracy score feature.
Grammarly has a plagiarism checker while ginger doesn't have such a feature.
Grammarly can recognize an incorrect use of numbers while Ginger can’t recognize an incorrect use of numbers.
Grammarly and Ginger both have mobile apps.
Ginger and Grammarly offer monthly, quarterly, and annual plans.
Grammarly allows you to check uploaded documents, while Ginger doesn't.
Grammarly Offers a tone suggestion feature while Ginger doesn't offer a tone suggestion feature.
Ginger helps to translate documents into 40+ languages while Grammarly doesn't have a translation feature.
Ginger Offers text to speech features while Grammarly doesn't have such features.
Grammarly Score: 7/10
So Grammarly wins here.
Ginger VS Grammarly: Pricing Difference
Ginger offers a Premium subscription for $13.99/month; it comes to $11.19/month billed quarterly and $7.49/month billed annually.
On the other hand, Grammarly offers a Premium subscription for $30/month on a monthly plan, $20/month billed quarterly, and $12/month billed annually.
For companies with three or more employees, the Business plan costs $12.50/month for each member of your team.
Ginger Pros:
Affordable subscription plans (additional discounts are available)
Active and passive voice changer
Translates documents in 40+ languages
Browser extension available
Personal trainer feature helps users develop their knowledge of grammar
Text-to-speech feature reads work out loud
Get a full refund within 7 days
Ginger Cons:
Mobile apps aren't free
Limited monthly corrections for free users
No style checker
No plagiarism checker
Not as user-friendly as Grammarly
You are unable to upload or download documents; however, you may copy and paste files as needed.
Doesn't offer a free trial
Summarizing the Ginger VS Grammarly: My Recommendation
While both writing assistants are fantastic in their own ways, you need to choose the one that fits your needs.
For example, go for Grammarly if you want a plagiarism tool included.
Choose Ginger if you want to write in languages other than English. I will list the differences for you to make the distinctions clearer.
Grammarly offers a plagiarism checking tool
Ginger provides text to speech tool
Grammarly helps you check uploaded documents
Ginger supports over 40 languages
Grammarly has a more friendly UI/UX
Both Ginger and Grammarly are awesome writing tools, without a doubt. Depending on your needs, you might want to use Ginger over Grammarly; in my experience, though, I found Grammarly easier to use than Ginger.
Which one do you like? Let me know, and share your opinions, in the comments section below.
(Source: www.crunchhype.com)
But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.
How is AI currently being used to design the next generation of chips?
Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.
Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.
What are the benefits of using AI for chip design?
Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
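Gorr's surrogate-model idea can be sketched in a few lines. In this illustrative example (a stand-in, not a real chip simulation: `expensive_model`, the sample spacing, and the sweep range are all invented for the sketch), we sample an "expensive" function at a handful of design points, fit a cheap piecewise-linear surrogate, and then run a dense parameter sweep on the surrogate instead of the expensive model.

```python
import math


def expensive_model(x):
    # Stand-in for a physics-based solver that might take minutes per call.
    return math.exp(-x) * math.sin(3 * x)


# 1) Evaluate the expensive model at a few design points only.
xs = [i * 0.25 for i in range(9)]  # 0.0 .. 2.0
ys = [expensive_model(x) for x in xs]


def surrogate(x):
    # 2) Cheap surrogate: piecewise-linear interpolation of the samples.
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x outside fitted range")


# 3) Parameter sweep on the surrogate: thousands of nearly free evaluations.
sweep = [i * 0.001 for i in range(2001)]
best_x = max(sweep, key=surrogate)
print(f"surrogate optimum near x = {best_x:.3f}")
```

The surrogate's optimum is only as good as its fit (here it can only land on a sample point), which is exactly the accuracy trade-off Gorr discusses later: you iterate quickly on the cheap model, then verify promising candidates against the expensive one.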
So it’s like having a digital twin in a sense?
Gorr: Exactly. That's pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.
So, it’s going to be more efficient and, as you said, cheaper?
Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.
We’ve talked about the benefits. How about the drawbacks?
Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it's not going to be as accurate as that precise model that we’ve developed over the years.
Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It's a case where you might have models to predict something and different parts of it, but you still need to bring it all together.
One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.
How can engineers use AI to better prepare and extract insights from hardware or sensor data?
Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
What should engineers and designers consider when using AI for chip design?
Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.
How do you think AI will affect chip designers’ jobs?
Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.
How do you envision the future of AI and chip design?
Gorr: It's very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
(Source: spectrum.ieee.org)
Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.
IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.
Now researchers at MIT have managed both to reduce the size of the qubits and to do so in a way that reduces the interference between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.
“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”
The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.
Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).
Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. [Image: Nathan Fiske/MIT]
In that environment, the insulating materials available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects and are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors, in which the plates are positioned laterally to one another rather than on top of one another.
As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
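The trade-off between plate area and dielectric follows from the textbook parallel-plate relation (this formula is standard physics, not taken from the MIT paper):

```latex
C = \varepsilon_0 \varepsilon_r \frac{A}{d}
```

where $A$ is the plate area, $d$ the dielectric thickness, and $\varepsilon_r$ the relative permittivity of the insulator. For a fixed target capacitance $C$, shrinking $d$ to a few atomic monolayers of a low-loss dielectric like hBN lets the area $A$ shrink proportionally, which is why the sandwich geometry yields a much smaller footprint than the open-face coplanar design.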
In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.
On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.
While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.
“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are “sealed” and we don’t see any noticeable degradation over time when exposed to the atmosphere.”
This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.
“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.
Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
(Source: spectrum.ieee.org)
Non-fungible tokens (NFTs) are the most popular digital assets today, capturing the attention of cryptocurrency investors, whales, and people around the world. People find it amazing that some users spend thousands or millions of dollars on a single NFT-based image of a monkey or other token when anyone can simply take a screenshot for free. So here we share some frequently asked questions about NFTs.
1) What is an NFT?
NFT stands for non-fungible token, a cryptographic token on a blockchain with unique identification codes that distinguish it from other tokens. NFTs are unique and not interchangeable, which means no two NFTs are the same. An NFT can be a unique artwork, GIF, image, video, audio album, in-game item, collectible, and so on.
2) What is Blockchain?
A blockchain is a distributed digital ledger that allows for the secure storage of data. By recording any kind of information—such as bank account transactions, the ownership of Non-Fungible Tokens (NFTs), or Decentralized Finance (DeFi) smart contracts—in one place, and distributing it to many different computers, blockchains ensure that data can’t be manipulated without everyone in the system being aware.
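The tamper-evidence described above comes from each block recording a hash of the previous block. Here is a minimal illustrative sketch (plain Python, not a real blockchain client; the record strings and function names are invented for the example) showing why editing any past record is immediately detectable:

```python
import hashlib
import json


def block_hash(block):
    # Deterministic hash of a block's full contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def make_chain(records):
    chain, prev = [], "0" * 64  # genesis "previous hash"
    for data in records:
        block = {"data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain


def is_valid(chain):
    # Re-derive every link: each block must point at the previous block's hash.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True


chain = make_chain(["alice pays bob 5", "bob mints NFT #42"])
print(is_valid(chain))  # True
chain[0]["data"] = "alice pays bob 500"  # tamper with history
print(is_valid(chain))  # False
```

Changing one record changes that block's hash, so the next block's stored `prev` no longer matches; on a real network, distributing copies to many computers means a tamperer would also have to rewrite everyone else's chain.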
3) What makes an NFT valuable?
The value of an NFT comes from its ability to be traded freely and securely on the blockchain, which is not possible with other current digital-ownership solutions. The NFT points to its location on the blockchain, but doesn't necessarily contain the digital property. For example, if you replace one bitcoin with another, you will still have the same thing; but if you buy a non-fungible item, such as a movie ticket, it is impossible to replace it with any other movie ticket, because each ticket is unique to a specific time and place.
4) How do NFTs work?
One of the unique characteristics of non-fungible tokens (NFTs) is that they can be tokenised to create a digital certificate of ownership that can be bought, sold and traded on the blockchain.
As with crypto-currency, records of who owns what are stored on a ledger that is maintained by thousands of computers around the world. These records can’t be forged because the whole system operates on an open-source network.
NFTs also contain smart contracts—small computer programs that run on the blockchain—that give the artist, for example, a cut of any future sale of the token.
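The royalty logic a smart contract might encode can be sketched very simply. This is plain Python, not Solidity, and the 10% royalty rate and function name are assumptions made up for the illustration: on every resale, a fixed cut goes back to the original artist and the remainder to the seller.

```python
def settle_sale(price, royalty_rate=0.10):
    """Split a resale price between the original artist and the seller."""
    artist_cut = round(price * royalty_rate, 2)
    seller_proceeds = round(price - artist_cut, 2)
    return artist_cut, seller_proceeds


print(settle_sale(2000))  # (200.0, 1800.0)
```

In an actual smart contract this split executes automatically on-chain with every transfer, which is what lets artists earn from future sales without any intermediary.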
5) What’s the connection between NFTs and cryptocurrency?
Non-fungible tokens (NFTs) aren't cryptocurrencies, but they do use blockchain technology. Many NFTs are based on Ethereum, where the blockchain serves as a ledger for all the transactions related to the NFT and the properties it represents.
5) How to make an NFT?
Anyone can create an NFT. All you need is a digital wallet, some Ethereum, and a connection to an NFT marketplace where you'll be able to upload and sell your creations.
6) How do you validate the authenticity of an NFT?
When you purchase an NFT, that purchase is recorded on the blockchain—the ledger of transactions—and that entry acts as your proof of ownership.
7) How is an NFT valued? What are the most expensive NFTs?
The value of an NFT varies a lot based on the digital asset up for grabs. People use NFTs to trade and sell digital art, so when creating an NFT, you should consider the popularity of your digital artwork along with historical statistics.
In the year 2021, a digital artist called Pak created an artwork called The Merge. It was sold on the Nifty Gateway NFT market for $91.8 million.
8) Can NFTs be used as an investment?
Non-fungible tokens can be used in investment opportunities. One can purchase an NFT and resell it at a profit. Certain NFT marketplaces let sellers of NFTs keep a percentage of the profits from sales of the assets they create.
9) Will NFTs be the future of art and collectibles?
Many people want to buy NFTs because it lets them support the arts and own something cool from their favorite musicians, brands, and celebrities. NFTs also give artists an opportunity to program in continual royalties if someone buys their work. Galleries see this as a way to reach new buyers interested in art.
10) How do we buy NFTs?
There are many places to buy digital assets, like OpenSea, and their policies vary. On NBA Top Shot, for instance, you sign up for a waitlist that can be thousands of people long. When a digital asset goes on sale, you are occasionally chosen to purchase it.
11) Can I mint an NFT for free?
To mint an NFT, you must pay a gas fee to process the transaction on the Ethereum blockchain, but you can mint your NFT on a different blockchain called Polygon to avoid paying gas fees. This option is available on OpenSea, and it simply means that your NFT will only be tradable on Polygon's blockchain, not Ethereum's. Mintable also allows you to mint NFTs for free without paying any gas fees.
12) Do I own an NFT if I screenshot it?
The answer is no. Non-fungible tokens are minted on the blockchain using cryptocurrencies such as Ethereum, Solana, Polygon, and so on. Once an NFT is minted, the transaction is recorded on the blockchain, and the contract or license is awarded to whoever holds that NFT in their wallet.
13) Why are people investing so much in NFTs?
Non-fungible tokens have gained the hearts of people around the world, and they have given digital creators the recognition they deserve. One of the remarkable things about non-fungible tokens is that you can take a screenshot of one, but you don’t own it. This is because when a non-fungible token is created, then the transaction is stored on the blockchain, and the license or contract to hold such a token is awarded to the person owning the token in their digital wallet.
You can sell your work and creations by attaching a license to them on the blockchain, where ownership can be transferred. This lets you get exposure without losing full ownership of your work. Some of the most successful projects include CryptoPunks, Bored Ape Yacht Club, The Sandbox, and World of Women. These NFT projects have gained popularity globally and are owned by celebrities and other successful entrepreneurs. Owning one of these NFTs gives you an automatic ticket to exclusive business meetings and life-changing connections.
That's a wrap. I hope you found this article enlightening; I've just answered some questions with my limited knowledge of NFTs. If you have any questions or suggestions, feel free to drop them in the comment section below. I also have a question for you: is Bitcoin an NFT? Let me know in the comment section below.
(Source: www.crunchhype.com)
Are you an avid Chrome user? That's nice to hear. But consider whether there are any essential Chrome extensions currently missing from your browsing life. Here we're going to share the 10 best Chrome extensions that are perfect for everyone. Let's start.
1. LastPass
When you have too many passwords to remember, LastPass remembers them for you.
This Chrome extension is an easy way to save time and increase security. It's a password manager that will log you into all of your accounts; you only need to remember one password, your LastPass master password.
Save usernames and passwords, and LastPass will log you in automatically.
Fill forms quickly with your saved addresses, credit card numbers, and more.
2. MozBar
MozBar is an SEO toolbar extension that makes it easy to analyze your web pages' SEO while you surf. You can customize your search to see data for a particular region or for all regions. You get data such as website and domain authority and link profile. The status column tells you whether there are any no-followed links to the page. You can also compare link metrics. There is a pro version of MozBar, too.
3. Grammarly
Grammarly is a real-time grammar-checking and spelling tool for online writing. It checks spelling, grammar, and punctuation as you type, and has a dictionary feature that suggests related words. If you write on a mobile phone, Grammarly also has a mobile keyboard app.
4. VidIQ
VidIQ is a SaaS product and Chrome extension that makes it easier to manage and optimize your YouTube channels. It keeps you informed about your channel's performance with real-time analytics and powerful insights.
Learn more about insights and statistics beyond YouTube Analytics
Find great videos with the Trending tab.
You can check out any video’s YouTube rankings and see how your own video is doing on the charts.
Track keyword history to determine when a keyword rises or falls in popularity over time.
Quickly find out which videos are performing the best on YouTube right now.
Let this tool suggest keywords for you to use in your title, description and tags.
5. ColorZilla
ColorZilla is a browser extension that lets you find the exact color of any object in your web browser. This is especially useful when you want to match elements on your page to the color of an image.
Advanced Color Picker (similar to Photoshop's)
Ultimate CSS Gradient Generator
Webpage Color Analyzer, which determines the palette of colors used on a particular website
Palette Viewer with 7 pre-installed palettes
Eyedropper - sample the color of any pixel on the page
Color History of recently picked colors
Displays some info about the element, including the tag name, class, id and size.
Auto copy picked colors to clipboard
Get colors of dynamic hover elements
Pick colors from Flash objects
Pick colors at any zoom level
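An eyedropper like ColorZilla's typically reports colors as hex codes; converting one to its RGB components is a simple parse. This short sketch (the helper name is invented for the example) shows the relationship between the two notations:

```python
def hex_to_rgb(hex_color):
    """Parse '#rrggbb' (or 'rrggbb') into an (r, g, b) tuple of ints 0-255."""
    hex_color = hex_color.lstrip("#")
    return tuple(int(hex_color[i:i + 2], 16) for i in (0, 2, 4))


print(hex_to_rgb("#1e90ff"))  # (30, 144, 255)
```

This is the same conversion a "copy to clipboard" color picker does in reverse when it formats a sampled pixel as a hex string.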
6. Honey
Honey is a Chrome extension that lets you save products from websites and notifies you when they're available at a lower price. It's one of the top Chrome extensions for finding coupon codes whenever you shop online.
Best for finding exclusive prices on Amazon.
A free reward program called Honey Gold.
Searches and filters for the best price to fit your demand.
7. GMass: Powerful Chrome Extension for Gmail Marketers
GMass (or Gmail Mass) permits users to compose and send mass emails using Gmail. It is a great tool because you can use it as a replacement for a third-party email-sending platform, and it boosts your emailing functionality right inside Gmail.
8. Notion Web Clipper: Chrome Extension for Geeks
It's a Chrome extension for geeks that enables you to highlight and save what you see on the web.
It was designed by Notion, a Google Workspace alternative that helps teams craft better ideas and collaborate effectively.
Save anything online with just one click
Use it on any device
Organize your saved clips quickly
Tag, share and comment on the clips
If you are someone who works online, you need to surf the internet to get your business done. And often there is no time to read or analyze something. But it's important that you do it. Notion Web Clipper will help you with that.
9. WhatFont: Chrome Extension for identifying Any Site Fonts
WhatFont is a Chrome extension that allows web designers to easily identify and compare different fonts on a page. The first time you use it on any page, WhatFont copies the selected page and uses it to find out what fonts are present, generating an image that shows all those fonts in different sizes. Besides popular websites like Google or Amazon, you can also use it on sites where embedded fonts are used.
SimilarWeb is an SEO add-on for both Chrome and Firefox. It allows you to check website traffic and key metrics for any website, including engagement rate, traffic ranking, keyword ranking, and traffic sources. This is a good tool if you are looking to find new and effective SEO strategies, as well as analyze trends across the web.
Discover keyword trends
Know fresh keywords
Get benefit from the real traffic insights
Analyze engagement metrics
Explore unique visitors data
Analyze your industry's category
Use month to date data
How to Install Chrome Extensions on Android
Most people know how to install an extension on a PC, but many don't know how to install one on an Android phone, so here is how to do it.
1. Download Kiwi browser from Play Store and then Open it.
2. Tap the three dots at the top right corner and select Extension.
3. Tap (+From Store) to access the Chrome Web Store, or simply search for the Chrome Web Store and open it.
4. Once you find an extension, click Add to Chrome. A message will pop up asking you to confirm your choice. Hit OK to install the extension in the Kiwi browser.
5. To manage extensions on the browser, tap the three dots in the upper right corner. Then select Extensions to access a catalog of installed extensions that you can disable, update or remove with just a few clicks.
Your Chrome extensions should install on Android, but there's no guarantee all of them will work, because Chrome extensions are not optimized for Android devices.
We hope this list of the 10 best Chrome extensions will help you pick the right ones for you. We selected these extensions by matching their features to the needs of different categories of people. Let us know which extension you like most in the comment section.
Match ID: 208 Score: 3.57 source: www.crunchhype.com age: 367 days qualifiers: 3.57 mit
Email is the marketing tool that helps you create a seamless, connected, frictionless buyer journey. More importantly, email marketing allows you to build relationships with prospects, customers, and past customers. It's your chance to speak to them right in their inbox, at a time that suits them. Along with the right message, email can become one of your most powerful marketing channels.
2. What are the benefits of email marketing?
Email marketing is one of the best ways to build long-term relationships with your clients and increase your company's sales.
Benefits of email marketing for business:
Better brand recognition
Statistics of what works best
More traffic to your products/services/newsletter
Most businesses use email marketing and make a lot of money with it.
3. What is the best day and time to send my marketing emails?
Again, the answer to this question varies from company to company. And again, testing is the way to find out what works best. Typically, weekends and mornings seem to be times when more emails are opened, but since your audience may have different habits, it's best to experiment and then use your data to decide.
4. Which metrics should I be looking at?
The two most important metrics for email marketing are open rate and click-through rate. If your emails aren't opened, subscribers will never see your full marketing message, and if they open them but don't click through to your site, your emails won't convert.
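These two metrics are simple ratios. The sketch below computes them using one common convention (rates measured against delivered emails, i.e. sent minus bounces); `emailMetrics` is a hypothetical helper, and some tools measure against total sends instead, so treat the denominators as an assumption.

```javascript
// Open rate and click-through rate, computed against delivered emails
// (sent minus bounces) -- one common convention, not the only one.
function emailMetrics({ sent, bounces, opens, clicks }) {
  const delivered = sent - bounces;
  return {
    openRate: (opens / delivered) * 100, // % of delivered emails opened
    clickThroughRate: (clicks / delivered) * 100, // % of delivered emails clicked
    clickToOpenRate: (clicks / opens) * 100, // % of openers who also clicked
  };
}
```

For example, `emailMetrics({ sent: 1050, bounces: 50, opens: 250, clicks: 50 })` gives an open rate of 25%, a click-through rate of 5%, and a click-to-open rate of 20%.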
5. How do I write a decent subject line?
The best subject lines are short and to the point, accurately describing the content of the email, but also catchy and intriguing, so the reader wants to know more. Once again, this is the perfect place for A/B testing, to see what types of subject lines work best with your audience. Your call to action should be clear and simple. It should be somewhere near the top of your email for those who don't finish reading, then repeated at the end for those reading all the way through. It should state exactly what you want subscribers to do, for example "Click here to download the premium theme for free."
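The bookkeeping behind a subject-line A/B test can be sketched in a few lines. The helper below (`pickSubjectLineWinner`, a hypothetical name) naively picks the variant with the higher open rate once both variants have a minimum number of sends; a production test would add a statistical significance check before declaring a winner.

```javascript
// Naive subject-line A/B comparison: return the subject of the variant
// with the higher open rate, or null if there is not enough data yet
// (or the rates are tied). Illustrative only -- a real test should also
// check statistical significance.
function pickSubjectLineWinner(variantA, variantB, minSends = 100) {
  if (variantA.sends < minSends || variantB.sends < minSends) {
    return null; // not enough data yet
  }
  const rateA = variantA.opens / variantA.sends;
  const rateB = variantB.opens / variantB.sends;
  if (rateA === rateB) return null; // tie
  return rateA > rateB ? variantA.subject : variantB.subject;
}
```

For example, comparing a variant with 150 opens out of 500 sends against one with 90 opens out of 500 sends returns the first variant's subject line.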
6. Is email marketing still effective?
Email marketing is one of the most effective ways for a business to reach its customers directly. Think about it. You don't post something on your site hoping people will visit it. You don't even post something on a social media page and hope fans see it. You're sending something straight to each person's inbox, where they'll definitely see it! Even if they don't open it, they'll still see your subject line and business name every time you send an email, so you're still communicating directly with your audience.
7. How do I grow my email subscriber list? Should I buy an email list or build it myself?
Buying an email list is a waste of time and money: the accounts are unverified, their owners are not interested in your brand, and a mailing list is useless if your subscribers do not open your emails. There are better ways to grow your mailing list.
Offer a free ebook hosted on a landing page where visitors have to enter their email address to download the file. You can also create a forum page on your website that asks visitors what questions they have about your business and collects their email addresses so you can follow up.
8. How do I prevent my audience from unsubscribing?
If the subject line of an email is irrelevant to customers, they will ignore it. But if irrelevant emails keep coming, they will get annoyed and unsubscribe. So send emails that are relevant and beneficial to the customer, and don't send emails that focus only on sales, offers and discounts.
Share information about your business and offers so you can connect with customers, and update them on recent trends in your industry. The basic role of email is first and foremost to connect with customers; get the most out of this tool.
9. What is the difference between a cold email and a spam email?
Cold emails are mostly sales emails sent with content aligned to the needs of the recipient. They are usually personalized and include a business perspective. However, they are still unsolicited email, and unsolicited email risks being marked as spam.
If users regularly receive this type of unsolicited email in their inboxes, chances are your messages will soon be diverted to spam or junk folders. The most important way to prevent this is to respect your recipients' choice to opt out of receiving emails from you: add a link that makes it easy to unsubscribe, and make sure you are familiar with the CAN-SPAM Act and its regulations.
10. Where can I find email template?
Almost all email campaign tools provide you with ready-made templates. Whether you use MailChimp or Pardot, you'll get several email templates ready to use.
However, if you want to create a template from scratch, you can: most email campaign tools have an option to paste the HTML code of your own design.
11. What email marketing trend will help marketers succeed in 2022?
Listening to and getting to know your customers. I think people have realized how bad it feels when a brand or company obsesses over itself without knowing its customers' personal needs. Marketers who listen empathetically and then provide value based on what they learn will win.
You can approach email marketing in different ways. We have compiled this list of the most frequently asked questions to help you understand how to get started, what constraints to keep in mind, and what future developments you will need. We don't have 100% answers for every situation, and there's always a chance you will have something new and different to deal with as you market your own business.
Match ID: 209 Score: 3.57 source: www.crunchhype.com age: 369 days qualifiers: 3.57 mit