********** LAW **********
Fox News producer alleges sexism, coached testimony, in new lawsuit
Tue, 21 Mar 2023 14:15:44 EDT
Abby Grossberg alleges that she faced discrimination at Fox News and that she was coached to mislead in her Dominion defamation case deposition.
‘This is a ransom note’: the battle for wild camping on Dartmoor - video
Tue, 21 Mar 2023 14:02:56 GMT
Dartmoor National Park was the only place in England where there was a right to wild camp without seeking permission. The right to camp freely was lost when Dartmoor’s sixth-largest landowner – the hedge fund manager and Conservative party donor Alexander Darwall – won a high court case overturning it.
The Guardian followed people on one of the UK’s largest ever protests over public use of the countryside, and joined wild campers on Dartmoor as they sought to appeal against the court’s decision and extend the right to roam with new laws
Defence team had argued David Hunter’s confession should be ruled inadmissible as evidence in trial
A court in Cyprus has ruled that the confession of a retired Northumberland coalminer accused of murdering his terminally ill wife was obtained lawfully and can be used in evidence against him.
In what will amount to a major setback for David Hunter, 75, who has campaigned to be tried on the lesser charge of manslaughter, Judge Michalis Droussiotis announced that statements of admission made by the Briton were admissible.
In the summer of 2020, federal law enforcement launched a broad and, until now, secret strategy to infiltrate racial justice groups.
The post The FBI Used an Undercover Cop With Pink Hair to Spy on Activists and Manufacture Crimes appeared first on The Intercept.
Bezalel Smotrich’s comments come as far-right coalition pushes ahead with judiciary overhaul
An Israeli minister has claimed there is “no such thing” as a Palestinian people as Israel’s new coalition government, its most hardline ever, ploughed ahead with a part of its plan to overhaul the judiciary.
Benjamin Netanyahu’s coalition said it was pushing a key part of the overhaul – which would give the coalition control over who becomes a justice or a judge – before the parliament takes a month’s holiday break next week.
At a U.S. base in Syria, some attacks get press while others stay hidden.
The post The Pentagon’s Obsession With Secrecy Protected a Marine Accused of Sexual Assault appeared first on The Intercept.
The officially sanctioned conspiracy theory that Saddam Hussein was behind 9/11 set a dangerous precedent.
The post Bush’s Iraq War Lies Created a Blueprint for Donald Trump appeared first on The Intercept.
With Amoedo’s departure to save his reputation and the hiring of Leandro Narloch, the party shows that it now wants to openly embrace the far right.
The post O Novo no longer needs to pretend: it can now line up its dress sneakers with the combat boots appeared first on The Intercept.
Gov. Greg Abbott’s budget cuts led to the release of a man who went on to be accused in the killings.
The post Top Cop Scapegoats Reform DA for Double Murder in Austin appeared first on The Intercept.
Damon Silvers, deputy chair of the Congressional Oversight Panel for the 2008 bank bailout, explains how deregulation paved the way for SVB’s collapse.
The post Understanding the Silicon Valley Bank Run appeared first on The Intercept.
They’re all doing great, thanks for asking.
The post The Architects of the Iraq War: Where Are They Now? appeared first on The Intercept.
“I think that we need to see what has actually transpired.”
The post Senators Aren’t Ready to Blame Themselves for Silicon Valley Bank Implosion appeared first on The Intercept.
The shadow of U.S. war crimes in Iraq hangs over the Pentagon's refusal to support probes into Russian atrocities in Ukraine.
The post Biden Administration Splits on Prosecuting Russia for War Crimes in Ukraine appeared first on The Intercept.
Nearly 90% of the multibillion-dollar federal lobbying apparatus in the United States serves corporate interests. In some cases, the objective of that money is obvious. Google pours millions into lobbying on bills related to antitrust regulation. Big energy companies expect action whenever there is a move to end drilling leases for federal lands, in exchange for the tens of millions they contribute to congressional reelection campaigns.
But lobbying strategies are not always so blunt, and the interests involved are not always so obvious. Consider, for example, a 2013 ...
NASA’s Artemis I mission launched in the predawn hours this morning, at 1:04 a.m. Eastern time, carrying the hopes of a space program now aiming to land American astronauts back on the moon. The Orion spacecraft on its way to the moon also carries a lot of CubeSat-size science. (As of press time, some satellites had even begun to tweet.)
And while the objective of Artemis I is to show that the launch system and spacecraft can make a trip to the moon and return safely to Earth, the mission is also a unique opportunity to send a whole spacecraft-load of science into deep space. In addition to the interior of the Orion capsule itself, there are enough nooks and crannies to handle a fair number of CubeSats, and NASA has packed as many experiments as it can into the mission. From radiation phantoms to solar sails to algae to a lunar surface payload, Artemis I has a lot going on.
Most of the variety of the science on Artemis I comes in the form of CubeSats, little satellites that are each the size of a large shoebox. The CubeSats are tucked snugly into berths inside the Orion stage adapter, which is the bit that connects the interim cryogenic propulsion stage to the ESA service module and Orion. Once the propulsion stage lifts Orion out of Earth orbit and pushes it toward the moon, the stage and adapter will separate from Orion, and the CubeSats will launch themselves.
Ten CubeSats rest inside the Orion stage adapter at NASA’s Kennedy Space Center. NASA/KSC
While the CubeSats look identical when packed up, each one is unique in both hardware and software, with its own destination and mission objectives. There are 10 in total (three weren’t ready in time for launch, which is why there are a few empty slots in the image above).
Here is what each one is and does:
While the CubeSats head off to do their own thing, the Orion capsule itself will serve as the temporary home of a trio of mannequins. The first, a male-bodied version provided by NASA, is named Commander Moonikin Campos, after Arturo Campos, the NASA electrical engineer who wrote the procedures that allowed the Apollo 13 command module to draw power from the lunar module’s batteries, one of many actions that saved the Apollo 13 crew.
Moonikin Campos prepares for placement in the Orion capsule. NASA
Moonikin Campos will spend the mission in the Orion commander’s seat, wearing an Orion crew survival system suit. Essentially itself a spacecraft, the suit is able to sustain its occupant for up to six days if necessary. Moonikin Campos’s job will be to pretend to be an astronaut, and sensors inside him will measure radiation, acceleration, and vibration to help NASA prepare to launch human astronauts in the next Artemis mission.
Helga and Zohar in place on the flight deck of the Orion spacecraft. NASA/DLR
Accompanying Moonikin Campos are two female-bodied mannequins, named Helga and Zohar, developed by the German Aerospace Center (DLR) along with the Israel Space Agency. These are more accurately called “anthropomorphic phantoms,” and their job is to provide a detailed recording of the radiation environment inside the capsule over the course of the mission. The phantoms are female because women have more radiation-sensitive tissue than men. Both Helga and Zohar have over 6,000 tiny radiation detectors placed throughout their artificial bodies, but Zohar will be wearing an AstroRad radiation protection vest to measure how effective it is.
NASA’s Biology Experiment-1 is transferred to the Orion team. NASA/KSC
The final science experiment to fly onboard Orion is NASA’s Biology Experiment-1. The experiment is really just seeing what time in deep space does to some specific kinds of biology, so all that has to happen is for Orion to successfully haul some packages of sample tubes around the moon and back. Samples include:
There is some concern that because of the extensive delays with the Artemis launch, the CubeSats have been sitting so long that their batteries may have run down. Some of the CubeSats could be recharged, but for others, recharging was judged to be so risky that they were left alone. Even for CubeSats that don’t start right up, though, it’s possible that after deployment, their solar panels will be able to get them going. But at this point, there’s still a lot of uncertainty, and the CubeSats’ earthbound science teams are now pinning their hopes on everything going well after launch.
For the rest of the science payloads, success mostly means Orion returning to Earth safe and sound, which will also be a success for the Artemis I mission as a whole. And assuming it does so, there will be a lot more science to come.
In 2001, a team of engineers at a then-obscure R&D company called AC Propulsion quietly began a groundbreaking experiment. They wanted to see whether an electric vehicle could feed electricity back to the grid. The experiment seemed to prove the feasibility of the technology. The company’s president, Tom Gage, dubbed the system “vehicle to grid” or V2G.
The concept behind V2G had gained traction in the late 1990s after California’s landmark zero-emission-vehicle (ZEV) mandate went into effect and compelled automakers to commercialize electric cars. In V2G, environmental-policy wonks saw a potent new application of the EV that might satisfy many interests. For the utilities, it promised an economical way of meeting rising demand for electricity. For ratepayers, it offered cheaper and more reliable electricity services. Purveyors of EVs would have a new public-policy rationale backing up their market. And EV owners would become entrepreneurs, selling electricity back to the grid.
AC Propulsion’s experiment was timely. It occurred in the wake of the California electricity crisis of 2000 and 2001, when mismanaged deregulation, market manipulation, and environmental catastrophe combined to unhinge the power grid. Some observers thought V2G could prevent the kinds of price spikes and rolling blackouts then plaguing the Golden State. Around the same time, however, General Motors and other automakers were in the process of decommissioning their battery EV fleets, the key component of V2G.
AC Propulsion’s president, Tom Gage, explains the company’s vehicle-to-grid technology at a 2001 conference in Seattle. Photo-illustration: Max-o-matic; photo source: Alec Brooks
The AC Propulsion experiment thus became an obscure footnote in the tortuous saga of the green automobile. A decade later, in the 2010s, the battery EV began an astounding reversal of fortune, thanks in no small part to the engineers at ACP, whose electric-drive technology informed the development of the Roadster, the car that launched Tesla Motors. By the 2020s, automakers around the world were producing millions of EVs a year. And with the revival of the EV, the V2G concept was reborn.
If a modern electronics- and software-laden car can be thought of as a computer on wheels, then an electric car capable of discharging electricity to the grid might be considered a power plant on wheels. And indeed, that’s how promoters of vehicle-to-grid technology perceive the EV.
Keep in mind, though, that electricity’s unique properties pose problems to anyone who would make a business of producing and delivering it. Electricity is a commodity that is bought and sold, and yet unlike most other commodities, it cannot easily be stored. Once electricity is generated and passes into the grid, it is typically used almost immediately. If too much or too little electricity is present in the power grid, the network can suddenly become unbalanced.
At the turn of the 20th century, utilities promoted the use of electric truck fleets to soak up excess electricity. Photo-illustration: Max-o-matic; photo source: M&N/Alamy
Some operators of early direct-current power plants at the turn of the 20th century solved the problem of uneven power output from their generators by employing large banks of rechargeable lead-acid batteries, which served as a kind of buffer to balance the flow of electrons. As utilities shifted to more reliable alternating-current systems, they phased out these costly backup batteries.
Then, as electricity entrepreneurs expanded power generation and transmission capacity, they faced the new problem of what to do with all the cheap off-peak, nighttime electricity they could now produce. Utilities reconsidered batteries, not as stationary units but in EVs. As the historian Gijs Mom has noted, enterprising utility managers essentially outsourced the storage of electricity to the owners and users of the EVs then proliferating in northeastern U.S. cities. Early utility companies like Boston Edison and New York Edison organized EV fleets, favoring electric trucks for their comparatively capacious batteries.
In the early years of the automobile, battery-powered electric cars were competitive with cars fueled by gasoline and other types of propulsion. Photo-illustration: Max-o-matic; image source: Shawshots/Alamy
The problems of grid management that EVs helped solve faded after World War I. In the boom of the 1920s, U.S. utility barons such as Samuel Insull massively expanded the country’s grid systems. During the New Deal era, the federal government began funding the construction of giant hydropower plants and pushed transmission into rural areas. By the 1950s, the grid was moving electricity across time zones and national borders, tying in diverse sources of supply and demand.
The need for large-scale electrochemical energy storage as a grid-stabilizing source of demand disappeared. When utilities considered storage technology at all in the succeeding decades, it was generally in the form of pumped-storage hydropower, an expensive piece of infrastructure that could be built only in hilly terrain.
It wasn’t until the 1990s that the electric car reemerged as a possible solution to problems of grid electricity. In 1997, Willett Kempton, a professor at the University of Delaware, and Steve Letendre, a professor at Green Mountain College, in Vermont, began publishing a series of journal articles that imagined the bidirectional EV as a resource for electricity utilities. The researchers estimated that, if applied to the task of generating electricity, all of the engines in the U.S. light-duty vehicle fleet would produce around 16 times the output of stationary power plants. Kempton and Letendre also noted that the average light vehicle was used only around 4 percent of the time. Therefore, they reasoned, a fleet of bidirectional EVs could be immensely useful to utilities, even if it was only a fraction the size of the conventional vehicle fleet.
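Their back-of-the-envelope estimate is easy to sanity-check. The round numbers below are illustrative stand-ins (fleet size, average engine power, and stationary capacity are our assumptions, not Kempton and Letendre’s exact inputs):

```python
# Rough reproduction of the Kempton-Letendre fleet-power comparison.
# All inputs are round-number assumptions for illustration only.
fleet_size = 190e6               # U.S. light-duty vehicles, late 1990s (approx.)
engine_power_w = 100e3           # average engine output per vehicle, ~100 kW
stationary_capacity_w = 1.2e12   # U.S. stationary generating capacity, ~1.2 TW

fleet_power_w = fleet_size * engine_power_w
ratio = fleet_power_w / stationary_capacity_w
print(f"Fleet potential: {fleet_power_w / 1e12:.0f} TW, "
      f"roughly {ratio:.0f}x stationary capacity")
```

With these stand-ins the fleet works out to roughly 16 times stationary capacity, in line with the published estimate; the exact multiple, of course, depends entirely on the inputs.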
AC Propulsion cofounder Wally Rippel converted a Volkswagen microbus into an electric vehicle while he was still a student at Caltech. Photo-illustration: Max-o-matic; photo source: Herald Examiner Collection/Los Angeles Public Library
The engineers at AC Propulsion (ACP) were familiar with the basic precepts of bidirectional EV power. The company was the brainchild of Wally Rippel and Alan Cocconi, Caltech graduates who had worked in the late 1980s and early 1990s as consultants for AeroVironment, then a developer of lightweight experimental aircraft. The pair made major contributions to the propulsion system for the Impact, a battery-powered concept car that AeroVironment built under contract for General Motors. Forerunner of the famous EV1, the Impact was regarded as the most advanced electric car of its day, thanks to its solid-state power controls, induction motor, and integrated charger. The vehicle inspired California’s ZEV mandate, instituted in 1990. As Cocconi told me, the Impact was bidirectional-capable, although that function wasn’t fully implemented.
AeroVironment had encouraged its engineers to take creative initiative in developing the Impact, but GM tightly managed efforts to translate the idiosyncratic car into a production prototype, which rankled Cocconi and Rippel. Cocconi was also dismayed by the automaker’s decision to equip the production car with an off-board rather than onboard charger, which he believed would limit the car’s utility. In 1992, he and Rippel quit the project and, with Hughes Aircraft engineer Paul Carosa, founded ACP to further develop battery electric propulsion. The team applied their technology to a two-seat sports car called the tzero, which debuted in January 1997.
Video: “Electric Car tzero 0-60 3.6 sec faster than Tesla Roadster” (YouTube)
Through the 1990s and into the early 2000s, ACP sold its integrated propulsion systems to established automakers, including Honda, Volkswagen, and Volvo, for use in production models being converted into EVs. For car companies, this was a quick and cheap way to gain experience with battery electric propulsion while also meeting any quota they may have been subject to under the California ZEV mandate.
By the turn of the millennium, however, selling EV propulsion systems had become a hard way to make a living. In early 2000, when GM announced it had ceased production of the EV1, it signaled that the automaking establishment was abandoning battery electric cars. ACP looked at other ways of marketing its technology and saw an opportunity in the California electricity crisis then unfolding.
Traditionally, the electricity business combined several discrete services, including some designed to meet demand and others designed to stabilize the network. Since the 1930s, these services had been provided by regulated, vertically integrated utilities, which operated as quasi-monopolies. The most profitable was peaking power—electricity delivered when demand was highest. The less-lucrative stabilization services balanced electricity load and generation to maintain system frequency at 60 hertz, the standard for the United States. In a vertically integrated utility, peaking services essentially subsidized stabilization services.
With deregulation in the 1990s, these aggregated services were unbundled and commodified. In California, regulators separated generation from distribution and sold 40 percent of installed capacity to newly created independent power producers that specialized in peaking power. Grid-stabilization functions were reborn as “ancillary services.” Major utilities were compelled to purchase high-cost peaking power, and because retail prices were capped, they could not pass their costs on to consumers. Moreover, deregulation disincentivized the construction of new power plants. At the turn of the millennium, nearly 20 percent of the state’s generating capacity was idled for maintenance.
General Motors’ Impact debuted at the 1990 Los Angeles Auto Show. It was regarded as the most advanced electric vehicle of its era. Photo-illustration: Max-o-matic; photo source: Alec Brooks
The newly marketized grid was highly unstable, and in 2000 and 2001, things came to a head. Hot weather caused a demand spike, and the accompanying drought (the beginning of the multidecade southwestern megadrought) cut hydropower capacity. As Californians turned on their air conditioners, peaking capacity had to be kept in operation longer. Then market speculators got into the act, sending wholesale prices up 800 percent and bankrupting Pacific Gas & Electric. Under these combined pressures, grid reliability eroded, resulting in rolling blackouts.
With the grid crippled, ACP’s Gage contacted Kempton to discuss whether bidirectional EV power could help. Kempton identified frequency regulation as the optimal V2G market because it was the most profitable of the ancillary services, constituting about 80 percent of what the California Independent System Operator, the nonprofit set up to manage the deregulated grid, then spent on such services.
The result was a demonstration project, a task organized by Alec Brooks, manager of ACP’s tzero production. Like Rippel and Cocconi, Brooks was a Caltech graduate and part of the close-knit community of EV enthusiasts that emerged around the prestigious university. After earning a Ph.D. in civil engineering in 1981, Brooks had joined AeroVironment, where he managed the development of Sunraycer, an advanced solar-powered demonstration EV built for GM, and the Impact. He recruited Rippel and Cocconi for both jobs. During the 1990s, Brooks formed a team at AeroVironment that provided support for GM’s EV programs until he too tired of the corporate routine and joined ACP in 1999.
Before cofounding AC Propulsion, Alan Cocconi worked on Sunraycer, a solar-powered car for GM. Here, he’s testing the car’s motor-drive power electronics. Photo-illustration: Max-o-matic; photo source: Alec Brooks
Working with Gage and Kempton, and consulting with the ISO, Brooks set out to understand how the EV might function as a utility resource.
ACP adapted its second-generation AC-150 drivetrain, which had bidirectional capability, for this application. As Cocconi recalled, the bidirectional function had originally been intended for a different purpose. In the 1990s, batteries had far less capacity than they do today, and for the small community of EV users, the prospect of running out of juice and becoming stranded was very real. In such an emergency, a bidirectional EV with charge to spare could come to the rescue.
With funding from the California Air Resources Board, the team installed an AC-150 drive in a Volkswagen Beetle. The system converted AC grid power to DC power to charge the battery and could also convert DC power from the battery to AC power that could feed both external stand-alone loads and the grid. Over the course of the project, the group successfully demonstrated bidirectional EV power using simulated dispatch commands from the ISO’s computerized energy-management system.
This pair of graphs shows how AC Propulsion’s AC-150 drivetrain performed in a demonstration of grid frequency regulation. The magenta line in the upper graph tracks grid frequency centered around 60 hertz. The lower graph indicates power flowing between the grid and the drivetrain; a negative value means power is being drawn from the grid, while a positive value means power is being sent back to the grid.
Photo-illustration: Max-o-matic; photo source: Alec Brooks
The experiment demonstrated the feasibility of the vehicle-to-grid approach, yet it also revealed the enormous complexities involved in deploying the technology. One unpleasant surprise, Brooks recalled, came with the realization that the electricity crisis had artificially inflated the ancillary-services market. After California resolved the crisis—basically by re-regulating and subsidizing electricity—the bubble burst, making frequency regulation as a V2G service a much less attractive business proposition.
The prospect of integrating EV storage batteries into legacy grid systems also raised concerns about control. The computers responsible for automatically signaling generators to ramp up or down to regulate frequency were programmed to control large thermoelectric and hydroelectric plants, which respond gradually to signals. Batteries, by contrast, respond nearly instantaneously to commands to draw or supply power. David Hawkins, an engineer who served as a chief aide to the ISO’s vice president of operations and advised Brooks, noted that the responsiveness of batteries had unintended consequences when they were used to regulate frequency. In one experiment involving a large lithium-ion battery, the control computer fully charged or discharged the unit in a matter of minutes, leaving no spare capacity to regulate the grid.
In principle, this problem might have been solved with software to govern the charging and discharging. The main barrier to V2G in the early 2000s, it turns out, was that the battery EV would have to be massively scaled up before it could serve as a practical energy-storage resource. And the auto industry had just canceled the battery EV. In its place, automakers promised the fuel-cell electric car, a type of propulsion system that does not easily lend itself to bidirectional power flow.
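A governor of that sort could be as simple as a droop rule with state-of-charge limits. The sketch below is purely illustrative; the function, constants, and taper threshold are our assumptions, not ACP’s or the ISO’s design:

```python
def regulation_power(freq_hz, soc,
                     p_max_kw=15.0, droop_kw_per_hz=50.0,
                     soc_min=0.2, soc_max=0.9):
    """Illustrative V2G frequency-regulation droop controller.

    Sign convention follows the AC-150 demonstration graphs:
    positive = power sent to the grid, negative = power drawn from it.
    """
    # Under-frequency (below 60 Hz) means the grid needs power: discharge.
    p = (60.0 - freq_hz) * droop_kw_per_hz
    p = max(-p_max_kw, min(p_max_kw, p))  # respect the drivetrain's rating

    # Taper toward zero near the state-of-charge limits so the regulation
    # signal can never fully drain or fill the battery -- the failure mode
    # seen when a fast battery obeys a controller tuned for slow plants.
    span = soc_max - soc_min
    if p > 0:   # discharging: headroom above the SoC floor
        frac = max(0.0, (soc - soc_min) / span)
    else:       # charging: headroom below the SoC ceiling
        frac = max(0.0, (soc_max - soc) / span)
    return p * min(1.0, frac / 0.25)  # full power with >25% headroom left
```

At 59.9 Hz with a half-charged battery, this rule commands about 5 kW to the grid; at the state-of-charge floor it commands nothing, no matter how far the frequency sags.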
The dramatic revival of the battery EV in the late 2000s and early 2010s led by Tesla Motors and Nissan revived prospects for the EV as a power-grid resource. This EV renaissance spawned a host of R&D efforts in bidirectional EV power, including ECOtality and the Mid-Atlantic Grid Interactive Cars Consortium. The consortium, organized by Kempton in conjunction with PJM, the regional transmission organization responsible for much of the eastern United States, used a car equipped with an AC-150 drivetrain to further study the use of V2G in the frequency-regulation market.
Over time, however, the research focus in bidirectional EV applications shifted from the grid to homes and commercial buildings. In the wake of the Fukushima nuclear disaster in 2011, for instance, Nissan developed and marketed a vehicle-to-building (V2B) charging system that enabled its Leaf EV to provide backup power.
In 2001, AC Propulsion engineers installed an AC-150 drivetrain in a Volkswagen Beetle to demonstrate the feasibility of V2G technology for regulating frequency on the power grid. Photo-illustration: Max-o-matic; photo source: Alec Brooks
The automaker later entered an R&D partnership with Fermata Energy, a Virginia-based company that develops bidirectional EV power systems. Founded by the entrepreneur and University of Virginia researcher David Slutzky in 2010, Fermata considered and then ruled out the frequency-regulation market, on the grounds that it was too small and unscalable.
Slutzky now believes that early markets for bidirectional EV power will emerge in supplying backup power and supplementing peak loads for individual commercial buildings. Those applications will require institutional fleets of EVs. Slutzky and other proponents of EV power have been pressing for a more favorable regulatory environment, including access to the subsidies that states such as California offer to users of stationary storage batteries.
Advocates believe that V2G can help pay for EV batteries. While interest in this idea seems likely to grow as EVs proliferate, the prospect of electric car owners becoming power entrepreneurs appears more distant. Hawkins, the engineer who advised Brooks, holds that the main barriers to V2G are not so much technological as economic: Viable markets need to emerge. The everyday participant in V2G, he argues, would face the difficult task of attempting to arbitrage the difference between wholesale and retail prices while still paying the retail rate. In principle, EV owners could take advantage of the same feed-in tariffs and net-metering schemes designed to enable homeowners to sell surplus solar power back to the grid. But marketing rooftop solar power has proven more complicated and costly for suburbanites than initially assumed, and the same would likely hold true for EV power.
Another major challenge is how to balance the useful lifetime of EV batteries in transportation and non-vehicle applications. That question turns on understanding how EV batteries will perform and age in stationary-power roles. Users would hardly be further ahead, after all, if they substantially degraded their batteries in the act of paying them off. Grid managers could also face problems if they come to depend on EV batteries that prove unreliable or become unavailable as driving patterns change.
In short, the core conundrum of V2G is the conflict of interest that comes from repurposing privately owned automobiles as power plants. Scaling up this technology will require intimate collaboration between automaking and electricity-making, enterprises with substantially different revenue models and systems of regulation. At the moment, the auto industry does not have a clear interest in V2G.
On the other hand, rising electricity demand, concerns about fossil fuels, greenhouse gases, and climate change, and the challenges of managing intermittent renewable energy have all created new justifications for bidirectional EV power. With the proliferation of EVs over the last decade, more demonstrations of the technology are being staged for a host of applications—sometimes expressed as V2X, or vehicle-to-everything. Some automakers, notably Nissan and now Ford, already sell bidirectional EVs, and others are experimenting with the technology. Enterprises are emerging to equip and manage demonstrations of V2B, V2G, and V2X for utilities and big institutional users of electricity. Some ambitious pilot projects are underway, notably in the Dutch city of Utrecht.
Back in 2002, at the end of their experiment, the engineers at AC Propulsion concluded that what V2G really needed was a powerful institutional champion. They went on to make further important contributions to EV technology. Brooks and Rippel worked for the nascent Tesla Motors, while Cocconi continued at ACP until a cancer diagnosis led him to reevaluate his life. In the mid-2000s, Cocconi sold his stake in the company and devoted himself to aviation, his first love, developing remote-controlled solar-powered aircraft. The rebirth of the battery electric car in the 2010s and 2020s reaffirmed the efforts of these three visionary pioneers.
A strong V2G patron has yet to emerge. Nevertheless, the idea of an off-the-shelf energy storage unit that also provides transportation and pays for itself is likely to remain attractive enough to sustain ongoing interest. Who knows? The electric car might still one day become a power plant on wheels.
The author thanks Alec Brooks, Alan Cocconi, David Hawkins, David Slutzky, and Wally Rippel for sharing their experiences. Parts of this article are adapted from the author’s new book, Age of Auto Electric (MIT Press, 2022).
Three days before astronauts left on Apollo 8, the first-ever flight around the moon, NASA’s safety chief, Jerome Lederer, gave a speech that was at once reassuring and chilling. Yes, he said, the United States’ moon program was safe and well-planned—but even so, “Apollo 8 has 5,600,000 parts and one and one half million systems, subsystems, and assemblies. Even if all functioned with 99.9 percent reliability, we could expect 5,600 defects.”
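Lederer’s figure is simple expected-value arithmetic: with n parts, each failing with probability 1 − r, the expected number of defects is n(1 − r). A quick check of his numbers:

```python
# Expected defects among n independent parts with per-part reliability r.
n_parts = 5_600_000   # Apollo 8 part count, per Lederer
reliability = 0.999   # "99.9 percent reliability"

expected_defects = n_parts * (1 - reliability)
print(round(expected_defects))  # Lederer's 5,600
```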
The mission, in December 1968, was nearly flawless—a prelude to the Apollo 11 landing the next summer. But even today, half a century later, engineers wrestle with the sheer complexity of the machines they build to go to space. NASA’s Artemis I, its Space Launch System rocket mandated by Congress in 2010, endured a host of delays before it finally launched in November 2022. And Elon Musk’s SpaceX may be lauded for its engineering acumen, but it struggled for six years before its first successful flight into orbit.
Relativity envisions 3D-printing facilities someday on the Martian surface, fabricating much of what people from Earth would need to live there.
Is there a better way? An upstart company called Relativity Space is about to try one. Its Terran 1 rocket, the company says, has about a tenth as many parts as comparable launch vehicles do, because it is made through 3D printing. Instead of bending metal and milling and welding, engineers program a robot to deposit layers of metal alloy in place.
Relativity’s first rocket, the company says, is ready to go from Launch Complex 16 at Cape Canaveral, Fla. When it lifts off, the company says it will stream the launch on YouTube.
Artist’s concept of Relativity’s planned Terran R rocket. The company says it should be able to carry a 20,000-kilogram payload into low Earth orbit. Relativity
“Over 85 percent of the rocket by mass is 3D printed,” said Scott Van Vliet, Relativity’s head of software engineering. “And what’s really cool is not only are we reducing the amount of parts and labor that go into building one of these vehicles over time, but we’re also reducing the complexity, we’re reducing the chance of failure when you reduce the part count, and you streamline the build process.”
Relativity says it can put together a Terran rocket in two months, compared to two years for some conventionally built ones. The speed and cost of making a prototype—say, for wind-tunnel testing—are reduced because you can simply tell the printer to make a scaled-down model. There is less waste because the process is additive. And if something needs to be modified, you reprogram the 3D printer instead of undertaking slow, expensive retooling.
Investors have noticed. The company says financial backers have included BlackRock, Y Combinator, and the entrepreneur Mark Cuban.
“If you walk into any rocket factory today other than ours,” said Josh Brost, the company’s head of business development, “you still will see hundreds of thousands of parts coming from thousands of vendors, and still being assembled using lots of touch labor and lots of big-fix tools.”
Terran 1, rated as capable of putting a 1,250-kilogram payload in low Earth orbit, is mainly intended as a test bed. Relativity has signed up a variety of future customers for satellite launches, but the first Terran 1 (“Terran” means “earthling”) will not carry a paying customer’s satellite. The first flight has been given the playful name “Good Luck, Have Fun”—GLHF for short. Eventually, if things are going well, Relativity will build a larger booster, called Terran R, which the company hopes will compete with the SpaceX Falcon 9 for launches of up to 20,000 kg. Relativity says the Terran R should be fully reusable, including the upper stage—something that other commercial launch companies have not accomplished. In current renderings, the rocket is, as the company puts it, “inspired by nature,” shaped to slice through the atmosphere as it ascends and comes back for recovery.
A number of Relativity’s top people came from Musk’s SpaceX or Jeff Bezos’s space company, Blue Origin, and, like Musk, they say their vision is a permanent presence on Mars. Brost calls it “the long-term North Star for us.” They say they can envision 3D-printing facilities someday on the Martian surface, fabricating much of what people from Earth would need to live there. “For that to happen,” says Brost, “you need to have manufacturing capabilities that are autonomous and incredibly flexible.”
Relativity’s fourth-generation Stargate 3D printer. Relativity
Just how Relativity will do all these things is a work in progress. The company says its 3D technology will help it work iteratively—finding mistakes as it goes, then correcting them as it prints the next rocket, and the next, and so on.
“In traditional manufacturing, you have to do a ton of work up front and have a lot of the design features done well ahead of time,” says Van Vliet. “You have to invest in fixed tooling that can often take years to build before you’ve actually developed an article for your launch vehicle. With 3D printing, additive manufacturing, we get to building something very, very quickly.”
The next step is to get the first rocket off the pad. Will it succeed? Brost says a key test will be getting through max q—the point of maximum dynamic pressure on the rocket as it accelerates through the atmosphere before the air around it thins out.
“If you look at history, at new space companies doing large rockets, there’s not a single one that’s done their first rocket on their first try. It would be quite an achievement if we were able to achieve orbit on our inaugural launch,” says Brost.
“I’ve been to many launches in my career,” he says, “and it never gets less exciting or nerve wracking to me.”
Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon.
“Armageddon is big and noisy and stupid and shameless, and it’s going to be huge at the box office,” wrote Jay Carr of the Boston Globe.
Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live.
DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kilograms, hitting at 22,000 km/h, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs.
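For a sense of scale, conservation of momentum gives a rough bound on the nudge. The asteroid’s mass is not given here, so the sketch below assumes a ballpark figure for a 160-meter rubble pile; ejecta thrown off by the impact can multiply the effect (the so-called momentum-enhancement factor, β), so treat this as an order-of-magnitude estimate only:

```python
# Rough momentum-conservation estimate of DART's velocity nudge.
m_sc = 600.0              # spacecraft mass, kg (quoted above)
v_impact = 22_000 / 3.6   # 22,000 km/h converted to m/s
m_ast = 5.0e9             # ASSUMED Dimorphos mass, kg (not in the article)
beta = 1.0                # momentum-enhancement factor; ejecta can push this above 1

dv = beta * m_sc * v_impact / m_ast  # change in asteroid speed, m/s
print(f"delta-v ≈ {dv * 1000:.2f} mm/s")
```

Even a nudge this small, applied years in advance, adds up to a large miss distance.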
“Maybe once a century or so, there’ll be an asteroid sizeable enough that we’d like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of planetary defense officer at NASA.
“If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.”
So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone.
The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—near-Earth objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium.
The DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIACube cubesat will fly in formation to take images of the impact. Johns Hopkins APL/NASA
But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test.
NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’s speed by perhaps a few centimeters per second.
The DART spacecraft, a 1-meter cube with two long solar panels, is elegantly simple, equipped with a telescope called DRACO, hydrazine maneuvering thrusters, a xenon-fueled ion engine, and a navigation system called SMART Nav. It was launched by a SpaceX rocket in November. About 4 hours and 90,000 km before the hoped-for impact, SMART Nav will take over control of the spacecraft, using optical images from the telescope. Didymos, the larger object, should be a point of light by then; Dimorphos, the intended target, will probably not appear as more than one pixel until about 50 minutes before impact. DART will send one image per second back to Earth, but the spacecraft is autonomous; signals from the ground, 38 light-seconds away, would be useless for steering as the ship races in.
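The quoted figures hang together: roughly 11 million km corresponds to about 37 light-seconds one way (the 38-second figure reflects the rounded distance), and 90,000 km covered in about 4 hours matches the quoted closing speed:

```python
# Sanity checks on the approach numbers quoted above.
c_km_s = 299_792.458      # speed of light, km/s
distance_km = 11_000_000  # approximate Earth-Dimorphos distance

one_way_delay = distance_km / c_km_s
print(f"one-way light time ≈ {one_way_delay:.0f} s")  # ~37 s

closing_speed = 90_000 / 4  # km covered per hour on final approach
print(f"closing speed ≈ {closing_speed:,.0f} km/h")   # 22,500 km/h
```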
The DART spacecraft separated from its SpaceX Falcon 9 launch vehicle, 55 minutes after liftoff from Vandenberg Space Force Base, in California, 24 November 2021. In this image from the rocket, the spacecraft had not yet unfurled its solar panels. NASA
What’s more, nobody knows the shape or consistency of little Dimorphos. Is it a solid boulder or a loose cluster of rubble? Is it smooth or craggy, round or elongated? “We’re trying to hit the center,” says Evan Smith, the deputy mission systems engineer at the Johns Hopkins Applied Physics Laboratory, which is running DART. “We don’t want to overcorrect for some mountain or crater on one side that’s throwing an odd shadow or something.”
So on final approach, DART will cover 800 km without any steering. Thruster firings could blur the last images of Dimorphos’s surface, which scientists want to study. Impact should be imaged from about 50 km away by an Italian-made minisatellite, called LICIACube, which DART released two weeks ago.
“In the minutes following impact, I know everybody is going to be high-fiving on the engineering side,” said Tom Statler, DART’s program scientist at NASA, “but I’m going to be imagining all the cool stuff that is actually going on on the asteroid, with a crater being dug and ejecta being blasted off.”
There is, of course, a possibility that DART will miss, in which case there should be enough fuel on board to allow engineers to go after a backup target. But an advantage of the Didymos-Dimorphos pair is that it should help in calculating how much effect the impact had. Telescopes on Earth (plus the Hubble and Webb space telescopes) may struggle to measure infinitesimal changes in the orbit of Dimorphos around the sun; it should be easier to see how much its orbit around Didymos is affected. The simplest measurement may be of the changing brightness of the double asteroid, as Dimorphos moves in front of or behind its partner, perhaps more quickly or slowly than it did before impact.
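A rough calculation shows why the mutual orbit is the easier place to look. For a near-circular orbit, an along-track speed change Δv shifts the period by about 3(Δv/v)T. The period and separation below are assumed values—commonly cited pre-impact estimates, not figures from this article:

```python
import math

# Why the mutual orbit is easier to measure: a tiny along-track
# delta-v shifts the period by minutes. The orbital figures are
# ASSUMED pre-impact estimates for Dimorphos around Didymos.
T = 11.9 * 3600          # assumed orbital period, s
r = 1190.0               # assumed orbital radius, m
v = 2 * math.pi * r / T  # orbital speed around Didymos, m/s (~0.17)

dv = 0.001               # a 1 mm/s along-track nudge
dT = 3 * (dv / v) * T    # first-order period change, near-circular orbit
print(f"orbital speed ≈ {v:.2f} m/s, period shift ≈ {dT / 60:.0f} minutes")
```

A shift of minutes in an orbit of hours shows up clearly in the timing of the brightness dips.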
“We are moving an asteroid,” said Statler. “We are changing the motion of a natural celestial body in space. Humanity’s never done that before.”
The race to deliver cellular calls from space passes two milestones this month, following one major announcement last month. First, Apple will offer emergency satellite messaging on two of its latest iPhone models, the company announced on Wednesday. Second, AST SpaceMobile plans a launch on Saturday, 10 September, of an experimental satellite to test full-fledged satellite 5G service. In addition, T-Mobile USA and SpaceX intend to offer their own messaging and limited data service via the second generation of SpaceX’s Starlink satellite constellation, as the two companies announced on 25 August.
Each contender is taking a different approach to space-based cellular service. The Apple offering uses the existing satellite bandwidth Globalstar once used for messaging offerings, but without the need for a satellite-specific handset. The AST project and another company, Lynk Global, would use a dedicated network of satellites with larger-than-normal antennas to produce a 4G, 5G, and someday 6G cellular signal compatible with any existing 4G-compatible phone (as detailed in other recent IEEE Spectrum coverage of space-based 5G offerings). Assuming regulatory approval is forthcoming, the technology would work first in equatorial regions and then across more of the planet as these providers expand their satellite constellations. T-Mobile and Starlink’s offering would work in the former PCS band in the United States. SpaceX, like AST and Lynk, would need to negotiate access to spectrum on a country-by-country basis.
Apple’s competitors are unlikely to see commercial operations before 2024.
“Regulators have not decided on the power limits from space, what concerns there are about interference, especially across national borders. There’s a whole bunch of regulatory issues that simply haven’t been thought about to date.”
—Tim Farrar, telecommunications consultant
The T-Mobile–Starlink announcement is “in some ways an endorsement” of AST and Lynk’s proposition, and “in other ways a great threat,” says telecommunications consultant Tim Farrar of Tim Farrar Associates in Menlo Park, Calif. AST and Lynk have so far told investors they expect their national mobile network operator partners to charge per use or per day, but T-Mobile announced that it plans to include satellite messaging in the 1,900-megahertz range in its existing service plans. Apple said its Emergency SOS via Satellite service would be free for the first two years for U.S. and Canadian iPhone 14 buyers, but did not say what it would cost after that. For now, the Globalstar satellites it is using cannot offer the kind of broadband bandwidth AST has promised, but Globalstar has reported to investors orders for new satellites that might offer new capabilities, including new gateways.
Even under the best conditions—a clear view of the sky—users will need 15 seconds to send a message via Apple’s service. They will also have to follow onscreen guidance to keep the device pointed at the satellites they are using. Light foliage can cause the same message to take more than a minute to send. Ashley Williams, a satellite engineer at Apple who recorded the service’s announcement, also mentioned a data-compression algorithm and a series of rescue-related suggested auto-replies intended to minimize the amount of data that users would need to send during a rescue.
Meanwhile, AST SpaceMobile says it aims to launch an experimental satellite Saturday, 10 September, to test its cellular broadband offering.
Last month’s T-Mobile-SpaceX announcement “helped the world focus attention on the huge market opportunity for SpaceMobile, the only planned space-based cellular broadband network. BlueWalker 3, which has a 693 sq ft array, is scheduled for launch within weeks!” tweeted AST SpaceMobile CEO Abel Avellan on 25 August. The size of the array matters because AST SpaceMobile has so far indicated in its applications for experimental satellite licenses that it intends to use lower radio frequencies (700–900 MHz) with less propagation loss but that require antennas much larger than conventional satellites carry.
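The trade-off Avellan is pointing at can be sketched with the free-space path-loss formula: loss grows with frequency, but matching a given antenna gain at a lower frequency requires a physically larger aperture. The 500-km slant range below is an assumed low-Earth-orbit distance, for illustration only:

```python
import math

# Free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f/c).
# Lower frequencies lose less over the same path, but need
# larger antennas for the same gain.
def fspl_db(distance_m: float, freq_hz: float) -> float:
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

d = 500e3  # ASSUMED slant range to a low-Earth-orbit satellite, m
low, high = fspl_db(d, 850e6), fspl_db(d, 1900e6)
print(f"850 MHz: {low:.1f} dB, 1900 MHz: {high:.1f} dB, "
      f"advantage: {high - low:.1f} dB")
```

Roughly 7 dB less path loss at 850 MHz than at 1,900 MHz—a meaningful margin for a phone-size transmitter talking to orbit.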
The size of the array will also make it more reflective, which has raised concerns among astronomers. The size of Starlink’s planned constellation has already provoked complaints among astronomers because it will interfere with their ability to observe space. Sky & Telescope magazine published on 1 September a call for both professional and amateur astronomers to observe the growing constellations of satellites to document the interference. Professional astronomy societies have lobbied U.S. government agencies and Congress on the issue and met with SpaceX officials in May to discuss a recent change that brightened satellites by 0.5 visual magnitudes.
So far government agencies have issued licenses for thousands of low-Earth-orbiting satellites, which have the biggest impact on astronomers. Even with the constellations starting to form, satellite-cellular telecommunications companies are still open to big regulatory risks. “Regulators have not decided on the power limits from space, what concerns there are about interference, especially across national borders. There’s a whole bunch of regulatory issues that simply haven’t been thought about to date,” Farrar says.
For a hiker with a twisted ankle, a messaging service that takes a while to connect and twinkles in and out of service as satellites fly by may be better than nothing, but early space-based cellular will not be a seamless way to connect to video calls from out at sea.
“User cooperation is in my view the single most critical aspect of whether this service will attract mass-market usage or people willing to pay a significant amount for this service,” Farrar says.
Update 5 Sept.: For now, NASA’s giant Artemis I remains on the ground after two launch attempts scrubbed by a hydrogen leak and a balky engine sensor. Mission managers say Artemis will fly when everything's ready—but haven't yet specified whether that might be in late September or in mid-October.
“When you look at the rocket, it looks almost retro,” said Bill Nelson, the administrator of NASA. “Looks like we’re looking back toward the Saturn V. But it’s a totally different, new, highly sophisticated—more sophisticated—rocket, and spacecraft.”
Artemis, powered by the Space Launch System rocket, is America’s first attempt to send astronauts to the moon since Apollo 17 in 1972, and technology has taken giant leaps since then. On Artemis I, the first test flight, mission managers say they are taking the SLS, with its uncrewed Orion spacecraft up top, and “stressing it beyond what it is designed for”—the better to ensure safe flights when astronauts make their first landings, currently targeted to begin with Artemis III in 2025.
But Nelson is right: The rocket is retro in many ways, borrowing heavily from the space shuttles America flew for 30 years, and from the Apollo-Saturn V.
Much of Artemis’s hardware is refurbished: Its four main engines, and parts of its two strap-on boosters, all flew before on shuttle missions. The rocket’s apricot color comes from spray-on insulation much like the foam on the shuttle’s external tank. And the large maneuvering engine in Orion’s service module is actually 40 years old—used on 19 space shuttle flights between 1984 and 1992.
“I have a name for missions that use too much new technology—failures.”
—John Casani, NASA
Perhaps more important, the project inherits basic engineering from half a century of spaceflight. Just look at Orion’s crew capsule—a truncated cone, somewhat larger than the Apollo Command Module but conceptually very similar.
Old, of course, does not mean bad. NASA says there is no need to reinvent things engineers got right the first time.
“There are certain fundamental aspects of deep-space exploration that are really independent of money,” says Jim Geffre, Orion vehicle-integration manager at the Johnson Space Center in Houston. “The laws of physics haven’t changed since the 1960s. And capsule shapes happen to be really good for coming back into the atmosphere at Mach 32.”
Roger Launius, who served as NASA’s chief historian from 1990 to 2002 and as a curator at the Smithsonian Institution from then until 2017, tells of a conversation he had with John Casani, a veteran NASA engineer who managed the Voyager, Galileo, and Cassini probes to the outer planets.
“I have a name for missions that use too much new technology,” he recalls Casani saying. “Failures.”
The Artemis I flight is slated for about six weeks. (Apollo 11 lasted eight days.) The ship roughly follows Apollo’s path to the moon’s vicinity, but then puts itself in what NASA calls a distant retrograde orbit. It swoops within 110 kilometers of the lunar surface for a gravity assist, then heads 64,000 km out—taking more than a month but using less fuel than it would in closer orbits. Finally, it comes home, reentering the Earth’s atmosphere at 11 km per second, slowing itself with a heatshield and parachutes, and splashing down in the Pacific not far from San Diego.
If all four quad-redundant flight computer modules fail, there is a fifth, entirely separate computer onboard, running different code to get the spacecraft home.
“That extra time in space,” says Geffre, “allows us to operate the systems, give more time in deep space, and all those things that stress it, like radiation and micrometeoroids, thermal environments.”
There are, of course, newer technologies on board. Orion is controlled by two vehicle-management computers, each composed of two flight computer modules (FCMs) to handle guidance, navigation, propulsion, communications, and other systems. The flight control system, Geffre points out, is quad-redundant; if at any point one of the four FCMs disagrees with the others, it will take itself offline and, in a 22-second process, reset itself to make sure its outputs are consistent with the others’. If all four FCMs fail, there is a fifth, entirely separate computer running different code to get the spacecraft home.
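The voting-and-reset behavior Geffre describes can be sketched in a few lines—this is an illustrative toy, not Orion’s flight software:

```python
# Illustrative sketch of the quad-redundancy idea described above.
# Each module computes an output; one that disagrees with the
# majority takes itself offline for a reset.
from collections import Counter

def vote(outputs: dict, tolerance: float = 1e-6):
    """Return (majority value, modules flagged offline)."""
    # Group outputs that agree to within the tolerance.
    buckets = Counter(round(v / tolerance) for v in outputs.values())
    majority_key, _ = buckets.most_common(1)[0]
    offline = [name for name, v in outputs.items()
               if round(v / tolerance) != majority_key]
    return majority_key * tolerance, offline

fcms = {"FCM-1": 3.14159, "FCM-2": 3.14159,
        "FCM-3": 3.14159, "FCM-4": 2.71828}
value, offline = vote(fcms)
print(f"majority output {value:.5f}; offline for reset: {offline}")
```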
Guidance and navigation, too, have advanced since the sextant used on Apollo. Orion uses a star tracker to determine its attitude, imaging stars and comparing them to an onboard database. And an optical navigation camera shoots Earth and the moon so that guidance software can determine their distance and position and keep the spacecraft on course. NASA says it’s there as backup, able to get Orion to a safe splashdown even if all communication with Earth has been lost.
But even those systems aren’t entirely new. Geffre points out that the guidance system’s architecture is derived from the Boeing 787. Computing power in deep space is limited by cosmic radiation, which can corrupt the output of microprocessors beyond the protection of Earth’s atmosphere and magnetic field.
Beyond that is the inevitable issue of cost. Artemis is a giant project, years behind schedule, started long before NASA began to buy other launches from companies like SpaceX and Rocket Lab. NASA’s inspector general, Paul Martin, testified to Congress in March that the first four Artemis missions would cost US $4.1 billion each—“a price tag that strikes us as unsustainable.”
Launius, for one, rejects the argument that government is inherently wasteful. “Yes, NASA’s had problems in managing programs in the past. Who hasn’t?” he says. He points out that Blue Origin and SpaceX have had plenty of setbacks of their own—they’re just not obliged to be public about them. “I could go on and on. It’s not a government thing per se and it’s not a NASA thing per se.”
So why return to the moon with—please forgive the pun—such a retro rocket? Partly, say those who watch Artemis closely, because it’s become too big to fail, with so much American money and brainpower invested in it. Partly because it turns NASA’s astronauts outward again, exploring instead of maintaining a space station. Partly because new perspectives could come of it. And partly because China and Russia have ambitions in space that threaten America’s.
“Apollo was a demonstration of technological verisimilitude—to the whole world,” says Launius. “And the whole world knew then, as they know today, that the future belongs to the civilization that can master science and technology.”
Update 7 Sept.: Artemis I has been on launchpad 39B, not 39A as previously reported, at Kennedy Space Center.