Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
CoRL 2022: 14–18 December 2022, Auckland, New Zealand
Enjoy today’s videos!
The Real Robotics Lab at the University of Leeds presents two Chef quadruped robots, remotely controlled by a single operator, making a tasty burger as a team. The operator uses a gamepad to control the robots’ walking and a wearable motion-capture system to control the robotic arms mounted on the legged robots.
We’re told that these particular quadrupeds are vegans, and that the vegan burgers they make are “very delicious.”
State-of-the-art frame-interpolation methods generate intermediate frames by inferring object motion from consecutive keyframes. In the absence of additional information, first-order approximations, i.e., optical flow, must be used, but this choice restricts the types of motion that can be modeled, leading to errors in highly dynamic scenarios. Event cameras are novel sensors that address this limitation by providing auxiliary visual information in the blind time between frames.
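The abstract doesn’t include code, but the first-order model it describes is easy to sketch: each pixel is assumed to move linearly along its optical-flow vector, so an intermediate frame at time t is produced by warping a keyframe along t times the flow. A minimal NumPy sketch, assuming a precomputed flow field; the function name and nearest-neighbour backward warping are illustrative choices, not from the paper:

```python
import numpy as np

def interpolate_first_order(frame0, flow, t):
    """Warp frame0 toward an intermediate time t in [0, 1] using a
    per-pixel optical-flow field (first-order, i.e., linear, motion model).
    frame0: (H, W) grayscale image; flow: (H, W, 2) displacement in pixels.
    """
    h, w = frame0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # First-order assumption: each pixel travels linearly along its flow
    # vector, so the intermediate pixel (y, x) came from (y, x) - t * flow.
    src_x = np.clip(np.round(xs - t * flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - t * flow[..., 1]).astype(int), 0, h - 1)
    return frame0[src_y, src_x]  # backward warp, nearest-neighbour sampling
```

This is exactly the restriction the paper points to: any motion that is not linear between the two keyframes (acceleration, rotation, occlusion) is mis-modeled, which is the gap the event-camera data is meant to fill.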
Loopy is a robotic swarm of one-degree-of-freedom (DOF) agents: a closed loop made of 36 Dynamixel servos. Each agent (servo) makes its own local decisions based on interactions with its two neighbors. In this video, Loopy is trying to go from an arbitrary initial shape to a goal shape (a Flying WV).
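The video doesn’t spell out Loopy’s control law, but a decentralized rule of the kind described, where each servo updates using only its own error and its two ring neighbors’ errors, can be sketched as follows. The gains and the specific update are illustrative assumptions, not the lab’s actual controller:

```python
import numpy as np

def loopy_step(angles, goal, gain_self=0.3, gain_nbr=0.1):
    """One decentralized update for a closed loop of 1-DOF agents.
    Each servo sees only its own shape error and the errors of its two
    ring neighbors (np.roll wraps around, closing the loop); there is
    no global controller coordinating the 36 servos.
    """
    err = goal - angles
    left = np.roll(err, 1)    # error of neighbor i-1
    right = np.roll(err, -1)  # error of neighbor i+1
    return angles + gain_self * err + gain_nbr * (left + right)
```

Iterating this rule drives an arbitrary initial shape toward the goal shape, with the neighbor terms smoothing the transition so adjacent servos move in a coordinated way.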
A collaboration between Georgia Tech Robotic Musicianship Group and Avshalom Pollak Dance Theatre. The robotic arms respond to the dancers’ movement and to the music. Our goal is for both humans and robots to be surprised and inspired by each other. If successful, both humans and robots will be dancing differently than they did before they met.
The public-private partnership between NASA and Redwire will demonstrate the ability of a small spacecraft—OSAM-2 (On-Orbit Servicing, Assembly, and Manufacturing 2)—to manufacture and assemble spacecraft components in low Earth orbit.
Inspired by fireflies, researchers create insect-scale robots that can emit light when they fly, which enables motion tracking and communication.
The ability to emit light also brings these microscale robots, which weigh barely more than a paper clip, one step closer to flying on their own outside the lab. These robots are so lightweight that they can’t carry sensors, so researchers must track them using bulky infrared cameras that don’t work well outdoors. Now, they’ve shown that they can track the robots precisely using the light they emit and just three smartphone cameras.
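The article doesn’t describe the tracking math, but localizing a light source seen by several calibrated cameras is classically done by least-squares ray triangulation: find the 3-D point closest to all the back-projected rays. A hedged sketch, in which the camera poses and the solver choice are assumptions for illustration rather than the researchers’ actual pipeline:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of camera rays.
    origins: (N, 3) camera centers; directions: (N, 3) unit vectors
    pointing from each camera toward the detected light.
    Returns the 3-D point minimizing summed squared distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(origins, directions):
        # Projector onto the plane perpendicular to the ray direction:
        # ||P (x - c)||^2 is the squared distance from x to the ray.
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ c
    return np.linalg.solve(A, b)
```

With three smartphone cameras the system is overdetermined, which is what lets the blinking light be located precisely despite each individual camera only providing a bearing.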
We present a new gripper and exploration approach that uses a finger with very low reflected inertia for probing and then grasping objects. The finger employs a transparent transmission, resulting in a light touch when contact occurs. Experiments show that the finger can safely move faster into contacts than industrial parallel jaw grippers or even most force-controlled grippers with backdrivable transmissions. This property allows rapid proprioceptive probing of objects.
Researchers at ETH Zurich have developed a wearable textile exomuscle that serves as an extra layer of muscles. They aim to use it to increase the upper body strength and endurance of people with restricted mobility.
VISTA is a data-driven, photorealistic simulator for autonomous driving. It can simulate not just live video but LiDAR data and event cameras, and also incorporate other simulated vehicles to model complex driving situations.
The Oxford professor has studied our circadian rhythms for decades – and says much of what we think we know is wrong
Born in Aldershot in 1959, Russell Foster is a professor of circadian neuroscience at Oxford and the director of the Nuffield Laboratory of Ophthalmology. For his discovery of non-rod, non-cone ocular photoreceptors he received numerous awards including the Zoological Society scientific medal. His latest book – the first he has written without a co-author – is Life Time: The New Science of the Body Clock, and How It Can Revolutionize Your Sleep and Health.
What is circadian neuroscience? It’s the fundamental understanding of how our biology ticks on a 24-hour basis. But also it’s bigger than that – it’s an understanding of how different structures interact within the brain and how different genes and their protein products generate a complex behaviour. And that is then embedded throughout our entire biology.
What Polar Bear Genomes May Reveal About Life in a Low-Ice Arctic (www.wired.com, 24 Jun 2022). Two new studies use whole-genome sequencing to explore how the animals have fared in warmer conditions, raising questions about climate and adaptation.
What the DNA of Ancient Humans Reveals About Pandemics (www.wired.com, 23 Jun 2022). Genomic analysis of ancient remains has shed light on the origins of the Black Death and offers insights into the coevolution of humans and diseases.
Alex Hern reports on recent developments in artificial intelligence and how a Google employee became convinced an AI chatbot was sentient
Google software engineer Blake Lemoine was put on leave by his employer after claiming that the company had produced a sentient artificial intelligence and posting its thoughts online. Google said it suspended him for breaching confidentiality policies.
Earlier this month, Lemoine published conversations between him and LaMDA (Language Model for Dialogue Applications), Google’s chatbot development system. He argued that LaMDA was a being, with the intelligence of a child, who should be freed from Google’s ownership.
15 Best Early Amazon Prime Day Deals (www.wired.com, 23 Jun 2022). From TVs and smartphones to laptops, there are already a few great discounts on some of our favorite products ahead of the retailer’s big sales event.
“I thought if I travelled to Australia, I could earn more money and lead a better life,” Jayan* says from his modest home, in a coastal village in the north of Sri Lanka.
A fisherman by trade and a member of Sri Lanka’s Tamil ethnic minority, Jayan is familiar with boats and was asked to fix the ageing vessel that was to take him to Australia, alongside a desperate handful of men, women and children.
Exclusive: Pay review body recommendation higher than government’s figures but less than unions want
NHS staff should receive a pay rise of at least 4%, independent experts have advised, setting healthcare workers on a collision course with ministers who have set a firm maximum of 3%.
The pay review body (PRB) will recommend that NHS personnel should get an increase this year of somewhere between 4% and 5%, the Guardian understands, despite warnings from the government that heeding such advice would break the bank.
Gold futures edged up on Friday, but posted a loss of 0.6% for the week, as copper prices suffered their largest weekly drop since June of last year. "Gold remains trapped in a range as traders await to see if the latest inflation reports will force the [Federal Reserve] into committing to more massive rate hikes beyond the July policy meeting," said Edward Moya, senior market analyst at OANDA. Meanwhile, worries about an economic recession continued to feed expectations for a drop in copper demand. August gold climbed by 50 cents, or less than 0.1%, to settle at $1,830.30 an ounce. July copper settled at $3.7405 a pound, down a fraction of a cent for the session and losing 6.8% for the week.
Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.
Baker Hughes on Friday reported that the number of active U.S. rigs drilling for oil was up by 10 to 594 this week. That was the biggest weekly rise since the week ended May 20, Baker Hughes data show. The total active U.S. rig count, which includes those drilling for natural gas, climbed by 13 to 753, according to Baker Hughes. Oil prices continued to trade higher. August West Texas Intermediate crude was up $2.35, or 2.3%, at $106.62 a barrel on the New York Mercantile Exchange.
Aurora Cannabis rallied 6% in premarket trades on Friday after Cantor Fitzgerald analyst Pablo Zuanic upgraded the Canadian cannabis company to overweight from neutral and raised his 12-month price target on the stock to C$4.05 ($3.12) from C$3.90. Zuanic said Aurora Cannabis could benefit from growth in the legal cannabis market in Europe, as one of only two North American companies with a license to grow cannabis in Germany, along with Tilray Brands Inc. "With questions as to whether imports will be allowed in the future German rec market and likely a limited number of new domestic production licenses issued to supply that future rec market, we believe both Tilray and Aurora should be well positioned," Zuanic said. He described Aurora as an "attractively valued pure cannabis global play."
Emergent BioSolutions Inc. said Friday the U.S. Food and Drug Administration has accepted for review its application for approval of its anthrax vaccine. Dubbed AV7909, the vaccine is intended for people aged 18 through 65 who have been exposed to anthrax, for use along with recommended antibacterial drugs. A decision is expected by April 2023. "As we progress toward licensure of AV7909, which is designed to follow a two-dose immunization schedule and to elicit a faster immune response, we redouble our efforts to support the government's overall preparedness and response strategy for large-scale emergencies involving anthrax and other threats to public health," said Kelly Warfield, Emergent senior vice president for R&D, in a statement. The company has submitted data from a phase 3 clinical trial of the vaccine, as well as from a phase 2 trial. Emergent shares reversed early losses to trade up 2% premarket and have fallen 26% in the year to date, while the S&P 500 has fallen 20%.
Earlier this month, I and others wrote a letter to Congress, basically saying that cryptocurrencies are a complete and total disaster, and urging them to regulate the space. Nothing in that letter is out of the ordinary, and it is in line with what I wrote about blockchain in 2019. In response, Matthew Green has written—not really a rebuttal—but “a general response to some of the more common spurious objections…people make to public blockchain systems.” In it, he makes several broad points:
Yes, current proof-of-work blockchains like bitcoin are terrible for the environment. But there are other modes like proof-of-stake that are not.
The Charming Bloke Who Dominates GeoGuessr (www.newyorker.com, 24 Jun 2022). Tom Davies has become a beloved icon of the Google Maps guessing game.
Google Warns of New Spyware Targeting iOS and Android Users (www.wired.com, 23 Jun 2022). The spyware has been used to target people in Italy, Kazakhstan, and Syria, researchers at Google and Lookout have found.
How to Use Markdown in Google Docs (www.wired.com, 23 Jun 2022). This writer-friendly shorthand now has a home in Google’s productivity suite, but it’s not without drawbacks.
Transport correspondent Gwyn Topham reports on the rail strike negotiations, and economics columnist Aditya Chakrabortty analyses the political response from the Conservatives and Labour
This week, 40,000 rail workers joined the biggest nationwide rail strike in the UK for 30 years. On Thursday, the RMT union enters its second day of industrial action, which will see train services suspended across Britain. The union argues that rail workers, like many workers, are struggling to keep up with the spiralling cost of living.
At PMQs on Wednesday, Boris Johnson claimed it was not the government’s place to intervene in the negotiations between Network Rail, train operators and the union. He said it was “up to the railway companies to negotiate. That is their job.” But is it that simple? The Guardian and Observer’s transport correspondent, Gwyn Topham, tells Nosheen Iqbal how the dispute got this far, and what it would take for both sides to come to an agreement.
Two bills attempting to reduce the power of Internet monopolies are currently being debated in Congress: S. 2992, the American Innovation and Choice Online Act; and S. 2710, the Open App Markets Act. Reducing the power of tech monopolies would do more to “fix” the Internet than any other single action, and I am generally in favor of them both. (The Center for American Progress wrote a good summary and evaluation of them. I have written in support of the bill that would force Google and Apple to give up their monopolies on their phone app stores.)...
Abortion rights groups grasp for a post-Roe strategy (www.washingtonpost.com, 25 Jun 2022). It’s a precarious moment for the abortion rights movement, which must grapple with how to divide its resources and money during the biggest political fight of its life since the 1973 Roe v. Wade ruling legalizing abortion.
Ukraine pulls back from Severodonetsk (www.washingtonpost.com, 24 Jun 2022). Ukraine’s wins off the battlefield stood in sobering contrast to developments in the country, where Russian troops have made further advances.
The once mysterious show-as-a-puzzlebox has, four seasons in, become nothing but an endless MacGuffin hunt. How maddening for its seven remaining fans
American TV has a very weird and specific problem right now: every child exists on TV simply to reflect an adult’s trauma back at them, and nothing else. They droop around the house with a teddy bear asking weird, direct questions at bedtime like: “Mommy … you seem sad.” Response: “I am, sweetie. [Crying, catching the cry, lowering to a whisper] I am.” I’ll admit I don’t interact with children very often (I am personally childless), but is this how they talk now? These scenes are unbearable, and there is a plague of them. This is just one of my many, many issues with the new series of Westworld (Monday, 9pm, Sky Atlantic).
I know, I’d forgotten about Westworld too. I recently rewatched that first glorious pilot episode on a plane, and I was reminded just what a spectacular piece of TV it really was: Anthony Hopkins quietly chilling while drinking whiskey with a robot, James Marsden’s heartbreaker eyes, Ed Harris’s musky villainy, that small-but-huge moment when Evan Rachel Wood figures it all out.
Roe’s gone. Now antiabortion lawmakers want more (www.washingtonpost.com, 25 Jun 2022). On the heels of their greatest victory, antiabortion activists are eager to capitalize on their momentum by enshrining constitutional abortion bans and pushing Congress to pass a national one, prohibiting abortion pills, and limiting people’s ability to get abortions across state lines.
The festival enters its second day with Paul McCartney headlining, plus sets from Megan Thee Stallion, Haim, Skunk Anansie, and many others – follow along for reviews, photography and more
Pyramid stage Powerhouse West African supergroup Les Amazones d’Afrique open day two of Glastonbury with a set that combines striking political activism with desert blues, dub, and crunchy, glitched-out electronics. These four women, who sing about the harsh realities of gender inequality and are vocal anti-female genital mutilation activists, are the perfect group to open the Pyramid Stage today, as is made clear early on in the set, when Fafa Ruffino issues a call to all the women in the audience. “You’ve got the power to change your life. You are strong, you are powerful … you are your own rock. It’s time to stand for your rights,” she tells the crowd. “We have been taught that we are roses, that we need protection. We don’t need anyone — you don’t need anyone’s protection!” Ruffino’s words have an emphatic, galvanising quality to them; standing, statuesque, in a flowing white jumpsuit, it’s almost as if she’s some kind of angel, here to bestow words of strength upon the audience.
Although they place an emphasis on the politics behind their songs, Les Amazones d’Afrique are here to have a good time, too: watching them dance together, cheer each other on, and compliment each other’s outfits is a blast, a little like watching four friends gas each other up before a night out. Shaking the early crowd from their bleary-eyed stupor, it’s a remarkable mix of sweetness, seriousness and technical skill.
Iraqi PM heads to Saudi Arabia, Iran for new dialogue (www.washingtonpost.com, 25 Jun 2022). Iraq’s caretaker prime minister, Mustafa al-Kadhimi, has arrived in Saudi Arabia on an official visit, his office says.
Russia fires missiles across Ukraine, cements gains in east (www.washingtonpost.com, 25 Jun 2022). Russian forces are seeking to swallow up the last remaining Ukrainian stronghold in the eastern Luhansk region while pressing their momentum following the withdrawal of Ukrainian troops from the charred ruins of Sievierodonetsk.
Officer running for state Senate drops out after punching opponent (www.washingtonpost.com, 25 Jun 2022). Jeann Lugo is under criminal investigation and has dropped out of the race for the Rhode Island state Senate after he allegedly punched Jennifer Rourke at an abortion protest in Providence.
WHO panel: Monkeypox not a global emergency ‘at this stage’ (www.washingtonpost.com, 25 Jun 2022). The World Health Organization said the escalating monkeypox outbreak in more than 50 countries should be closely monitored but does not warrant being declared a global health emergency.
Congo bishops say pope’s Canada trip a good sign for future (www.washingtonpost.com, 25 Jun 2022). Congo’s Catholic bishops say Pope Francis’s decision to go ahead with a trip to Canada was an “encouraging sign” that his knee treatment was working.
Crowd of more than 500 enter border control area after cutting fence in attempt to cross from Morocco
The death toll from the mass attempt to cross from Morocco into Spain’s enclave of Melilla has risen to 23, according to Moroccan state TV.
About 2,000 people approached Melilla at dawn on Friday and more than 500 managed to enter a border control area after cutting a fence with shears, the Spanish government’s local delegation said in a statement.
Trudeau: US abortion ruling could mean loss of other rights (www.washingtonpost.com, 25 Jun 2022). Canadian Prime Minister Justin Trudeau says that the U.S. Supreme Court decision to overturn a constitutional right to abortion could lead to the loss of other rights.
U.S. abortion decision draws cheers, horror abroad (www.washingtonpost.com, 25 Jun 2022). Others responded in support of the move. The Vatican said that the decision would challenge “the whole world.”
Guatemala court blocks anti-corruption agreement (www.washingtonpost.com, 25 Jun 2022). A Guatemalan court has tossed out an agreement that made it easier to prosecute bribery involving the Brazilian construction giant Odebrecht.
Ukraine accuses Russia of launching missiles from Belarusian airspace (www.washingtonpost.com, 25 Jun 2022). Ukrainian military intelligence said 12 missiles were fired from Russian Tu-22M3 bombers flying over Belarus, marking the first time that Belarusian airspace has been used for such an attack.
After two years away, Glastonbury is back back BACK and this weekend has seen the World’s Greatest Music Festival rightly splashed all over the BBC’s music stations. 6 Music moved from Broadcasting House into a backstage tent from Wednesday to Sunday, while Radio 2 devoted its Saturday night to the wonders of Paul McCartney. All entirely correct and the BBC does this big event stuff absolutely brilliantly. But June is also Pride month and I’m always surprised by how little this is acknowledged by BBC audio. You hope it’s because LGBTQ+ stories are no longer seen as “other”, though there’s still an argument for some celebration and analysis. Anyhow, the 2022 London Pride will be on 2 July. Damian Barr will be hosting Archive on 4: Fifty Years of Pride, on BBC Radio 4 that day (and for TV heads, there’s a three-part BBC Two documentary telling the British Aids story starting on Monday).
Here in the United States, of the 32 percent of the trash that we try to recycle, about 80 to 95 percent actually gets recycled, as Jason Calaiaro of AMP Robotics points out in “AI Takes a Dumpster Dive.” The technology that Calaiaro’s company is developing could move us closer to 100 percent. But it would have no effect on the two-thirds of the waste stream that never makes it to recyclers.
Certainly, the marginal gains realized by AI and robotics will help the bottom lines of recycling companies, making it profitable for them to recover more useful materials from waste. But to make a bigger difference, we need to address the problem at the beginning of the process: Manufacturers and packaging companies must shift to more sustainable designs that use less material or more recyclable ones.
According to the Joint Research Centre of the European Commission, more than “80 percent of all product-related environmental impacts are determined during the design phase of a product.” One company that applies AI at the start of the design process is Digimind GmbH based in Berlin. As CEO Katharina Eissing told Packaging Europe last year, Digimind’s AI-aided platform lets package designers quickly assess the outcome of changes they make to designs. In one case, Digimind reduced the weight of a company’s 1.5-liter plastic bottles by 13.7 percent, a seemingly small improvement that becomes more impressive when you consider that the company produces 1 billion of these bottles every year.
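A back-of-the-envelope calculation shows why a 13.7 percent reduction matters at that volume. The percentage and the billion-bottle annual volume come from the article; the roughly 30-gram base weight of a 1.5-liter PET bottle is our assumed typical value, not a figure from Digimind:

```python
# Annual PET saved by the 13.7% lighter bottle design.
base_weight_g = 30.0            # assumed typical 1.5 L PET bottle weight
bottles_per_year = 1_000_000_000  # production volume cited in the article
saving_fraction = 0.137         # 13.7% weight reduction (from the article)

saved_tonnes = base_weight_g * saving_fraction * bottles_per_year / 1e6
print(f"~{saved_tonnes:,.0f} tonnes of PET saved per year")
```

Under that assumed base weight, the "seemingly small" redesign works out to roughly 4,000 tonnes of plastic avoided every year from a single product line.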
That’s still just a drop in the polyethylene terephthalate bucket: the world produced an estimated 583 billion PET bottles last year, according to Statista. To truly address our global garbage problem, our consumption patterns must change: canteens instead of single-use plastic bottles, compostable paper boxes instead of plastic clamshell containers, reusable shopping bags instead of “disposable” plastic ones. And engineers involved in product design need to develop packaging free of PET, polystyrene, and polycarbonate, which break down into tiny particles called microplastics that researchers are now finding in human blood and feces.
As much as we may hope that AI can solve our problems for us, that’s wishful thinking. Human ingenuity got us into this mess and humans will have to regulate, legislate, and otherwise incentivize the private sector to get us out of it.
The Chinese tech giant is taking surveillance capitalism to a new level. It’s almost enough to make you feel sorry for Zuckerberg
Question: what do men and Excel have in common?
Answer: they’re always automatically turning things into dates when they’re not.
To younger people – that is, anyone under the age of 20 – Microsoft’s spreadsheet program, a tool as essential to accountants as saws are to carpenters, is the contemporary equivalent of mom jeans, handwritten thank-you notes and cravats: stuff that oldies care about. How come, then, that the hashtag #excel has had 3.4bn views on a certain social media platform and that one Excel expert on that platform has 2.7 million followers and 9.7m likes for their tips on using Excel?
The answer is that the platform is TikTok, well known by now as the short-form video-hosting service owned by Chinese company ByteDance, on which you find an endless stream of short-duration (15 seconds) videos, in genres ranging from pranks, stunts, tricks, jokes, dance and entertainment to what one might call “edutainment” (such as advice on how to do stuff with Excel). Over the last couple of years it’s been taking over the social media world, and all the other big platforms – and especially Facebook – seem hypnotised by it, much as rabbits are by the headlights of an oncoming lorry.
As they host the world’s media in Seattle, Boeing’s leadership team must restore its reputation and its balance sheet amid an uncertain future for the industry
In a vast shed near Seattle, Boeing is ramping up production of its bestselling plane, the 737 Max. Rows of trolleys marked with team names such as “Mario Bros” and “Wildcat” wait for technicians to complete a daily dance of tools and parts. Getting the choreography right pays: production stoppages at the Renton factory can filter through to US GDP figures.
Never has the value of smooth operation been more apparent to the jetmaker than in the past three years. The factory lines were stopped for more than a year following two fatal crashes of the 737 Max. In 2018 and 2019, a total of 346 people died when hardware malfunctions and badly designed software caused the planes to override pilots and plunge from the sky.
How the Senate defied 26 years of inaction to tackle gun violence (www.washingtonpost.com, 25 Jun 2022). The legislation represents a crash effort between four wildly different lawmakers thrown together by circumstance and pushed to action by unspeakable tragedies.
Florida emerges as key battleground in state-by-state abortion fight (www.washingtonpost.com, 25 Jun 2022). Gov. Ron DeSantis’s efforts to manage his statewide and national political fortunes will be complicated if Florida becomes an epicenter of the abortion battle.
Demands include end to fossil fuels, preservation of biodiversity and greater social justice
About 3,500 protesters have gathered in Munich as the G7 group of leading economic powers prepare to hold their annual gathering in the Bavarian Alps in Germany, which holds the rotating presidency this year.
Police said earlier that they were expecting a crowd of about 20,000, but initially fewer people showed up for the main protest, which started at midday on Saturday, the German news agency dpa reported.
Biden signs gun-control legislation into law (www.washingtonpost.com, 25 Jun 2022). President Biden signed a gun-control act that is the most significant law of its kind in the last three decades.
The Post-Roe Privacy Nightmare Has Arrived (www.wired.com, 25 Jun 2022). Plus: Microsoft details Russia’s Ukraine hacking campaign, Meta’s election integrity efforts dwindle, and more.
Juul Survives a Blow From the FDA—for Now (www.wired.com, 25 Jun 2022). Plus: Instagram cracks down on age verification, Microsoft says it will stop using AI to track emotions, and Twitter wants to be a blog.
Supreme Court steps into a void left by congressional dysfunction (www.washingtonpost.com, 25 Jun 2022). Decades of congressional gridlock on everything from gun violence to immigration to abortion rights have left a vacuum that a newly empowered right-wing bloc of Supreme Court justices has seized.
The abortions we didn’t have (www.washingtonpost.com, 25 Jun 2022). The right to choose abortion is important. Even when women don’t choose it.
How to Move Your WhatsApp Chats Across Devices and Apps (www.wired.com, 25 Jun 2022). It’s never been easier to switch between iPhone and Android—and to get your messages out of the Meta ecosystem entirely.
Abortion will soon be banned in 13 states. Here’s which could be next (www.washingtonpost.com, 25 Jun 2022). Thirteen states have “trigger bans” to criminalize abortion now that the Supreme Court has overturned Roe v. Wade. Several other states are likely to enact similar laws.
As election season nears, Kenyans brace for unrest and hope for peace (www.washingtonpost.com, 25 Jun 2022). Kenya’s presidential election is six weeks away, but the campaign is in full swing, and experts are already warning the August vote could lead to instability.
Court temporarily halts FDA ban on Juul e-cigarettes (www.washingtonpost.com, 25 Jun 2022). Juul Labs can continue to sell e-cigarettes, for now, after a federal appeals court issued a temporary stay on an FDA ban.
In Africa, Eastern Europe battles Russian narrative on Ukraine (www.washingtonpost.com, 25 Jun 2022). Russia has convinced some African nations that sanctions are causing the global food crisis. Eastern European countries are trying to change that narrative.
Biden, other critics fear Thomas’s ‘extreme’ position on contraception (www.washingtonpost.com, 24 Jun 2022). “In future cases, we should reconsider all of this Court’s substantive due process precedents, including Griswold, Lawrence, and Obergefell,” Thomas wrote, also referring to the rights to same-sex relationships and marriage equality, respectively.
Biden says restoring abortion rights is up to voters (www.washingtonpost.com, 24 Jun 2022). The decision in Dobbs v. Jackson Women’s Health was the most anticipated of the court’s term, with political tension surrounding the fight over abortion erupting in May with the leak of a draft opinion indicating a majority of justices intended to end the long-standing precedent.
Alito’s opinion completely elides the significance of the 14th Amendment, which was explicitly designed to address the particular horrors of slavery, including the right of individuals to determine whether, with whom, and when to form a family.
Diplomats urge action as global food crisis deepens Fri, 24 Jun 2022 17:37:57 EDT Germany hosted officials in Berlin to discuss ways to blunt the impacts of the global food crisis, which has left 200 million acutely food insecure. Match ID: 95 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
Should my child get a coronavirus vaccine? Is it safe? Here’s what you should know. Fri, 24 Jun 2022 15:51:51 EDT The Centers for Disease Control and Prevention and the American Academy of Pediatrics recommend that children get immunized to prevent covid-19, which has killed more than 1,000 children in the United States. Match ID: 99 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
Boris Johnson’s Tories suffer fresh electoral defeat; party chair quits Fri, 24 Jun 2022 13:13:03 EDT Chairman Oliver Dowden's letter came hours after Conservatives lost in areas where the defeats will shake Tories and renew questions about Johnson’s leadership. Match ID: 105 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
As Biden heads to Europe, the mood on Ukraine is grimmer Fri, 24 Jun 2022 12:32:07 EDT When Western allies met earlier this year, their tone on Ukraine was resolute and optimistic. This time, the future is uncertain and the mood far more somber. Match ID: 107 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
Funeral held in Pernambuco of Indigenous expert who was killed in Amazon region with journalist Dom Phillips
The murdered Indigenous advocate Bruno Pereira has been buried in his home state of Pernambuco in Brazil after a small ceremony attended by family members and local tribes.
Dozens of Indigenous people from the Xukuru tribe paraded around his coffin chanting farewell rituals to the beat of their percussion instruments on Friday.
Continue reading... Match ID: 109 Score: 10.00 source: www.theguardian.com age: 1 day qualifiers: 10.00 amazon
Tom Graham, Washington Post copy editor, dies at 71 Fri, 24 Jun 2022 11:45:18 EDT Before joining The Post in 1998, he spent almost 25 years overseeing editorial operations of community newspapers in Howard and Baltimore counties. Match ID: 110 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
House Speaker Nancy Pelosi, a California Democrat, blasted the Supreme Court's ruling on Friday that overturns Roe v. Wade, the landmark 1973 decision that established a constitutional right to an abortion. "There's no point in saying 'good morning,' because it certainly is not one. This morning, the radical Supreme Court is eviscerating Americans' rights and endangering their health and safety," she told reporters during her weekly press conference, which had been scheduled to take place before the ruling came out. "It's a slap in the face to women about using their own judgment to make their own decisions about their reproductive freedom." Pelosi also issued a news release in which she decried the high court's decision.
Market Pulse Stories are rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.
Match ID: 111 Score: 10.00 source: www.marketwatch.com age: 1 day qualifiers: 10.00 california
In many countries, abortion is protected by law, not court decision Fri, 24 Jun 2022 10:48:41 EDT In some countries, rulings similar to Roe opened the door to legalizing abortion. In others, governments passed key legislation expanding access. Match ID: 113 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
2022 Travel photo contest: official rules Fri, 24 Jun 2022 10:32:34 EDT Read the official rules of the Washington Post Travel section's 23rd annual photo contest. Match ID: 115 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
How to make a better ice cream sundae, with recipes and tips Fri, 24 Jun 2022 10:00:04 EDT Tips and recipes for ice cream, sauce or toppings to help you to build a better ice cream sundae, for yourself or a crowd. Match ID: 117 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
Art is at the heart of Marfa, Tex. Fri, 24 Jun 2022 10:00:04 EDT Downtown Marfa, a quirky art destination in southwest Texas, was recently added to the National Register of Historic Places. Match ID: 118 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
This vanilla ice cream is creamy, dreamy and easy to adapt Fri, 24 Jun 2022 09:55:25 EDT For this creamy vanilla ice cream recipe, we added goat milk to heavy cream and just a touch of salt to get a luscious texture with a sweetly balanced flavor. Match ID: 119 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
Shopify Goes Soul-Searching Fri, 24 Jun 2022 13:00:00 +0000 Plus: The early days of e-commerce, the question of sentience, and frightening temperatures. Match ID: 120 Score: 10.00 source: www.wired.com age: 1 day qualifiers: 10.00 amazon
5.9-magnitude earthquake leaves children buried under rubble and villages destroyed in already impoverished country
Sitting on a hill overlooking the remote Gayan district, Abdullah Abed pointed towards several freshly dug graves. “They screamed for help,” he said of his son Farhadullah, 10, and daughter Basrina, 18. “We tried to save them but by the time we pulled them out of the rubble, their voices had gone quiet.”
Today they lie buried beside 10 other family members lost in the 5.9-magnitude earthquake that struck eastern Afghanistan in the early hours of Wednesday. An estimated 250 people have died in the hard-hit district, many of them now buried next to Abed’s children, among the more than 1,150 people feared dead and 1,500 injured across Afghanistan’s eastern Paktika and Khost provinces. It was Afghanistan’s deadliest quake in two decades.
Continue reading... Match ID: 121 Score: 10.00 source: www.theguardian.com age: 1 day qualifiers: 10.00 development
Countries across the continent have experienced all-time highs, raising fears of wildfires
A hot topic over the past couple of weeks has been the heatwave that has been scorching large parts of Europe. This week has been no different, with more than 200 monthly temperature records broken across France, and countries including Switzerland, Austria, Germany and Spain recording all-time highs. For example, Cazaux and Bordeaux experienced all-time monthly temperature records of 41.9C and 40.5C respectively.
One consequence of this prolonged heat is the drying out of soil and vegetation, permitting the development of wildfires across Spain, with tens of thousands of acres of land likely to be affected. According to scientists at the University of Lleida in Spain, climate change will extend the duration of fire seasons across many regions of Europe.
Continue reading... Match ID: 122 Score: 10.00 source: www.theguardian.com age: 1 day qualifiers: 10.00 development
Demonstrators – mostly Indigenous people from the Javari Valley – held orange and yellow banners, which read: “Protection for our Amazon forest”, “Amazon resist! Who ordered the killing?” and “Bolsonaro out!”, amid growing fears that the criminal investigation into the murders was slowing.
Continue reading... Match ID: 127 Score: 10.00 source: www.theguardian.com age: 2 days qualifiers: 10.00 amazon
One year ago, we wrote about some “high-tech” warehouse robots from Amazon that appeared to be anything but. It was confusing, honestly, to see not just hardware that looked dated but concepts about how robots should work in warehouses that seemed dated as well. Obviously we’d expected a company like Amazon to be at the forefront of developing robotic technology to make their fulfillment centers safer and more efficient. So it’s a bit of a relief that Amazon has just announced several new robotics projects that rely on sophisticated autonomy to do useful, valuable warehouse tasks.
The highlight of the announcement is Proteus, which is like one of Amazon’s Kiva shelf-transporting robots that’s smart enough (and safe enough) to transition from a highly structured environment to a moderately structured environment, an enormous challenge for any mobile robot.
Proteus is our first fully autonomous mobile robot. Historically, it’s been difficult to safely incorporate robotics in the same physical space as people. We believe Proteus will change that while remaining smart, safe, and collaborative.
Proteus autonomously moves through our facilities using advanced safety, perception, and navigation technology developed by Amazon. The robot was built to be automatically directed to perform its work and move around employees—meaning it has no need to be confined to restricted areas. It can operate in a manner that augments simple, safe interaction between technology and people—opening up a broader range of possible uses to help our employees—such as the lifting and movement of GoCarts, the nonautomated, wheeled transports used to move packages through our facilities.
I assume that moving these GoCarts around is a significant task within Amazon’s warehouse, because last year, one of the robots that Amazon introduced (and that we were most skeptical of) was designed to do exactly that. It was called Scooter, and it was this massive mobile system that required manual loading and could move only a few carts to the same place at the same time, which seemed like a super weird approach for Amazon, as I explained at the time:
We know Amazon already understands that a great way of moving carts around is by using much smaller robots that can zip underneath a cart, lift it up, and carry it around with them. Obviously, the Kiva drive units only operate in highly structured environments, but other AMR companies are making this concept work on the warehouse floor just fine.
From what I can make out from the limited information available, Proteus shows that Amazon is not, in fact, behind the curve with autonomous mobile robots (AMRs) and has actually been doing what makes sense all along, while for some reason occasionally showing us videos of other robots like Scooter and Bert in order to (I guess?) keep their actually useful platforms secret.
Anyway, Proteus looks to combine one of Amazon’s newer Kiva mobile bases with the sensing and intelligence that allow AMRs to operate in semistructured warehouse environments alongside moderately trained humans. Its autonomy seems to be enabled by a combination of stereo-vision sensors and several planar lidars at the front and sides, a good combination for both safety and effective indoor localization in environments with a bunch of reliably static features.
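Amazon hasn’t published how Proteus actually localizes, but matching lidar observations against reliably static map features typically reduces to a rigid-alignment problem. Here’s a minimal, purely illustrative sketch (the landmark positions and robot pose are invented) using the standard 2D Kabsch/Procrustes solution:

```python
import numpy as np

# Illustrative only: Amazon hasn't disclosed Proteus's localization stack.
# Scan matching against static features boils down to finding the rotation R
# and translation t that best map robot-frame observations onto map landmarks.
def estimate_pose(observed: np.ndarray, landmarks: np.ndarray):
    """Least-squares rigid transform via SVD (Kabsch): landmarks ~ R @ p + t."""
    p_bar, q_bar = observed.mean(axis=0), landmarks.mean(axis=0)
    H = (observed - p_bar).T @ (landmarks - q_bar)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

# Toy map: three static landmarks; true robot pose is (2, 1) at heading 30 deg.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, 1.0])
landmarks = np.array([[5.0, 5.0], [1.0, 4.0], [6.0, 0.0]])
observed = (landmarks - t_true) @ R_true             # landmarks in robot frame
R_est, t_est = estimate_pose(observed, landmarks)
print(np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])), t_est)  # ~30.0, [2. 1.]
```

Real scan matchers (ICP and friends) must first establish which observed point corresponds to which map feature; this sketch assumes correspondences are known.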
I’m particularly impressed with the emphasis on human-robot interaction with Proteus, which often seems to be a secondary concern for robots designed for work in industry. The “eyes” are expressive in a minimalist sort of way, and while the front of the robot is very functional in appearance, the arrangement of the sensors and light bar also manages to give it a sort of endearingly serious face. That green light that the robot projects in front of itself also seems to be designed for human interaction—I haven’t seen any sensors that use light like that, but it seems like an effective way of letting a human know that the robot is active and moving. Overall, I think it’s cute, although very much not in a “let’s try to make this robot look cute” way, which is good.
What we’re not seeing with Proteus is all of the software infrastructure required to make it work effectively. Don’t get me wrong—making this hardware cost effective and reliable enough that Amazon can scale to however many robots it wants to scale to (likely a frighteningly large number) is a huge achievement. But there’s also all that fleet-management stuff that gets much more complicated once you have robots autonomously moving things around an active warehouse full of fragile humans who need to be both collaborated with and avoided.
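To make the fleet-management point concrete, here is a deliberately tiny sketch of one subproblem: assigning pending GoCart moves to idle robots to minimize total travel. All robot and cart names and positions are hypothetical; a real system would also handle traffic, replanning, and safety zones around people.

```python
import math
from itertools import permutations

# Hypothetical fleet-management subproblem: which robot fetches which GoCart?
# Brute force is fine at this scale; real fleets use scalable assignment
# solvers (e.g., the Hungarian algorithm) plus live traffic constraints.
def assign_tasks(robots: dict, carts: dict) -> dict:
    """Return the robot -> cart assignment with minimum total travel distance."""
    names, cart_ids = list(robots), list(carts)
    best, best_cost = None, math.inf
    for perm in permutations(cart_ids, len(names)):
        cost = sum(math.dist(robots[r], carts[c]) for r, c in zip(names, perm))
        if cost < best_cost:
            best, best_cost = dict(zip(names, perm)), cost
    return best

robots = {"proteus-1": (0.0, 0.0), "proteus-2": (10.0, 0.0)}
carts = {"gocart-a": (9.0, 1.0), "gocart-b": (1.0, 2.0), "gocart-c": (5.0, 9.0)}
print(assign_tasks(robots, carts))
# {'proteus-1': 'gocart-b', 'proteus-2': 'gocart-a'}
```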
Proteus is certainly the star of the show here, but Amazon did also introduce a couple of new robotic systems. One is Cardinal:
The movement of heavy packages, as well as the reduction of twisting and turning motions by employees, are areas we continually look to automate to help reduce risk of injury. Enter Cardinal, the robotic work cell that uses advanced artificial intelligence (AI) and computer vision to nimbly and quickly select one package out of a pile of packages, lift it, read the label, and precisely place it in a GoCart to send the package on the next step of its journey. Cardinal reduces the risk of employee injuries by handling tasks that require lifting and turning of large or heavy packages or complicated packing in a confined space.
The video of Cardinal looks to be a rendering, so I'm not going to spend too much time on it.
There’s also a new system for transferring pods from containers to adorable little container-hauling robots, designed to minimize the number of times that humans have to reach up or down or sideways:
It’s amazing to look at this kind of thing and realize the amount of effort that Amazon is putting in to maximize the efficiency of absolutely everything surrounding the (so far) very hard-to-replace humans in their fulfillment centers. There’s still nothing that can do a better job than our combination of eyes, brains, and hands when it comes to rapidly and reliably picking random things out of things and putting them into other things, but the sooner Amazon can solve that problem, the sooner the humans that those eyes and brains and hands belong to will be able to direct their attention to more creative and fulfilling tasks. Or that’s the idea, anyway.
Amazon says it expects Proteus to start off moving carts around in specific areas, with the hope that it’ll eventually automate cart movements in its warehouses as much as possible. And Cardinal is still in prototype form, but Amazon hopes that it’ll be deployed in fulfillment centers by next year.
Match ID: 130 Score: 10.00 source: spectrum.ieee.org age: 2 days qualifiers: 10.00 amazon
Disaster has killed more than 1,000 people and officials say toll could rise
Organised rescue efforts were struggling to reach the site of an earthquake in Afghanistan that killed more than 1,000 people, as survivors dig through the rubble by hand to find those still missing.
In Paktika province’s Gayan district, villagers stood atop mud bricks that were once a home. Others carefully walked through dirt alleyways, gripping on to damaged walls with exposed timber beams to make their way.
Continue reading... Match ID: 131 Score: 10.00 source: www.theguardian.com age: 2 days qualifiers: 10.00 development
Afghanistan’s Taliban-led government has appealed for more international aid as it struggles to cope with the devastating earthquake in a mountainous eastern region that has left more than 1,000 people dead and many more injured.
With the war-ravaged country already stricken by an economic crisis, the hardline Islamist leadership said sanctions imposed by western countries after the withdrawal of US-led coalition forces last year meant it was handicapped in its ability to deal with Wednesday’s disaster in Khost and Paktika provinces.
Continue reading... Match ID: 133 Score: 10.00 source: www.theguardian.com age: 2 days qualifiers: 10.00 development
Lawyers say government is attempting to intimidate pastoralists as thousands flee to Kenya amid escalating row over evictions
Twenty Maasai pastoralists from northern Tanzania have been charged with the murder of a police officer during protests over government plans to use their ancestral land for conservation and a luxury hunting reserve.
The officer was allegedly shot by an arrow on 10 June while attempting to demarcate land in Loliondo, which borders Serengeti national park.
Continue reading... Match ID: 134 Score: 10.00 source: www.theguardian.com age: 2 days qualifiers: 10.00 development
ISS Daily Summary Report – 6/22/2022 Wed, 22 Jun 2022 16:00:05 +0000 European Robotic Arm (ERA) Operations: Today, by successfully grappling to base point 3, the ERA Mission 2 was completed. ERA Mission 2 was intended to be completed during RS Extravehicular Activity (EVA) # 53 which occurred in April; however, ERA was unable to complete the grapple to base point 3. In this position, ERA is … Match ID: 136 Score: 8.57 source: blogs.nasa.gov age: 3 days qualifiers: 8.57 apple
This, of course, is not to Apple’s benefit, and gives Tim Cook plenty of reason to keep the term out of his mouth. But it’s more than a marketing ploy. The vision of the metaverse put forward by Meta is fundamentally opposed to what Apple might embrace should it bring an AR/VR (augmented reality/virtual reality) headset to market.
“There was a lot of anxiety in the industry about what Apple was going to announce. Now that they didn’t, there’s almost a sense of relief.”
—Anshel Sag, Moor Insights & Strategy
“Meta has shipped 15 million [Oculus Quest] units, but they’re doing it in a way that costs them,” says Sag. “They’re probably shipping at a loss on hardware.”
Zuckerberg’s vision of an “embodied Internet” is focused on social platforms and services. The Quest headset is impressive and, if projects like the Cambria VR headset come to market, will soon see a generational leap. But Meta’s real goal is the establishment of a new social platform—not developing and selling AR/VR headsets.
For Apple, innovative hardware is the entire point. Whether that hardware is used to access Meta’s metaverse, another company’s platform, or isolated experiences crafted by small developers, is almost irrelevant.
Apple’s alternative to Meta’s embodied Internet will likely be a broad offering spanning the iPhone, iPad, Mac, and a future AR/VR headset. The metaverse may simply be an app (or apps) competing with other AR/VR experiences across multiple devices.
“I wouldn’t read too much into Apple not using the word metaverse,” says George Jijiashvili, principal analyst at Omdia. “Rather, focus on what it’s actually doing to enable experiences which could support virtual social interactions in the future.”
Competitors breathe a sigh of relief
Aside from the new RoomPlan Swift API, a technology that developers can use to quickly create 3D floor plans of real-world spaces, Apple had no all-new AR/VR announcements. RealityOS, the operating system rumored to power Apple’s upcoming headset, wasn’t even teased.
This seems a vote of no confidence in all things metaverse and, perhaps, the entire AR/VR space. However, Jijiashvili warns against such pessimism. “The reality is that Apple has many strengths in this space, which it continues to gradually improve,” says Jijiashvili.
He points out Apple has acquired over eight AR/VR startups since 2015. ARKit, Apple’s augmented-reality development platform for iOS devices, continues to see interest from developers, including big names like Snapchat and even Instagram, which is owned by Meta.
However, Apple’s lack of news gives the rest of the industry the chance to prepare for its seemingly inevitable push into the space. Though consumer headsets are dominated by Meta, which produces nearly 80 percent of all headsets sold, the industry is rife with midsize companies like HTC, Valve, DPVR, Magic Leap, Pico, Lumus, Vuzix, Pimax, and Varjo—to name just a few. Apple’s arrival in the space could threaten these innovators.
“There was a lot of anxiety in the industry about what Apple was going to announce,” says Sag. “Now that they didn’t, there’s almost a sense of relief.”
AR headsets face headwinds
Apple’s decision not to show its headset, which is believed to support both AR and VR, strongly hints the company isn’t satisfied with its progress.
It’s not alone. Meta reportedly delayed an upcoming AR headset, known as Project Nazare. The Magic Leap 2 and Microsoft HoloLens 2 seem trapped in the niche world of high-end enterprise solutions despite years of work by both companies.
AR, it seems, is hard to get right.
This leaves no serious alternative to Meta’s Quest 2 on the horizon. Its successor, Project Cambria, is rumored to target a 2022 release and may have little competition if launched this year. “The focus may have shifted more towards VR than people really expected,” says Sag.
It was a great idea for its time—a network of NASA communications satellites high in geostationary orbit, providing nearly continuous radio contact between controllers on the ground and some of the agency’s highest-profile missions: the space shuttles, the International Space Station, the Hubble Space Telescope, and dozens of others.
The satellites were called TDRS—short for Tracking and Data Relay Satellite—and the first was launched in 1983 on the maiden voyage of the space shuttle Challenger. Twelve more would follow, quietly providing a backbone for NASA’s orbital operations. But they’ve gotten old, they’re expensive, and in the 40 years since they began, they’ve been outpaced by commercial satellite networks.
So what comes next? That’s the 278-million-dollar question—but, importantly, it’s not a multibillion-dollar question.
“Now it’ll be just plug and play. They can concentrate on the mission, and they don’t have to worry about comms, because we provide that for them.” —Craig Miller, Viasat
NASA, following its mantra to get out of the business of routine space operations, has now awarded US $278.5 million in contracts to six companies: Amazon’s Project Kuiper, Inmarsat Government, SES Government Solutions, SpaceX, Telesat, and Viasat. The agency is asking them to offer services that are reliable, adaptable for all sorts of missions, easy for NASA to use, and—ideally—orders of magnitude less expensive than TDRS.
“It’s an ambitious wish list,” says Eli Naffah, communications services project manager at NASA’s Glenn Research Center, in Cleveland. “We’re looking to have industry tell us, based on their capabilities and their business interests, what they would like to provide to us as a service that they would provide to others broadly.”
Inmarsat now operates a number of geostationary satellites in their GX fleet. The projected GX7 satellite [left] is expected to launch in 2023. Inmarsat Government
Satellite communication is one area that has taken off as a business proposition, independent of NASA’s space efforts. Internet and television transmission, GPS, phone service—all of these have become giant enterprises, ubiquitous in people’s lives. Economy of scale and competition have brought prices down dramatically. (That’s very different from, say, space tourism, which attracts a lot of attention but for now is still something that only the very wealthy can afford.)
NASA benefits, in the case of communications, from being a relatively small player, especially if it can get out from under the costs of running something like the TDRS system. The commercial satellite companies take over those costs—which, they say, is fine, since they were spending the money anyway.
“We love having customers like NASA,” says Craig Miller, president for government systems at Viasat. “They’re a joy to work with, their mission is in alignment with a lot of our core values, but we make billions of dollars a year selling Internet to other sources.”
Each of the six companies under the new NASA contract takes a different approach. Inmarsat, SES, and Viasat, for instance, would use large relay satellites, like TDRS, each seeming to hover over a fixed spot on Earth’s equator because, at an altitude of 35,786 kilometers, one orbit takes one sidereal day (just under 24 hours), matching Earth’s rotation. Amazon and SpaceX, by contrast, would use swarms of smaller satellites in low Earth orbit, just a few hundred kilometers in altitude. (SpaceX, at last count, had launched more than 2,200 of its Starlink satellites.) SES and Telesat would offer two-for-one packages, with service both from high and lower orbits. As for radio frequencies, the companies might use C band, Ka band, L band, optical—whatever their existing clients have needed. And so on.
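The 35,786-kilometer figure falls straight out of Kepler’s third law: a circular orbit whose period matches Earth’s rotation (one sidereal day, about 86,164 seconds) sits at exactly that altitude. A quick check:

```python
import math

# Derive the geostationary altitude from Kepler's third law: T = 2*pi*sqrt(a^3/mu).
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378137e6        # Earth's equatorial radius, m
SIDEREAL_DAY = 86164.0905   # s, one rotation of Earth relative to the stars

def orbit_radius(period_s: float) -> float:
    """Semi-major axis (m) of a circular orbit with the given period."""
    return (MU_EARTH * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

altitude_km = (orbit_radius(SIDEREAL_DAY) - R_EARTH) / 1000
print(f"Geostationary altitude: {altitude_km:,.0f} km")  # ~35,786 km
```

Plugging in a 90-minute period instead gives the few-hundred-kilometer altitudes typical of low-Earth-orbit constellations like Starlink.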
Sixty SpaceX Starlink satellites wait for deployment from their launch rocket in low Earth orbit, in this photograph from 2019. SpaceX
It may sound like an alphabet soup of ways to solve one basic need—being in contact with its satellites—but engineers say that’s a minor trade-off for NASA if it can piggyback on others’ communications networks. “This allows NASA and our other government users to achieve their missions without the upfront capital expenditure and the full life-cycle cost” of running the TDRS system, said Britt Lewis, a senior vice president of Inmarsat Government, in an email to IEEE Spectrum.
One major advantage to the space agency would be the sheer volume of service available to it. In years past, the TDRS system could handle only so many transmissions at a time; if a particular mission needed to send a large volume of data, it had to book time in advance.
“Now it’ll be just plug and play,” says Miller at Viasat. “They can concentrate on the mission, and they don’t have to worry about comms, because we provide that for them.”
NASA says it expects each company will complete technology development and in-space demonstrations by 2025, with the most successful starting to take over operations for the agency by 2030. There will probably be no single winner: “We’re not really looking to have any one particular company be able to provide all the services on our list,” says NASA’s Naffah.
NASA's TDRS-M communications satellite launched in 2017. NASA
The TDRS satellites have proved durable; TDRS-3, launched by the space shuttle Discovery in 1988, is still usable as a spare if newer satellites break down. NASA says it will probably continue to use the system into the 2030s, but it planned no more launches after the last one, TDRS-13 (a.k.a. TDRS-M), in 2017.
If everything works out, says Amazon in an email, “This model would allow organizations like NASA to rely on commercial operators for near-Earth communications while shifting their focus to more ambitious operations, like solving technical challenges for deep space exploration and science missions.”
At which point the sky's the limit. NASA focuses on the moon, Mars, and other exploration, while it buys routine services from the private sector.
“We can provide the same kind of broadband capabilities that you’re used to having on Earth,” says Viasat’s Miller. He smiles at this thought. “We can provide Netflix to the ISS.”
Match ID: 140 Score: 8.57 source: spectrum.ieee.org age: 43 days qualifiers: 3.57 trade, 2.14 musk, 1.43 development, 1.43 amazon
“Hardware is hard,” venture capitalist Marc Andreessen famously declared at a tech investors’ event in 2013. Explaining the longstanding preference for software startups among VCs, Andreessen said, “There are so many more things that can go wrong in a hardware company. There are so many more ways a hardware company can blow up in a nonrecoverable way.”
Even as Andreessen was speaking, however, the seeds were being sown for one of the biggest and most sustained infusions of cash into a hardware-based movement in the last decade. Since then, the design and construction of electric vertical-takeoff-and-landing (eVTOL) aircraft has been propelled by waves of funding from some of the biggest names in tech. And, surprisingly for such a large movement, the funding is mostly coming from sources outside of the traditional venture-capital community—rich investors and multinational corporations. The list includes Google cofounder Larry Page, autonomy pioneer Sebastian Thrun, entrepreneur Martine Rothblatt, LinkedIn cofounder Reid Hoffman, Zynga founder Mark Pincus, investor Adam Grosser, entrepreneur Marc Lore, and companies including Uber, Mercedes-Benz, Airbus, Boeing, Toyota, Hyundai, Honda, JetBlue, American Airlines, Virgin Atlantic, and many more.
Roughly 250 companies are working toward what they hope will be a revolution in urban transportation. Some, such as Wisk and Kittyhawk and Joby, are flying a small fleet of prototype aircraft; others have nothing more than a design concept. If the vision becomes reality, hundreds of eVTOLs will swarm over the skies of a big city during a typical rush hour, whisking small numbers of passengers at per-kilometer costs no greater than those of driving a car. This vision, which goes by the name urban air mobility or advanced air mobility, will require backers to overcome entire categories of obstacles, including certification, technology development, and the operational considerations of safely flying large numbers of aircraft in a small airspace.
Even tech development, considered the most straightforward of the challenges, has a way to go. Joby, one of the most advanced of the startups, provided a stark reminder of this fact when it was disclosed on 16 February that one of its unpiloted prototypes crashed during a test flight in a remote part of California. Few details were available, but reporting by FutureFlight suggested the aircraft was flying test routes at altitudes up to 1,200 feet and at speeds as high as 240 knots.
No one expects the urban air mobility market, if it does get off the ground, to ever be large enough to accommodate 250 manufacturers of eVTOLs, so a cottage industry has sprung up around handicapping the field. SMG Consulting (founded by
Sergio Cecutta, a former executive at Honeywell and Danaher) has been ranking eVTOL startups in its Advanced Air Mobility Reality Index since December 2020. Its latest index—from which our chart below has been adapted, with SMG’s kind permission—suggests that the top 10 startups have pulled in more than US $6 billion in funding; the next couple of hundred startups have combined funding of several hundred million dollars at most.
Cecutta is quick to point out that funding, though important, is not everything when it comes to ranking the eVTOL companies. How they will navigate the largely uncharted territory of certifying and manufacturing the novel fliers will also be critical. “These companies are all forecasting production in the hundreds, if not thousands” of units per year, he says.
“The aerospace industry is not used to producing in those kinds of numbers….The challenge is to be able to build at that rate, to have a supply chain that can supply you with the components you need to build at that rate. Aerospace is a team sport. There is no company that does 100 percent in-house.” Hardware
really is hard.
This article appears in the March 2022 print issue as “What’s Behind the Air-Taxi Craze”; on 24 Feb. 2022, the chart was updated with data kindly provided by Beta Technologies.
Are you looking for a new graphic design tool? Would you like to read a detailed review of Canva, one of the tools I love using? I'm also writing my first ebook in Canva and will publish it soon on my site, where you can download it for free. Let's start the review.
Canva is a free graphic design web application that allows you to create invitations, business cards, flyers, lesson plans, banners, and more using professionally designed templates. You can upload your own photos from your computer or from Google Drive, and add them to Canva's templates using a simple drag-and-drop interface. It's like having a basic version of Photoshop that doesn't require any graphic design knowledge to use. It's best suited to people who aren't graphic designers.
Who is Canva best suited for?
Canva is a great tool for small business owners, online entrepreneurs, and marketers who don't have much time and want to produce polished graphics quickly.
To create sophisticated graphics, a tool such as Photoshop is ideal. To use it, though, you'll need to learn its hundreds of features and get familiar with the software, and it's best to have a good background in design, too. You also need a high-end computer to run the latest version of Photoshop.
That's where Canva comes in: it lets you create graphics with a simple drag-and-drop interface. It's easier to use, and it's free. A paid version with more features is available for $12.95 per month.
Free vs Pro vs Enterprise Pricing plan
The product is available in three plans: Free, Pro ($12.99/month per user or $119.99/year for up to 5 people), and Enterprise ($30 per user per month, minimum 25 people).
Free Plan Features
250,000+ free templates
100+ design types (social media posts, presentations, letters, and more)
Pro Plan Features
Everything Free has, plus:
100+ million premium stock photos, videos, audio tracks, and graphics
610,000+ premium and free templates, with new designs added daily
Create a library of your brand or campaign's colors, logos, and fonts with up to 100 Brand Kits
Remove image backgrounds instantly with Background Remover
Resize designs infinitely with Magic Resize
Save designs as templates for your team to use
100GB of cloud storage
Schedule social media content to eight platforms
Enterprise Plan Features
Everything Pro has plus:
Establish your brand's visual identity with logos, colors and fonts across multiple Brand Kits
Control your team's access to apps, graphics, logos, colors and fonts with brand controls
Built-in workflows to get approval on your designs
Set which elements your team can edit and stay on brand with template locking
Log in with single-sign on (SSO) and have access to 24/7 Enterprise-level support.
How to Use Canva?
To get started on Canva, you will need to create an account using your email address or your Google, Facebook, or Apple credentials. You will then choose an account type: student, teacher, small business, large company, nonprofit, or personal. Based on your choice, templates will be recommended to you.
You can sign up for a free trial of Canva Pro, or you can start with the free version to get a sense of whether it’s the right graphic design tool for your needs.
When you sign up for an account, Canva will suggest different post types to choose from. Based on the type of account you set up, you'll see templates grouped into categories such as social media posts, documents, presentations, marketing, events, ads, launch your business, and build your online brand.
Start by choosing a template for your post or searching for something more specific. Search by social network name to see a list of post types on each network.
Next, choose a template. Hundreds are ready to go, with customizable photos, text, and other elements; you can pick one of these ready-made designs, search for a template matching your needs, or start from a blank template.
Canva has a lot to choose from, so start with a specific search. If you want to create a business card, for example, just search for it and you will see a lot of templates to choose from.
Inside the Canva designer, the Elements tab gives you access to lines and shapes, graphics, photos, videos, audio, charts, photo frames, and photo grids. The search box on the Elements tab lets you search everything on Canva.
To begin with, Canva has a large library of elements to choose from. To find them, be specific in your search query. You may also want to search in the following tabs to see various elements separately:
The Photos tab lets you search for and choose from millions of professional stock photos for your templates.
You can replace the photos in these templates to create a new look. This can also make the template more suited to your industry.
You can also find photos on other stock photography sites, such as Pexels and Pixabay, or simply upload your own.
When you choose an image, Canva’s photo editing features let you adjust the photo’s settings (brightness, contrast, saturation, etc.), crop, or animate it.
When you subscribe to Canva Pro, you get access to a number of premium features, including the Background Remover. This feature allows you to remove the background from any stock photo in Canva's library or from any image you upload.
The Text tab lets you add headings, normal text, and graphical text to your design.
When you click on text, you'll see options to adjust the font, font size, color, format, spacing, and text effects (like shadows).
Canva Pro subscribers can choose from a large library of fonts on the Brand Kit or the Styles tab. Enterprise-level controls ensure that visual content remains on-brand, no matter how many people are working on it.
Create an animated image or video by adding audio to capture users' attention in social news feeds.
If you want to use audio from another stock site or your own audio tracks, you can upload them in the Uploads tab or from the More option.
Want to create your own videos? Choose from thousands of stock video clips. You'll find videos that range up to 2 minutes in length.
You can upload your own videos as well as videos from other stock sites in the Uploads tab.
Once you have chosen a video, you can use the editing features in Canva to trim the video, flip it, and adjust its transparency.
On the Background tab, you’ll find free stock photos to serve as backgrounds on your designs. Change out the background on a template to give it a more personal touch.
The Styles tab lets you quickly change the look and feel of your template with just a click. And if you have a Canva Pro subscription, you can upload your brand’s custom colors and fonts to ensure designs stay on brand.
If you have a Canva Pro subscription, you’ll have a Logos tab. Here, you can upload variations of your brand logo to use throughout your designs.
With Canva, you can also create your own logos. Note that you cannot trademark a logo with stock content in it.
Publishing with Canva
With Canva, free users can download and share designs to multiple platforms including Instagram, Facebook, Twitter, LinkedIn, Pinterest, Slack and Tumblr.
Canva Pro subscribers can create multiple post formats from one design. For example, you can start by designing an Instagram post, and Canva's Magic Resizer can resize it for other networks, Stories, Reels, and other formats.
Canva Pro subscribers can also use Canva’s Content Planner to post content on eight different accounts on Instagram, Facebook, Twitter, LinkedIn, Pinterest, Slack, and Tumblr.
Canva Pro allows you to work with your team on visual content. Designs can be created inside Canva, and then sent to your team members for approval. Everyone can make comments, edits, revisions, and keep track via the version history.
When it comes to printing your designs, Canva has you covered. With an extensive selection of printing options, they can turn your designs into anything from banners and wall art to mugs and t-shirts.
Canva Print is perfect for any business seeking to make a lasting impression. Create inspiring designs people will want to wear, keep, and share. Hand out custom business cards that leave a lasting impression on customers' minds.
The Canva app is available on the Apple App Store and Google Play. It has earned a 4.9-out-of-5-star rating from more than 946,300 Apple users and a 4.5-out-of-5-star rating from more than 6,996,708 Google users.
In addition to mobile apps, you can use Canva’s integration with other Internet services to add images and text from sources like Google Maps, Emojis, photos from Google Drive and Dropbox, YouTube videos, Flickr photos, Bitmojis, and other popular visual content elements.
Canva Pros and Cons
Pros:
A user-friendly interface
A great tool for people who want to create professional graphics but don't have graphic design skills
Hundreds of templates, so you'll never have to start from scratch
A wide variety of templates to fit multiple uses
Branding kits to keep your team consistent with your brand colors and fonts
The ability to create visual content on the go
Royalty-free images, audio, and video without having to subscribe to another service
Cons:
Some professional templates are available for Pro users only
Advanced photo editing features, like blurring or erasing a specific area, are missing
Elements that fall outside of a design are tricky to retrieve
Some features (like Canva presentations) could use improvement
If you are a regular user of Adobe products, you might find Canva's features limited
No real vector support, which matters especially for logos
Expensive enterprise pricing
In general, Canva is an excellent tool for those who need simple images for projects. If you are a graphic designer with experience, you will find Canva's platform lacking in customization and advanced features, particularly vectors. But if you have little design experience, you will find Canva easier to use than advanced graphic design tools like Adobe Photoshop or Illustrator for most projects. If you have any queries, let me know in the comments section.
Every day, satellites circling overhead capture trillions of pixels of high-resolution imagery of the surface below. In the past, this kind of information was mostly reserved for specialists in government or the military. But these days, almost anyone can use it.
That’s because the cost of sending payloads, including imaging satellites, into orbit has dropped drastically. High-resolution satellite images, which used to cost tens of thousands of dollars, now can be had for the price of a cup of coffee.
What’s more, with the recent advances in artificial intelligence, companies can more easily extract the information they need from huge digital data sets, including ones composed of satellite images. Using such images to make business decisions on the fly might seem like science fiction, but it is already happening within some industries.
These underwater sand dunes adorn the seafloor between Andros Island and the Exuma islands in the Bahamas. The turquoise to the right reflects a shallow carbonate bank, while the dark blue to the left marks the edge of a local deep called Tongue of the Ocean. This image was captured in April 2020 using the Moderate Resolution Imaging Spectroradiometer on NASA’s Terra satellite.
Joshua Stevens/NASA Earth Observatory
Here’s a brief overview of how you, too, can access this kind of information and use it to your advantage. But before you’ll be able to do that effectively, you need to learn a little about how modern satellite imagery works.
The orbits of Earth-observation satellites generally fall into one of two categories: GEO and LEO. The former is shorthand for geosynchronous equatorial orbit. GEO satellites are positioned roughly 36,000 kilometers above the equator, where they circle in sync with Earth’s rotation. Viewed from the ground, these satellites appear to be stationary, in the sense that their bearing and elevation remain constant. That’s why GEO is said to be a geostationary orbit.
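That 36,000-kilometer figure follows from Kepler's third law: a satellite whose orbital period matches one sidereal rotation of Earth must sit at a specific altitude. Here's a quick back-of-the-envelope check in Python, using standard published constants:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1      # one rotation relative to the stars, seconds
EARTH_RADIUS = 6_378_137.0  # Earth's equatorial radius, m

# Kepler's third law: T^2 = 4*pi^2*a^3/mu  =>  a = (mu*T^2 / (4*pi^2))^(1/3)
semi_major_axis = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (semi_major_axis - EARTH_RADIUS) / 1000

print(f"GEO altitude: {altitude_km:.0f} km")  # roughly 35,786 km
```

The result, about 35,786 km above the equator, is the "roughly 36,000 km" quoted above.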
Such orbits are, of course, great for communications relays—it’s what allows people to mount satellite-TV dishes on their houses in a fixed orientation. But GEO satellites are also appropriate when you want to monitor some region of Earth by capturing images over time. Because the satellites are so high up, the resolution of that imagery is quite coarse, however. So these orbits are primarily used for observation satellites designed to track changing weather conditions over broad areas.
Being stationary with respect to Earth means that GEO satellites are always within range of a downlink station, so they can send data back to Earth in minutes. This allows them to alert people to changes in weather patterns almost in real time. Most of this kind of data is made available for free by the U.S. National Oceanic and Atmospheric Administration.
In March 2021, the container ship Ever Given ran aground, blocking the Suez Canal for six days. This satellite image of the scene, obtained using synthetic-aperture radar, shows the kind of resolution that is possible with this technology.
The other option is LEO, which stands for low Earth orbit. Satellites placed in LEO are much closer to the ground, which allows them to obtain higher-resolution images. And the lower you can go, the better the resolution you can get. The company Planet, for example, increased the resolution of its recently completed satellite constellation, SkySat, from 72 centimeters per pixel to just 50 cm—an incredible feat—by lowering the orbits its satellites follow from 500 to 450 km and improving the image processing.
The best commercially available spatial resolution for optical imagery is 25 cm, which means that one pixel represents a 25-by-25-cm area on the ground—roughly the size of your laptop. A handful of companies capture data with 25-cm to 1-meter resolution, which is considered high to very high resolution in this industry. Some of these companies also offer data from 1- to 5-meter resolution, considered medium to high resolution. Finally, several government programs have made optical data available at 10-, 15-, 30-, and 250-meter resolutions for free with open data programs. These include NASA/U.S. Geological Survey Landsat, NASA MODIS (Moderate Resolution Imaging Spectroradiometer), and ESA Copernicus. This imagery is considered low resolution.
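The arithmetic linking resolution to ground coverage is simple: pixel size times pixel count gives a frame's footprint. A short sketch (the frame dimensions here are made up for illustration, not the specs of any particular satellite):

```python
def image_footprint_km2(gsd_m: float, width_px: int, height_px: int) -> float:
    """Ground area covered by one frame, in square kilometers,
    given the ground sample distance (GSD) in meters per pixel."""
    return (gsd_m * width_px) * (gsd_m * height_px) / 1e6

# At 25 cm per pixel, a hypothetical 20,000 x 20,000-pixel frame
# covers a 5 km x 5 km patch of ground:
print(image_footprint_km2(0.25, 20_000, 20_000))  # 25.0 km^2
```

This is why higher resolution trades off against coverage: halving the pixel size quarters the area a frame of the same dimensions can capture.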
Because the satellites that provide the highest-resolution images are in the lowest orbits, they sense less area at once. To cover the entire planet, a satellite can be placed in a polar orbit, which takes it from pole to pole. As it travels, Earth rotates under it, so on its next pass, it will be above a different part of Earth.
Many of these satellites don’t pass directly over the poles, though. Instead, they are placed in a near-polar orbit that has been specially designed to take advantage of a subtle bit of physics. You see, the spinning Earth bulges outward slightly at the equator. That extra mass causes the orbits of satellites that are not in polar orbits to shift or (technically speaking) to precess. Satellite operators often take advantage of this phenomenon to put a satellite in what’s called a sun-synchronous orbit. Such orbits allow the repeated passes of the satellite over a given spot to take place at the same time of day. Not having the pattern of shadows shift between passes helps the people using these images to detect changes.
It usually takes 24 hours for a satellite in polar orbit to survey the entire surface of Earth. To image the whole world more frequently, satellite companies use multiple satellites, all equipped with the same sensor and following different orbits. In this way, these companies can provide more frequently updated images of a given location. For example, Maxar’s Worldview Legion constellation, launching later this year, includes six satellites.
After a satellite captures some number of images, all that data needs to be sent down to Earth and processed. The time required for that varies.
DigitalGlobe (which Maxar acquired in 2017) recently announced that it had managed to send data from a satellite down to a ground station and then store it in the cloud in less than a minute. That was possible because the image sent back was of the parking lot of the ground station, so the satellite didn’t have to travel between the collection point and where it had to be to do the data “dumping,” as this process is called.
In general, Earth-observation satellites in LEO don’t capture imagery all the time—they do that only when they are above an area of special interest. That’s because these satellites are limited to how much data they can send at one time. Typically, they can transmit data for only 10 minutes or so before they get out of range of a ground station. And they cannot record more data than they’ll have time to dump.
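That recording limit is a straightforward budget: the sensor can capture only as many bits per orbit as one ground-station pass can transmit. A back-of-the-envelope sketch, with illustrative data rates that are assumptions rather than any real satellite's specs:

```python
def max_imaging_seconds(downlink_rate_mbps: float,
                        pass_seconds: float,
                        capture_rate_mbps: float) -> float:
    """Seconds of sensor recording that one ground-station
    pass can absorb before data would be lost."""
    budget_megabits = downlink_rate_mbps * pass_seconds
    return budget_megabits / capture_rate_mbps

# Made-up numbers: a 600 Mbit/s downlink over a 10-minute (600 s)
# pass, with the sensor producing 4,000 Mbit/s while recording.
print(max_imaging_seconds(600, 600, 4000))  # 90.0 seconds per orbit
```

Under those assumptions the satellite can record only about a minute and a half of imagery per orbit, which is why capture is reserved for areas of special interest.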
Currently, ground stations are located mostly near the poles, the most visited areas in polar orbits. But we can soon expect distances to the nearest ground station to shorten because both Amazon and Microsoft have announced intentions to build large networks of ground stations located all over the world. As it turns out, hosting the terabytes of satellite data that are collected daily is big business for these companies, which sell their cloud services (Amazon Web Services and Microsoft’s Azure) to satellite operators.
For now, if you are looking for imagery of an area far from a ground station, expect a significant delay—maybe hours—between capture and transmission of the data. The data will then have to be processed, which adds yet more time. The fastest providers currently make their data available within 48 hours of capture, but not all can manage that. While it is possible, under ideal weather conditions, for a commercial entity to request a new capture and get the data it needs delivered the same week, such quick turnaround times are still considered cutting edge.
I’ve been using the word “imagery,” but it’s important to note that satellites do not capture images the same way ordinary cameras do. The optical sensors in satellites are calibrated to measure reflectance over specific bands of the electromagnetic spectrum. This could mean they record how much red, green, and blue light is reflected from different parts of the ground. The satellite operator will then apply a variety of adjustments to correct colors, combine adjacent images, and account for parallax, forming what’s called a true-color composite image, which looks pretty much like what you would expect to get from a good camera floating high in the sky and pointed directly down.
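The band-combination step can be sketched in a few lines of NumPy. This is a simplified stand-in for an operator's real pipeline (no color correction, mosaicking, or parallax handling), assuming each band arrives as a 2-D reflectance array; the percentile stretch is one common, basic contrast adjustment:

```python
import numpy as np

def true_color_composite(red, green, blue, low=2, high=98):
    """Stack single-band reflectance arrays into an 8-bit RGB image,
    applying a simple percentile contrast stretch to each band."""
    bands = []
    for band in (red, green, blue):
        lo, hi = np.percentile(band, [low, high])
        stretched = np.clip((band - lo) / (hi - lo), 0, 1)
        bands.append((stretched * 255).astype(np.uint8))
    return np.dstack(bands)  # shape: (rows, cols, 3)

# Toy reflectance data standing in for real satellite bands:
rng = np.random.default_rng(0)
rgb = true_color_composite(*(rng.random((64, 64)) for _ in range(3)))
print(rgb.shape, rgb.dtype)  # (64, 64, 3) uint8
```

The result is an array in the layout ordinary image libraries expect, which is what makes the composite "look like" a conventional photograph.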
Imaging satellites can also capture data outside of the visible-light spectrum. The near-infrared band is widely used in agriculture, for example, because these images help farmers gauge the health of their crops. This band can also be used to detect soil moisture and a variety of other ground features that would otherwise be hard to determine.
Longer-wavelength “thermal” IR does a good job of penetrating smoke and picking up heat sources, making it useful for wildfire monitoring. And synthetic-aperture radar satellites, which I discuss in greater detail below, are becoming more common because the images they produce aren’t affected by clouds and don’t require the sun for illumination.
You might wonder whether aerial imagery, say, from a drone, wouldn’t work at least as well as satellite data. Sometimes it can. But for many situations, using satellites is the better strategy. Satellites can capture imagery over areas that would be difficult to access otherwise because of their remoteness, for example. Or there could be other sorts of accessibility issues: The area of interest could be in a conflict zone, on private land, or in another place that planes or drones cannot overfly.
So with satellites, organizations can easily monitor the changes taking place at various far-flung locations. Satellite imagery allows pipeline operators, for instance, to quickly identify incursions into their right-of-way zones. The company can then take steps to prevent a disastrous incident, such as someone puncturing a gas pipeline while construction is taking place nearby.
This SkySat image shows the effect of a devastating landslide that took place on 30 December 2020. Debris from that landslide destroyed buildings and killed 10 people in the Norwegian village of Ask.
The ability to compare archived imagery with recently acquired data has helped a variety of industries. For example, insurance companies sometimes use satellite data to detect fraudulent claims (“Looks like your house had a damaged roof when you bought it…”). And financial-investment firms use satellite imagery to evaluate such things as retailers’ future profits based on parking-lot fullness or to predict crop prices before farmers report their yields for the season.
Despite these many successes, investigative reporters and nongovernmental organizations aren’t yet using satellite data regularly, perhaps because even the small cost of the imagery is a deterrent. Thankfully, some kinds of low-resolution satellite data can be had for free.
The first places to look for free satellite imagery are the
Copernicus Open Access Hub and EarthExplorer. Both offer free access to a wide range of open data. The imagery is lower resolution than what you can purchase, but if the limited resolution meets your needs, why spend money?
If you require medium- or high-resolution data, you might be able to buy it directly from the relevant satellite operator. This field recently went through a period of mergers and acquisitions, leaving only a handful of providers, the big three in the West being
Maxar and Planet in the United States and Airbus in Germany. There are also a few large Asian providers, such as SI Imaging Services in South Korea and Twenty First Century Aerospace Technology in Singapore. Most providers have a commercial branch, but they primarily target government buyers. And they often require large minimum purchases, which is unhelpful to companies looking to monitor hundreds of locations or fewer.
Fortunately, approaching a satellite operator isn’t the only option. In the past five years, a cottage industry of consultants and local resellers with exclusive deals to service a certain market has sprung up. Aggregators and resellers spend years negotiating contracts with multiple providers so they can offer customers access to data sets at more attractive prices, sometimes for as little as a few dollars per image. Some companies providing geographic information systems—including
Esri, L3Harris, and Safe Software—have also negotiated reselling agreements with satellite-image providers.
Traditional resellers are middlemen who will connect you with a salesperson to discuss your needs, obtain quotes from providers on your behalf, and negotiate pricing and priority schedules for image capture and sometimes also for the processing of the data. This is the case for
Apollo Mapping, European Space Imaging, Geocento, LandInfo, Satellite Imaging Corp., and many more. The more innovative resellers will give you access to digital platforms where you can check whether an image you need is available from a certain archive and then order it. Examples include LandViewer from EOS and Image Hunter from Apollo Mapping.
More recently, a new crop of aggregators began offering customers the ability to programmatically access Earth-observation data sets. These companies work best for people looking to integrate such data into their own applications or workflows. These include the company I work for,
SkyWatch, which provides such a service, called EarthCache. Other examples are UP42 from Airbus and Sentinel Hub from Sinergise.
While you will still need to talk with a sales rep to activate your account—most often to verify you will use the data in ways that fit the company’s terms of service and licensing agreements—once you’ve been granted access to their applications, you will be able to programmatically order archive data from one or multiple providers. SkyWatch is, however, the only aggregator allowing users to programmatically request future data to be collected (“tasking a satellite”).
While satellite imagery is fantastically abundant and easy to access today, two changes are afoot that will expand further what you can do with satellite data: faster revisits and greater use of synthetic-aperture radar (SAR).
Satellite images have helped to reveal China’s treatment of its Muslim Uyghur minority. About a million Uyghurs (and other ethnic minorities) have been interned in prisons or camps like the one shown here [top], which lies to the east of the city of Ürümqi, the capital of China’s Xinjiang Uyghur Autonomous Region. Another satellite image [bottom] shows the characteristic oval shape of a fixed-chimney Bull’s trench kiln, a type widely used for manufacturing bricks in southern Asia. This one is located in Pakistan’s Punjab province. This design poses environmental concerns because of the sooty air pollution it generates, and such kilns have also been associated with human-rights abuses.
Top: CNES/Airbus/Google Earth; Bottom: Maxar Technologies/Google Earth
The first of these developments is not surprising. As more Earth-observation satellites are put into orbit, more images will be taken, more often. So how frequently a given area is imaged by a satellite will increase. Right now, that’s typically two or three times a week. Expect the revisit rate soon to become several times a day. This won’t entirely address the challenge of clouds obscuring what you want to view, but it will help.
The second development is more subtle. Data from the two satellites of the European Space Agency’s
Sentinel-1 SAR mission, available at no cost, has enabled companies to dabble in SAR over the last few years.
With SAR, the satellite beams radio waves down and measures the return signals bouncing off the surface. It does that continually, and clever processing is used to turn that data into images. The use of radio allows these satellites to see through clouds and to collect measurements day and night. Depending on the radar band that’s employed, SAR imagery can be used to judge material properties, moisture content, precise movements, and elevation.
As more companies get familiar with such data sets, there will no doubt be a growing demand for satellite SAR imagery, which has been widely used by the military since the 1970s. But it’s just now starting to appear in commercial products. You can expect those offerings to grow dramatically, though.
Indeed, a large portion of the money being invested in this industry is currently going to fund large SAR constellations, including those of
Capella Space, Iceye, Synspective, XpressSAR, and others. The market is going to get crowded fast, which is great news for customers. It means they will be able to obtain high-resolution SAR images of the place they’re interested in, taken every hour (or less), day or night, cloudy or clear.
People will no doubt figure out wonderful new ways to employ this information, so the more folks who have access to it, the better. This is something my colleagues at SkyWatch and I deeply believe, and it’s why we’ve made it our mission to help democratize access to satellite imagery.
One day in the not-so-distant future, Earth-observation satellite data might become as ubiquitous as GPS, another satellite technology first used only by the military. Imagine, for example, being able to take out your phone and say something like, “Show me this morning’s soil-moisture map for Grover’s Corners High; I want to see whether the baseball fields are still soggy.”
This article appears in the March 2022 print issue as “A Boom with a View.”
Editor's note: The original version of this article incorrectly stated that Maxar's Worldview Legion constellation launched last year.
For sheer drama and resonance, few tech breakthroughs can match the invention of the neodymium-iron-boron permanent magnet in the early 1980s. It’s one of the great stories of corporate intrigue: General Motors in the United States and Sumitomo in Japan independently conceived the technology and then worked in secret, each racing to commercialize it without even being aware of the other’s efforts. The two project leaders—Masato Sagawa of Sumitomo and John Croat of GM—surprised each other by announcing their results at the same conference in Pittsburgh in 1983.
Up for grabs was a market potentially worth billions of dollars. The best permanent magnets at the time, samarium-cobalt, were strong and reliable but expensive. They were used in electric motors, generators, audio speakers, hard-disk drives, and other high-volume products. Today, some
95 percent of permanent magnets are neodymium-iron-boron. The global market for these magnets is expected to reach US $20 billion a year within a couple of years, as the automobile industry shifts toward electric vehicles and as utilities turn increasingly to wind turbines to meet growing demand.
IEEE recently honored Sagawa and Croat by awarding them its Medal for Environmental and Safety Technologies at the 2022 Vision, Innovation, and Challenges Summit. IEEE Spectrum spoke with the two inventors, including an hourlong interview with both of them (only the second time the two have been interviewed together). They revealed their reasons for zeroing in on the rare-earth element neodymium, the major challenges they faced in making a commercial magnet out of it, the extraordinary intellectual-property deal that allowed both GM and Sumitomo to market their magnets worldwide, and their opinions on whether there will ever be a successful permanent magnet that does not use rare-earth elements.
You were trying to make a cheaper magnet, as I understand it. You weren’t even necessarily trying to make a stronger one, although that turned out to be the case. What made you think you could make a cheaper magnet?
John Croat: Well, the problem with samarium-cobalt…they were an excellent magnet. They had good temperature properties. You’ve probably heard the phrase that rare earths aren’t really that rare, but samarium is one of the more rare ones. It constitutes only about 0.8 percent of the composition of the ores that are typically exploited today for rare earths. So it was a fairly expensive rare earth. And, of course, cobalt was very expensive. During my early years at General Motors Research Labs, there was a war in Zaire, in Central Africa [now known as the Democratic Republic of the Congo], which is a big cobalt supplier. And the price of cobalt went up to something like $45 a kilogram. Remember, this was in the 1970s, so it basically stopped our research on samarium-cobalt magnets.
Masato, what do you remember? What do you recall of the state of the permanent-magnet market and technology in the 1970s in Japan?
Masato Sagawa: I joined Fujitsu in 1972, so around the same time as John. The company assigned me to improve the samarium-cobalt magnet, specifically its mechanical strength. But I wondered why there was no iron compound. Iron is much cheaper and much more available than cobalt, and iron has a higher magnetic moment than cobalt. So I thought that if I could produce rare-earth iron magnets, I would have higher magnetic strength and much lower cost. I wanted to research the rare-earth iron compounds, but that was not an official subject at Fujitsu. So I worked hard on samarium-cobalt, and I succeeded in developing a samarium-cobalt magnet with high strength. Then I asked the company to let me work on a rare-earth iron permanent magnet, but I was not allowed. But I had an idea: rare earth, iron, and a small amount of an additive element like carbon or boron, which are known to have very small atomic diameters. I studied rare-earth iron boron and rare-earth iron carbon. So, underground, I did this research for several years. And I reached neodymium-iron-boron several years later, in 1982.
What was it that made you focus on neodymium, iron, and boron? Why those?
Croat: Well, of course, when samarium-cobalt magnets were developed, everyone in this field thought about developing a rare-earth iron magnet, because iron is virtually free compared to cobalt. Now, in terms of the rare earths, as I said, rare earths are not really that rare. The light rare earths (lanthanum, cerium, praseodymium, and neodymium) constitute about 90 percent of the composition of a typical rare-earth deposit…. So both Dr. Sagawa and I realized at the start that if we wanted to make an economically viable magnet, we had to make it from one of these four rare earths: lanthanum, cerium, neodymium, or praseodymium. The problem with lanthanum and cerium: as you know, the lanthanide series is formed by filling the 4f electron shell. But lanthanum and cerium, the two most abundant rare earths, have no 4f electrons. And we knew by this time, based on the work with samarium-cobalt magnets, that those 4f electrons are one of the things you have to have to give the material its coercivity.
What exactly is coercivity?
Croat: Coercivity is the resistance to demagnetization. In a permanent magnet, as you say, the moments are all aligned parallel. If you apply a magnetic field in the reverse direction, the coercivity resists the magnet flipping into the opposite direction.
We knew that we wanted iron instead of cobalt…. And both of us set out with the intention of making a rare-earth iron permanent magnet from neodymium or praseodymium. The problem was that there were no intermetallic compounds available. Unlike the rare-earth cobalt phase diagrams, which contain lots of interesting intermetallic compounds, the rare-earth iron phase diagrams do not contain usable intermetallic compounds.
In plain language, what is an intermetallic phase, and why is it important?
Croat: An intermetallic compound, or intermetallic phase, is a phase with a fixed ratio of the components. Terbium iron two, for example, has one terbium for every two irons, and the atoms sit in very specific sites on the crystal lattice. You have to have that. That’s one of the quintessential requirements for any rare-earth transition-metal permanent magnet.
It provides the structure and stability you need or the reproducibility?
Croat: All of that. In other words, it’s the thing that holds the magnetic moment in place in the structure. You have to have this crystal structure.
So what was the solution?
Croat: The fact that there was no intermetallic compound was a baffling problem for some time. But then, in 1976, a couple of colleagues and I saw a paper by Art Clark, who was working at the Naval Surface Weapons Laboratory. He had taken a sputtered sample of terbium iron two [TbFe2] and annealed it at increasingly higher temperatures. At about 350 °C, the coercivity shot up to about 3.5 kilooersteds. And we surmised, I think correctly, that a metastable phase had formed during the crystallization process. This was exciting because it was the first time anyone had ever developed coercivity in a rare-earth iron material. It was also exciting because TbFe2 is a cubic material, and a cubic material should not develop coercivity. You have to have a uniaxial crystal lattice: hexagonal, rhombohedral, or tetragonal.
And so I started out with that thesis: to create magnetically hard metastable phases that are practical for permanent magnets. Using rapid solidification, I started making melt-spun materials and crystallizing them. And it worked very well; I developed very high coercivities right away. The problem with these materials was that they were all unstable. If I heated them to about 450 °C, they would decompose into their equilibrium structure, and the coercivity would go away. So I began to add things to see if I could make them more stable. One of the things I added was boron. And one day I found that when I heated a sample containing boron, it did not decompose into its equilibrium structure. So I knew that I had discovered a ternary neodymium-iron-boron intermetallic phase, a very interesting, technically important one. And it turns out that Masato discovered the same one [laughter].
Sagawa-san, you mentioned that you were interested in a sintering process, which was similar to the process that was then being used to manufacture samarium-cobalt magnets.... When you were working on a way to make neodymium-iron-boron magnets using sintering, did you encounter specific challenges that were difficult, that took a lot of effort to solve?
Sagawa: I was not able to give coercivity to the neodymium-iron-boron alloy, and I tried many processes. But sintering is a good process, because to give coercivity to the alloy you have to create a cellular structure in it. Sintering is a very good way to produce that structure: first you make single-crystal powder, then you align the powder, and then you sinter it. During sintering, the cellular structure forms automatically.
So I tried to form the cellular structure. I tested many, many kinds of additive elements, starting from copper, which is used in samarium-cobalt magnets, and working through almost the whole periodic table. But I was not able to produce coercivity with any of those additives. At last I found a good additive. It was not another element: it was neodymium itself. Additional neodymium gives rise to the cellular structure, forming a grain-boundary region around the neodymium-iron-boron grains. So I succeeded in giving coercivity to neodymium-iron-boron by sintering with a neodymium-rich composition. And I succeeded in developing a neodymium-iron-boron sintered magnet with a record-high BH maximum [a measure of the maximum magnetic energy that can be stored in a magnet]. It was in 1982.
This work is mostly happening in the late 1970s, early 1980s. You’re both working on almost the same problem on different sides of the world. Sagawa-san, when did you first find out that General Motors was also working on the same challenge that you were working on?
Masato Sagawa of Sumitomo [left] announced the invention of a revolutionary neodymium-iron-boron permanent magnet at a conference in Pittsburgh, in November 1983. At the same meeting, John Croat of General Motors announced the invention of a magnet using the exact same elements.
Photo: Masato Sagawa
Sagawa: It was when I made the first presentation at the MMM Conference, Magnetism and Magnetic Material Conference, held in Pittsburgh in 1983.
Croat: November 1983.
Sagawa: November 1983. At the same conference, John Croat and his group presented a paper on the same neodymium-iron-boron alloy magnets.
So for years, you had both been attacking the same problem. And you both found out about the other effort at the same conference in Pittsburgh in 1983?
That’s astounding. Did you talk to each other at that conference? Did you get together and say anything to each other?
Croat: I think we introduced ourselves to each other, but I don’t remember much more than that.
What do you recall, Sagawa-san? Do you recall any conversation with John at that meeting?
Sagawa: I remember that I saw John, but I don’t remember if we talked together or not.
Croat: I think it would have been logical if we did, but I cannot remember it. We probably considered ourselves competitors [laughter].
You both came up with independent means of manufacturing: General Motors developed a technique called melt spinning, and Sumitomo a sintering process. The resulting magnets had different characteristics. The sintered magnets seem to have more structural strength and resilience; the GM magnets can be produced more inexpensively. Both found large markets, in somewhat different applications. John, why don’t you take a crack at explaining what their market niches became, and remain to this day?
Croat: Yes. The rapidly solidified materials are isotropic. And during the rapid solidification process, you form a magnetic powder. That powder is blended with an epoxy and made into a magnet. But it turned out that these magnets were ideal for making small ring magnets that go into micromotors like spindle motors for hard-disk drives or CD-ROMs or for stepper motors. So that has—
Croat: For robots and things of that nature: servo motors for robots, but also spindle and stepper motors for various applications. That has been the primary market for these bonded magnets, because making a thin-wall ring magnet by the sintering process is very difficult; they tend to crack and break apart. The sintered-magnet market, in contrast, is much bigger than the bonded-magnet market, and sintered magnets are used primarily in bigger motors, wind-turbine generators, and MRIs. Most electric-vehicle motors use sintered magnets. So again, most of the market is motors, but these are two distinctly different markets.
Sagawa: I think one of the most important applications of the neodymium-iron-boron magnet is the hard-disk drive. If the neodymium-iron-boron magnet had not been found, it would have been difficult to miniaturize the hard-disk drive. Before its appearance, the hard-disk drive was very big, difficult for one person to lift, 10 or 20 kilograms or so. Now it is very small. And this is because of the invention of the neodymium-iron-boron sintered magnet, which is used in the actuator motor. The bonded neodymium magnet is used in the spindle motor that rotates the hard disk. This was a very important invention for the start of our IT society.
Hard-disk drives contain several neodymium permanent magnets. There’s one in the spindle motor that rotates the disk, and typically two others in the read-write arm, also known as the actuator arm (the triangular structure in the photo), which reads and writes data on the disk.
Photo: Getty Images
You had little or no contact with each other until this meeting in Pittsburgh in 1983, by which time you’d already established all your intellectual property. And yet there was a long-running—well, not that long-running, but a patent case between General Motors and Sumitomo. John, can you start off and tell us a little bit about what happened there?
Croat: Yes. I guess we didn’t mention it, but both Sumitomo and General Motors filed patents shortly after the invention of this material, in early 1982, apparently within weeks of each other. But because of the way patent law is written, General Motors ended up with the patents on the neodymium-iron-boron composition in North America, and Sumitomo ended up with the patents in Japan and Europe. This meant that neither company could market worldwide, and they had to market worldwide to be economically viable. So they had a dispute, of course. I don’t know if they actually sued each other. But anyway, they had a negotiation, and I remember being part of it. We ended up with a cross-licensing agreement that allowed both companies to manufacture and market the material worldwide.
But you could only manufacture and market your type of material, which, in your case, was this melt-spun, rapid—
Croat: Solidification, melt-spinning.
Solidification. And Sumitomo had the sintering worldwide, North America, Asia, Europe, everywhere.
Croat: It turned out it was based on the particle size of the material. Sumitomo had the rights to manufacture magnets with a particle size greater than one micron, General Motors less than one micron.
Right now, of course, there’s a lot of controversy over the fact that an enormous amount of the world’s market for rare-earth elements is controlled by China, the mining, the production, and so on. So many countries, particularly in Europe and North America, are looking to broaden their base of suppliers for rare earths. But at the same time, there’s this existing market for these magnets. So is this having an effect of any kind on the future directions of R&D in permanent magnets?
Croat: I am no longer close enough to the R&D to know what’s going on, but I think there has been no change. People are still interested in making permanent magnets primarily containing a rare earth.
I don’t see how they’re ever going to get the rare earth out of a rare-earth transition-metal magnet and still make a good high-performance magnet. So the rare-earth supply problem is going to continue and may even grow as the market for these magnets grows. I think the only way to overcome it is for Japan, Korea, Western Europe, and North America to get some kind of government help to establish a rare-earth market outside of [China]. A lot of countries have rare earths. India, for example, has rare earths; so do Australia and Canada. The United States, of course, has several big deposits. But what happened was that back in the 1990s the Chinese reduced the price to the point where they drove everybody else out of business. So somehow, some political will has to be put forth to change the dynamics of the rare-earth market today.
Sagawa: I think it’s impossible to produce a high-grade magnet without rare earths. That was concluded recently. There was very active research on an iron-nickel compound that seemed promising; it has high saturation magnetization and a very high anisotropy field. But recent research in Japan concluded that it’s impossible to produce a high-performance permanent magnet from this iron-nickel compound. And that was the last research subject on rare-earth-free compounds consisting only of the 3D [orbital] -electron elements iron, cobalt, and nickel.
Engineers designing communications products need access to information—the latest research, lists of parts and components, and technical standards to help ensure that their design will work seamlessly with others. But tracking down resources across multiple websites can be time-consuming, and the material might not be relevant or the sources could be questionable.
The new IEEE DiscoveryPoint for Communications platform aims to solve those problems by providing one-stop access to searchable, curated content from trusted sources on just about any telecommunications topic. Its library contains more than 1 million full-text research documents; 10,000 technical standards; 8,000 online courses; 400 ebook titles; 18 million parts and various solutions from manufacturers and distributors; and 1,100 industry and product news bulletins, blogs, and white papers.
“There’s nothing on the market right now that fully supports the workflow of the design engineer and that delivers all the information needed in one place,” says Mark Barragry, senior product manager for corporate markets at IEEE Global Products and Marketing.
In designing IEEE DiscoveryPoint, Barragry says, “We reconstructed the work process of a product design engineer and put together a set of resources that meet all the information needs they would have during a standard product-development cycle.”
IEEE has a wealth of content for telecom designers, Barragry says. IEEE publishes nine of the 10 most-cited journals in telecommunications. More than 40 percent of U.S. patents related to telecommunications cite an IEEE publication. The organization also sponsors more than 7,000 conferences that focus on communications, networking, and broadcast technologies. And the IEEE Standards Association has developed more than 900 standards related to communications, including the popular IEEE 802.11 WiFi standard.
Barragry adds that design engineers who tested the platform before launch said they liked that it came from IEEE, a trusted source.
The subscription-based product’s intuitive search engine saves users time because it zeroes in on key concepts related to the topic they’re searching for. To get started, the user types a word, phrase, concept, the name of an author or company, or another term into the search bar. The search engine’s ranking algorithm analyzes the full text and the metadata of the documents to find relevant material.
The results are organized into channels and categorized by type of material, such as research papers, standards, books, or industry news. For each search result, a machine-learning feature examines the document and generates a short summary of key points, which get highlighted in the document.
Search results can be sorted by relevance or by time period, starting with the previous 90 days and going as far back as 10 years for journals and five years for conferences. The results also can be grouped, for example, by a publication’s name. Searches can be saved, and users can bookmark documents.
IEEE DiscoveryPoint also recommends content based on an automated analysis of the user’s reading activity during the previous 30 days. Users can set up email alerts for new content that fits their search criteria.
In one testimonial about IEEE DiscoveryPoint, a director of technology development said, “I really appreciated the thought that went into this product. It’s an unmet need for people like me.”
The subscription price is based on the size of the organization and how many engineers and technical professionals will be using it. To request a demo, complete this form.
In August, NASA will launch the Psyche mission, sending a deep-space orbiter to a weird metal asteroid orbiting between Mars and Jupiter. While the probe’s main purpose is to study Psyche’s origins, it will also carry an experiment that could inform the future of deep-space communications. The Deep Space Optical Communications (DSOC) experiment will test whether lasers can transmit signals beyond lunar orbit. Optical signals, such as those used in undersea fiber-optic cables, can carry more data than radio signals can, but their use in space has been hampered by difficulties in aiming the beams accurately over long distances. DSOC will use a 4-watt infrared laser with a wavelength of 1,550 nanometers (the same used in many optical fibers) to send optical signals at multiple distances during Psyche’s outward journey to the asteroid.
The Great Electric Plane Race
For the first time in almost a century, the U.S.-based National Aeronautic Association (NAA) will host a cross-country aircraft race. Unlike the national air races of the 1920s, however, the Pulitzer Electric Aircraft Race, scheduled for 19 May, will include only electric-propulsion aircraft. Both fixed-wing craft and helicopters are eligible. The competition will be limited to 25 contestants, and each aircraft must have an onboard pilot. The course will start in Omaha and end four days later in Manteo, N.C., near the site of the Wright brothers’ first flight. The NAA has stated that the goal of the cross-country, multiday race is to force competitors to confront logistical problems that still plague electric aircraft, like range, battery charging, reliability, and speed.
6-Gigahertz Wi-Fi Goes Mainstream
Wi-Fi is getting a boost with 1,200 megahertz of new spectrum in the 6-gigahertz band, adding a third spectrum band to the more familiar 2.4 GHz and 5 GHz. The new band is called Wi-Fi 6E because it extends Wi-Fi’s capabilities into the 6-GHz band. As a rule, higher radio frequencies have higher data capacity, but a shorter range. With its higher frequencies, 6-GHz Wi-Fi is expected to find use in heavy traffic environments like offices and public hotspots. The Wi-Fi Alliance introduced a Wi-Fi 6E certification program in January 2021, and the first trickle of 6E routers appeared by the end of the year. In 2022, expect to see a bonanza of Wi-Fi 6E–enabled smartphones.
3-Nanometer Chips Arrive
Taiwan Semiconductor Manufacturing Co. (TSMC) plans to begin producing 3-nanometer semiconductor chips in the second half of 2022. Right now, 5-nm chips are the standard. TSMC will make its 3-nm chips using a tried-and-true semiconductor structure called the FinFET (short for “fin field-effect transistor”). Meanwhile, Samsung and Intel are moving to a different technique for 3 nm called nanosheet. (TSMC is eventually planning to abandon FinFETs.) At one point, TSMC’s sole 3-nm chip customer for 2022 was Apple, for the latter’s iPhone 14, but supply-chain issues have made it less certain that TSMC will be able to produce enough chips—which promise more design flexibility—to fulfill even that order.
Seoul Joins the Metaverse
After Facebook (now Meta) announced it was hell-bent on making the metaverse real, a host of other tech companies followed suit. Definitions differ, but the basic idea of the metaverse involves merging virtual reality and augmented reality with actual reality. Also jumping on the metaverse bandwagon is the government of the South Korean capital, Seoul, which plans to develop a “metaverse platform” by the end of 2022. To build this first public metaverse, Seoul will invest 3.9 billion won (US $3.3 million). The platform will offer public services and cultural events, beginning with the Metaverse 120 Center, a virtual-reality portal for citizens to address concerns that previously required a trip to city hall. Other planned projects include virtual exhibition halls for school courses and a digital representation of Deoksu Palace. The city expects the project to be complete by 2026.
IBM’s Condors Take Flight
In 2022, IBM will debut a new quantum processor—its biggest yet—as a stepping-stone to a 1,000-qubit processor by the end of 2023. This year’s iteration will contain 433 qubits, more than three times as many as the company’s 127-qubit Eagle processor, which was launched last year. Following the bird theme, the 433-qubit processor will be named Osprey, and the 1,000-qubit processor Condor. There have been quantum computers with many more qubits; D-Wave Systems, for example, announced a 5,000-qubit computer in 2020. However, D-Wave’s computers are specialized machines for optimization problems. IBM’s Condor aims to be the largest general-purpose quantum processor.
New Dark-Matter Detector
The Forward Search Experiment (FASER) at CERN is slated to switch on in July 2022. The exact date depends on when the Large Hadron Collider renews proton-proton collisions after three years of upgrades and maintenance. FASER will begin a hunt for dark matter and other particles that interact extremely weakly with “normal” matter. CERN, the fundamental physics research center near Geneva, has four main detectors attached to its Large Hadron Collider, but they aren’t well-suited to detecting dark matter. FASER won’t attempt to detect the particles directly; instead, it will search for the more strongly interacting Standard Model particles created when dark matter interacts with something else. The new detector was constructed while the collider was shut down from 2018 to 2021. Located 480 meters “downstream” of the ATLAS detector, FASER will also hunt for neutrinos produced in huge quantities by particle collisions in the LHC loop. The other CERN detectors have so far failed to detect such neutrinos.
Pong Turns 50
Atari changed the course of video games when it released its first game, Pong, in 1972. While not the first video game—or even the first to be presented in an upright, arcade-style cabinet—Pong was the first to be commercially successful. The game was developed by engineer Allan Alcorn, who was originally assigned it as a practice exercise after he was hired, before he began working on actual projects. However, executives at Atari saw potential in Pong’s simple gameplay and decided to develop it into a real product. Unlike the countless video games that came after it, the original Pong did not use any code or microprocessors. Instead, it was built from a television and transistor-transistor logic.
The Green Hydrogen Boom
Utility company Energias de Portugal (EDP), based in Lisbon, is on track to begin operating a 3-megawatt green hydrogen plant in Brazil by the end of the year. Green hydrogen is hydrogen produced in sustainable ways, using solar or wind-powered electrolyzers to split water molecules into hydrogen and oxygen. According to the International Energy Agency, only 0.1 percent of hydrogen is produced this way. The plant will replace an existing coal-fired plant and generate hydrogen—which can be used in fuel cells—using solar photovoltaics. EDP’s roughly US $7.9 million pilot program is just the tip of the green hydrogen iceberg. Enegix Energy has announced plans for a $5.4 billion green hydrogen plant in the same Brazilian state, Ceará, where the EDP plant is being built. The green hydrogen market is predicted to generate revenue of nearly $10 billion by 2028, according to a November 2021 report by Research Dive.
A Permanent Space Station for China
China is scheduled to complete its Tiangong (“Heavenly Palace”) space station in 2022. The station, China’s first long-term space habitat, was preceded by the Tiangong-1 and Tiangong-2 stations, which orbited from 2011 to 2018 and 2016 to 2019, respectively. The new station’s core module, the Tianhe, was launched in April 2021. A further 10 missions by the end of 2022 will deliver other components and modules, with construction to be completed in orbit. The final station will have two laboratory modules in addition to the core module. Tiangong will orbit at roughly the same altitude as the International Space Station but will be only about one-fifth the mass of the ISS.
A Cool Form of Energy Storage
Cryogenic energy-storage company Highview Power will begin operations at its Carrington plant near Manchester, England, this year. Cryogenic energy storage is a long-term method of storing electricity by cooling air until it liquefies (about –196 °C). Crucially, the air is cooled when electricity is cheaper—at night, for example—and then stored until electricity demand peaks. The liquid air is then allowed to boil back into a gas, which drives a turbine to generate electricity. The 50-megawatt/250-megawatt-hour Carrington plant will be Highview Power’s first commercial plant using its cryogenic storage technology, dubbed CRYOBattery. Highview Power has said it plans to build a similar plant in Vermont, although it has not specified a timeline yet.
Seattle-based startup Nori is set to offer a cryptocurrency for carbon removal. Nori will mint 500 million tokens of its Ethereum-based currency (called NORI). Individuals and companies can purchase and trade NORI, and eventually exchange any NORI they own for an equal number of carbon credits. Each carbon credit represents a tonne of carbon dioxide that has already been removed from the atmosphere and stored in the ground. When exchanged in this way, a NORI is retired, making it impossible for owners to “double count” carbon credits and thereby claim to be offsetting more carbon than they actually have. The startup has acknowledged that Ethereum and other blockchain-based technologies consume an enormous amount of energy, so the carbon it sequesters could conceivably originate in cryptocurrency mining. However, Ethereum is scheduled in 2022 to switch to a much more energy-efficient method of verifying its blockchain, called proof of stake, which Nori will take advantage of when it launches.
Non-fungible tokens (NFTs) are among the most popular digital assets today, capturing the attention of cryptocurrency investors, whales, and people around the world. Many find it baffling that some users spend thousands or even millions of dollars on a single NFT-based image of a monkey when anyone can simply take a screenshot for free. So here we answer some frequently asked questions about NFTs.
1) What is an NFT?
NFT stands for non-fungible token: a cryptographic token on a blockchain with a unique identification code that distinguishes it from every other token. NFTs are unique and not interchangeable, which means no two NFTs are the same. An NFT can be a unique artwork, GIF, image, video, audio album, in-game item, collectible, and so on.
2) What is Blockchain?
A blockchain is a distributed digital ledger that allows for the secure storage of data. By recording any kind of information—such as bank account transactions, the ownership of Non-Fungible Tokens (NFTs), or Decentralized Finance (DeFi) smart contracts—in one place, and distributing it to many different computers, blockchains ensure that data can’t be manipulated without everyone in the system being aware.
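The tamper-evidence described above can be seen in a toy example. This is an illustrative sketch, not how Ethereum or any production blockchain actually works (real chains add consensus, digital signatures, and proof of work or stake): each block stores the hash of the previous block, so altering any historical record invalidates every later link.

```python
import hashlib
import json

def block_hash(block):
    # Hash a block's contents using a deterministic JSON serialization.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    # Build a toy chain: each block records its predecessor's hash.
    chain, prev = [], "0" * 64
    for record in records:
        block = {"record": record, "prev_hash": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def is_valid(chain):
    # Verify every link: block i+1 must store the hash of block i.
    for earlier, later in zip(chain, chain[1:]):
        if later["prev_hash"] != block_hash(earlier):
            return False
    return True

chain = make_chain(["alice pays bob 5", "bob mints NFT #1", "bob sells NFT #1 to carol"])
assert is_valid(chain)

# Tampering with an early record breaks every later link.
chain[0]["record"] = "alice pays bob 500"
assert not is_valid(chain)
```

Distributing copies of such a chain to many computers is what lets everyone detect the tampering: the altered copy no longer matches the hashes the rest of the network holds.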
3) What makes an NFT valuable?
The value of an NFT comes from its ability to be traded freely and securely on the blockchain, which is not possible with other current digital-ownership solutions. The NFT points to its location on the blockchain but doesn’t necessarily contain the digital property itself. The “non-fungible” part is about interchangeability: if you replace one bitcoin with another, you still have the same thing, but a non-fungible item, such as a movie ticket, cannot be replaced by just any other movie ticket, because each ticket is unique to a specific time and place.
4) How do NFTs work?
One of the defining characteristics of non-fungible tokens (NFTs) is that each one acts as a digital certificate of ownership that can be bought, sold, and traded on the blockchain.
As with cryptocurrency, records of who owns what are stored on a ledger that is maintained by thousands of computers around the world. These records can’t be forged, because every participant in the distributed network holds a copy of the same ledger.
NFTs can also contain smart contracts—small computer programs that run on the blockchain—that give the artist, for example, a cut of any future sale of the token.
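To give a rough sense of how such a royalty clause behaves, here is a plain-Python simulation of the arithmetic. Real smart contracts are written in on-chain languages such as Solidity; the 10 percent royalty (expressed in basis points) and the wallet names below are arbitrary assumptions for illustration:

```python
def settle_sale(sale_price_cents, artist, seller, royalty_bps=1000):
    # Split a resale: the artist takes a fixed cut, the seller gets the rest.
    # Basis points (1000 bps = 10%) and integer math avoid floating-point
    # rounding, much as on-chain code works in whole token units.
    royalty = sale_price_cents * royalty_bps // 10_000
    return {artist: royalty, seller: sale_price_cents - royalty}

# A $2,000.00 resale: the artist automatically receives $200.00.
payouts = settle_sale(200_000, artist="artist_wallet", seller="owner_wallet")
assert payouts == {"artist_wallet": 20_000, "owner_wallet": 180_000}
```

The point of putting this logic in a smart contract rather than a script like this one is that no marketplace or seller can skip the payout: the split executes automatically whenever the token changes hands.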
5) What’s the connection between NFTs and cryptocurrency?
Non-fungible tokens (NFTs) aren't cryptocurrencies, but they do use blockchain technology. Many NFTs are based on Ethereum, where the blockchain serves as a ledger for all the transactions related to said NFT and the properties it represents.5) How to make an NFT?
Anyone can create an NFT. All you need is a digital wallet, some ethereum tokens and a connection to an NFT marketplace where you’ll be able to upload and sell your creations
7) How do I validate the authenticity of an NFT?
When you purchase an NFT, that purchase is recorded on the blockchain—the public ledger of transactions—and that entry acts as your proof of ownership.
8) How is an NFT valued? What are the most expensive NFTs?
The value of an NFT varies a lot based on the digital asset up for grabs. People use NFTs to trade and sell digital art, so when creating an NFT, you should consider the popularity of your digital artwork along with historical statistics.
In 2021, the digital artist Pak created an artwork called The Merge. It was sold on the Nifty Gateway NFT marketplace for $91.8 million.
9) Can NFTs be used as an investment?
Non-fungible tokens can be used as investments. One can purchase an NFT and resell it at a profit. Certain NFT marketplaces also let creators keep a percentage of the proceeds each time the assets they created are resold.
10) Will NFTs be the future of art and collectibles?
Many people want to buy NFTs because it lets them support the arts and own something cool from their favorite musicians, brands, and celebrities. NFTs also give artists an opportunity to program in continual royalties if someone buys their work. Galleries see this as a way to reach new buyers interested in art.
11) How do I buy an NFT?
There are many places to buy digital assets, such as OpenSea, and their policies vary. On NBA Top Shot, for instance, you sign up for a waitlist that can be thousands of people long; when a digital asset goes on sale, buyers are occasionally selected from the waitlist to purchase it.
12) Can I mint an NFT for free?
To mint an NFT, you must pay a gas fee to process the transaction on the Ethereum blockchain, but you can mint your NFT on a different blockchain, Polygon, to avoid paying gas fees. This option is available on OpenSea; it simply means that your NFT can only be traded on Polygon's blockchain, not Ethereum's. Mintable also allows you to mint NFTs without paying any gas fees.
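Conceptually, minting just registers a new token against a wallet and pays whatever fee the chosen chain charges. A hedged toy sketch in Python (the `GAS_FEES` numbers are invented placeholders, not real network fees, and `mint_nft` is a made-up name):

```python
# Illustrative placeholder fees in ETH-equivalents -- NOT real network prices.
GAS_FEES = {"ethereum": 0.003, "polygon": 0.0}

def mint_nft(ledger: dict, token_id: str, wallet: str, chain: str) -> float:
    """Record a new token against a wallet and return the fee paid."""
    if token_id in ledger:
        raise ValueError("token already minted")
    ledger[token_id] = {"owner": wallet, "chain": chain}
    return GAS_FEES[chain]

ledger = {}
fee = mint_nft(ledger, "art-001", "0xABC", "polygon")
# fee == 0.0 in this toy model: minting on Polygon avoids the gas fee
```

The trade-off noted above still applies: a token minted this way lives on Polygon's ledger, so it can only be traded there.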
13) Do I own an NFT if I screenshot it?
The answer is no. Non-fungible tokens are minted on blockchains such as Ethereum, Solana, and Polygon. Once a token is minted, the transaction is recorded on the blockchain, and the contract or license is awarded to whoever holds that token in their wallet.
14) Why are people investing so much in NFTs?
Non-fungible tokens have won the hearts of people around the world and given digital creators the recognition they deserve. One of the remarkable things about non-fungible tokens is that you can take a screenshot of one, yet still not own it: when a token is created, the transaction is stored on the blockchain, and the license or contract to hold it belongs to whoever has the token in their digital wallet.
You can sell your work and creations by attaching a license to them on the blockchain, where ownership can be transferred. This lets you get exposure without losing full ownership of your work. Some of the most successful projects include CryptoPunks, Bored Ape Yacht Club, The Sandbox, and World of Women. These NFT projects have gained popularity globally and are owned by celebrities and other successful entrepreneurs. Owning one of these NFTs gives you an automatic ticket to exclusive business meetings and life-changing connections.
That’s a wrap. I hope you found this article enlightening; I’ve answered these questions to the best of my limited knowledge about NFTs. If you have any questions or suggestions, feel free to drop them in the comment section below. I also have a question for you: is Bitcoin an NFT? Let me know in the comment section below.
Match ID: 147 Score: 7.14 source: www.crunchhype.com age: 139 days qualifiers: 3.57 trade, 3.57 google
We’ve always known that phones—and the people carrying them—can be uniquely identified from their Bluetooth signatures, and that we need security techniques to prevent that. This new research shows that that’s not enough.
Computer scientists at the University of California San Diego proved in a study published May 24 that minute imperfections in phones caused during manufacturing create a unique Bluetooth beacon, one that establishes a digital signature or fingerprint distinct from any other device. Though phones’ Bluetooth uses cryptographic technology that limits trackability, using a radio receiver, these distortions in the Bluetooth signal can be discerned to track individual devices...
Match ID: 149 Score: 6.43 source: www.schneier.com age: 8 days qualifiers: 2.14 musk, 1.43 tesla, 1.43 california, 1.43 apple
In one corner stood the defending champion, Texas Instruments. In the other stood the challenger, Fairchild Semiconductor. The referee, judge, promoter, and only spectator was Polaroid. In contention was the contract for the electronics of Polaroid’s secret project—a pioneering product introduced in 1972 as the SX-70, a camera eventually purchased by millions of people.
As the embodiment of truly automated instant photography, the SX-70 fulfilled a long-held dream of Edwin Land, founder of Polaroid Corp., Cambridge, Mass. Vital to this “point and shoot” capability was a new film—one that would develop while exposed to light and so eliminate the tear-away covers of previous Polaroid films. Also vital were sophisticated electronics to control all single lens reflex (SLR) camera functions, including flashbulb selection, exposure control, mirror positioning, start of print development, and ejection of print. These circuits were divided into three modules, one each for motor, exposure and logic, and flash control. At the final count, some 400 transistors were used.
This article was first published as "The battle for the SX-70." It appeared in the May 1989 issue of IEEE Spectrum. A PDF version is available on IEEE Xplore. The diagrams and photographs appeared in the original print version.
Yet this complicated system had to fit in a package the size of Land’s jacket pocket, he decreed—a constraint that meant employing ICs. But as Polaroid could not fabricate ICs, the success of its SX-70 project lay in the hands of outsiders.
The flash control contract was given to General Electric Co. Then in 1971, when GE dropped out of the IC business, it was issued to Sprague Electric Corp., as well as to Fairchild Semiconductor Corp. of Palo Alto, Calif., and Texas Instruments Inc. of Dallas, Texas. Only Fairchild and Sprague ended up producing flash controllers.
Independent contracts to develop the motor and exposure control modules went to Fairchild and TI. The motor control module contained a linear control IC, an NPN motor drive transistor, and a discrete PNP dynamic braking transistor, and gave the designers little trouble. The exposure control module was a different story.
The grand challenge
Included in the exposure control were three ICs (early Fairchild versions had four). The exposure timer used the current output of a silicon photodiode to regulate how long the shutter blades remained open. The delay-timing circuit generated four intervals: a delay of 40 milliseconds before the shutter opened; the time the shutter remained open before the flash was fired; the duration of the flash; and the maximum exposure time given certain ambient lighting. The power control IC drove the solenoids and motor control unit. And this all had to fit on a board that fit into a 27-by-95-by-2-millimeter space, minus a central hole for the camera lens.
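The core exposure-timing idea above—integrating photodiode current until enough charge accumulates, with a cap for dim scenes—can be sketched in modern Python (the actual 1970s circuit was analog; the function name, charge threshold, and cap here are invented for illustration):

```python
def exposure_time_ms(photodiode_pa: float,
                     charge_needed_pc: float = 300.0,
                     max_exposure_ms: float = 20_000.0) -> float:
    """Time the shutter stays open, in milliseconds.

    The photodiode's current (picoamperes) charges a timing node;
    the shutter closes once a fixed charge (picocoulombs) accumulates,
    or when the maximum exposure for dim scenes is reached.
    """
    # time (ms) = charge (pC) / current (pA) * 1000, since 1 pC / 1 pA = 1 s
    t = charge_needed_pc / photodiode_pa * 1000.0
    return min(t, max_exposure_ms)

exposure_time_ms(100.0)  # bright scene: closes well before the cap
exposure_time_ms(5.0)    # dim scene: hits the maximum-exposure limit
```

This also shows why noise mattered so much: at currents of tens of picoamperes, a tiny injected error shifts the computed exposure substantially.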
Dynamic braking
Stopping an electric motor by placing a short circuit across the armature of the motor; the kinetic energy is then dissipated in wiring and short-circuit losses.
Exposure control
Automatic metering of the light entering the camera’s lens and striking the film. In the SX-70, adjustments to affect the amount of light were performed by just one pair of blades, which controlled both the aperture and the shutter speed.
Flash control
Circuitry that automatically selects the next unused bulb from the flash bar and generates a pulse to fire the bulb. It is activated automatically when a flash bar is inserted, but is inhibited when the film counter reaches zero.
Motor control
Circuitry that directs the cycle of applying power to the motor and motor braking in response to signals from the exposure control.
Single lens reflex (SLR)
A camera viewing system that, by swinging an angled mirror temporarily between lens and film, allows a person looking into the viewfinder to see through the lens, previewing the image that will be captured on film.
Electrical noise was a major stumbling block. The photocell, for instance, operating with as little as 15 picoamperes, had to maintain its state in an environment in which the motor, the solenoids, and the firing of the flash lamps drew amperes of current. Designers were to take steps like inserting a delay between the release of the solenoids and the start of the photocell-timed exposure; redesigning circuitry on the power supply line to reject noise from the motor; increasing the voltage difference between logic highs and lows, so noise spikes would no longer masquerade as bits; and including a low-pass filter.
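The low-pass filtering step mentioned above can be illustrated with a single-pole digital filter (a hedged modern analogy: the SX-70's filter was an analog RC network, and `low_pass` and its `alpha` parameter are invented for this sketch). The effect is the same in principle: fast spikes are attenuated while the slow signal passes through.

```python
def low_pass(samples: list[float], alpha: float = 0.2) -> list[float]:
    """Single-pole IIR low-pass filter.

    alpha in (0, 1]: smaller values smooth harder, suppressing the
    kind of fast noise spikes that could masquerade as logic bits.
    """
    out, y = [], samples[0]
    for x in samples:
        y += alpha * (x - y)  # move a fraction of the way toward the input
        out.append(y)
    return out

# A 10-unit noise spike on an otherwise quiet line is knocked down hard.
noisy = [0.0] * 5 + [10.0] + [0.0] * 5
smoothed = low_pass(noisy, alpha=0.2)
```

With the widened gap between logic highs and lows that the designers also adopted, the residual spike after filtering falls well short of flipping a bit.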
As it was 1969, there were no semicustom ICs, gate array technology was in its infancy, and only primitive packaging was available—standard dual in-line packages (DIPs) were at least 0.125 inch thick—while logic and power transistors could not yet share the same piece of silicon. And Polaroid wanted to buy this exposure controller for US $5.75.
What friends are for
Polaroid chairman Land and TI chairman Patrick Haggerty were old friends. On a weekend trip decades before the SX-70 project, they had discussed how electronics might one day make a truly one-step camera possible. The idea was to work on this dream together as soon as the technology arrived. So it came as no surprise when TI was charged with developing the camera’s exposure control board. Land was counting on TI for a fail-safe design, based on analog circuitry and proven technology and therefore reliable, reasonably priced, and capable of being produced on schedule.
Polaroid also asked Fairchild, which it viewed as the country’s leader in IC technology, to tackle a design that would push the state of the art. Fairchild’s version was to be digital and highly integrated, even to combining power transistors with logic on one chip. To Polaroid the approach looked risky, but its engineers were excited by its possibilities. Still, some within Polaroid thought the Land-Haggerty relationship made nonsense of using anyone but TI.
In the dark
The R&D contracts were awarded in 1969, and the competitors went to work, both with the same handicap: incomplete information. Fearing that Kodak Corp. might enter the instant camera business, Polaroid wanted no leaks—so much so that it mentioned neither the new film nor the fact that at one point the camera was redesigned as an SLR—and kept the design teams from seeing a prototype of the camera. (Although TI’s then executive vice president, the now-retired Fred Bucy, saw a demonstration of the early, non-SLR SX-70 in 1969, he said nothing about it to the company’s engineers.) Said Peter Carcia, an engineer on the SX-70 project and still with Polaroid: “They had very little to work with”—only stacks of specifications.
Polaroid engineers recall that loads on the electronics were described simply as inductive, and that details of the battery supply were vague because a new battery was being concurrently designed.
“We didn’t tell them whether a load on the electronics was from a solenoid or a relay, just that it was an inductive load,” recalls Seymour Ellin, now a senior technical manager at Polaroid.
“Since we were making our own battery [designed concurrently], we couldn’t tell them what the battery supply would be,” said Carcia. “I would tell them ‘I want you to design a circuit, but I won’t tell you what the power source will be,’ and they would look at me strangely.”
Polaroid wanted no leaks—so much so that it mentioned neither the new film nor the fact that at one point the camera was redesigned as an SLR.
Even worse was the “Y” delay—which Polaroid engineers told IEEE Spectrum came from the “why” response given Fairchild and TI engineers whenever they questioned one specification: the short delay before starting the exposure, after the user pressed the button. This pause was to allow the mirror (which in an SLR camera reflects the image seen through the lens to the viewfinder) to stop vibrating after it snapped out of the way of the film to be exposed. But that was more than Polaroid wanted to divulge. The sources of the noise problem were left obscure, and its extent understated, said Clark Williams, then a TI design engineer. “That motor pulled 3 amps of current and put out a rich spectrum of noise that played havoc with our circuits,” he said. (He is now a design manager at Dallas Semiconductor Corp. in Dallas, Texas.)
The TI team, unable to base a breadboard on Polaroid’s diagrams alone, sent two engineers and several technicians to Cambridge to work in a little private room there. Whenever they needed to test their breadboard, they would hand it over to Polaroid engineers, who would carry it to another room and eventually report back that, say, a certain signal needed adjustment or a certain section did not function. The TI engineers would make a few adjustments, then the breadboard was carried off for another test. This to-and-fro-ing went on for six months, whereas, said Michael Callahan, a senior TI design engineer who is presently executive vice president of engineering at Crystal Semiconductors Corp. in Austin, Texas: “We could have done the work in two weeks if they had let us sign nondisclosure agreements.”
Designing in Dallas
A preliminary round had disappointed both IC teams. In 1969, before Polaroid had firmed up many SX-70 details, it started both TI and Fairchild developing simple exposure control chips. This early effort, said Polaroid engineers, was also used to develop and test their working relationship with Fairchild. But the SX-70 project changed so much, particularly with its redefinition as an SLR camera, that Polaroid decided to start over. Callahan and Ken Buss, now a senior member of the technical staff at TI, recall a meeting in Dallas at which the TI engineers proudly demonstrated the working circuits—only to have Polaroid ignore them and announce its new requirements.
“That made our chips instantly obsolete,” Buss said. At Fairchild, too, enthusiasm flagged. Coincidentally, both companies soon after underwent a corporate restructuring, but whereas the changes at Fairchild benefited its SX-70 team, those at TI nearly cost it everything.
The TI designers, instead of working directly with Polaroid, were told to report to the Assembled Functions Group. Lacking either chip development or manufacturing facilities of its own, the Group contracted with the IC designers’ department to develop three chips—a photocell amplifier to determine the correct exposure, a chip to control the motor and handle dynamic braking, and a chip to handle timing, count the film used, and serve other functions—and with another department to manufacture the chips. The arrangement further filtered the already limited information from Polaroid.
That left the Group itself with the job of designing the circuitry that would tie the ICs together. Its engineers used 13 discrete transistors, 17 laser-trimmed thick-film resistors, and a photodiode, intending to mount them on a printed-circuit board. Management instead mandated a ceramic substrate essentially because, said one TI design engineer, the Group reported to the same manager as TI’s Hybrid Thick-Film Group, which had excess capacity.
“We knew we couldn’t meet the cost goals with a ceramic substrate,” he said. The ceramic, the precious metal conductors, and the labor all cost too much for the substrate to serve as anything more than a prototype “to let us get all the circuitry in a small area.” And when the design grew from 3/4 square inch to 4 or 5 square inches (from 5 to 25 or 32 square centimeters), the engineer recalled, he and the other designers predicted major manufacturing problems and urged doing a more digital redesign with a printed-circuit board. But management “wouldn’t listen,” he said.
TI’s ceramic-based design did, however, perform to Polaroid’s specifications, and it went into production in late 1972. But it was indeed a nightmare. First, at $100 a unit, it was nowhere near the $5.75 cost goal. And manufacturing problems were tremendous, especially with the gigantic and therefore fragile ceramic substrate. For instance, said TI design engineer Norm Culp: “We had to take a chip, alloy it to a Kapton film carrier [a high temperature plastic foil], then wire bond the chip to the Kapton carrier, then encapsulate the chip. The Kapton film carriers were then tested individually, then reflowed onto the ceramic substrate.”
Yield was about 1 percent, and that one in 100 sometimes cracked on its way to Polaroid.
Moreover, said Culp, reflow-soldering chip carriers to the substrate caused microcracks in the ceramic, and for a while TI inspected every part for the flaws. Then one engineer realized that heating the entire substrate instead of just the part to be reflow-soldered would reduce the microcracks, which, however, showed up in other parts of the process. Yield was about 1 percent, and that one in 100 sometimes cracked on its way to Polaroid.
Polaroid did order several hundred of these ceramic modules to get the SX-70 to market. But it wasn’t at all happy with them. Said Ellin, “TI, essentially, failed to meet the cost objective.”
Competing in California
Meanwhile, engineers at Fairchild were also running into difficulties, but technical ones only. Early in the design process, Fairchild’s corporate restructuring moved the R&D engineers out of their isolated laboratory into operating divisions, making for better communication with manufacturing, which “resolved a lot of problems,” said Howard Murphy, a senior member of the Fairchild research staff and the project director for the SX-70 electronics.
"We designed a die that had around 20 flip-flops on it, probably a new high in IC complexity at that time.”—Howard Murphy, Fairchild
One design problem was high temperature. Murphy recalled that the heat of the heavy currents drawn by the motors and the solenoids affected the control logic circuitry, which then had to be redesigned to work at higher temperatures—the specifications indicated 40 °C. Another hurdle was the photo circuit. It had to time out after 20 seconds, so that pictures could be taken in dim light of about 0.06 candela per square foot (0.65 candela per square meter), although the circuit design team wasn’t fully aware of the reason for this at the time. The circuit also had to be very small and consume just a few milliamperes. “So we designed a die that had around 20 flip-flops on it, probably a new high in IC complexity at that time,” Murphy remembered.
Frank Perrino, a Fairchild product manager, first became involved in the SX-70 project in May 1971, when he oversaw its move into manufacturing. He recalled that the designers were then working on four chips—a driver for the motor and solenoids, a timing chip, and the photodiode and photodiode amplification chips that later became one bipolar CMOS IC. The dice were to be mounted directly on an irregularly shaped 1-by-4-inch ceramic substrate previously metalized on both sides with state-of-the-art lines and spaces.
The costs involved, however, ruled the approach out for production, Perrino told Spectrum. “The ceramic and chips all had to be perfect,” he said, and there was zero “probability of this happening.”
He concluded a printed-circuit board was a must, but how to mount the chips to it? Fairchild’s plastic DIPs were too large and costly for the job. He had, though, read a paper by General Electric engineers on beam tape packaging (BTP), a forerunner of what is now called tape automated bonding (TAB). After investigating BTP, he told Fairchild and Polaroid management, “If we don’t do it this way, it’s not worth doing.” Both agreed.
BTP employed reels of film with copper traces laminated on it around preexisting holes. Chips with bumps of solder on their pads were centered under the holes and bonded to the overhanging copper lead frames. Individual die/film modules were then encapsulated, tested, clipped off the reel, and soldered to the circuit board.
Perrino laid out the double-sided printed-circuit board at home on paper spread across his pool table. He then visited several companies that made polyimide interconnect film, contracted with 3M as a supplier, and persuaded West-Bond Inc. of Anaheim, Calif., to build equipment for attaching the dice to the reel of laminated film. The final circuit board held three IC dice and two flip-chip, thick-film, laser-trimmed resistors.
However, yields were not following the expected learning curve on two of the three ICs, the power transistors because of high doping levels and the timing chip because, said Perrino, of design errors. For example, Jim Feit, another engineer on the project, recalls a parasitic device affecting the flip-flops, which was fixed with the addition of a delay.
Still, though the parts were not cheap, costing Fairchild approximately $20 or $30 each, they were manufacturable.
The SX-70 was introduced in April 1972, in conjunction with the company’s annual stockholders’ meeting. A year earlier, Land had teased the stockholders by pulling a prototype SX-70 out of his pocket and waving it in the air. That was a working model, containing one of TI’s first successful ceramic circuit boards. But for this meeting, Polaroid needed 20 cameras, and John Burgarella, now retired from the company, had to make several trips to Texas to hand-carry enough working boards back to Cambridge. About a month earlier, Land had brought Fairchild engineers Perrino, Murphy, and Will Steffe to his Cambridge office and demonstrated the camera to them. “It was obviously a technological breakthrough,” recalled Perrino, which motivated them “to go back and make the thing work.”
The introduction went off without a hitch. About a dozen scenes, from a poker game to a child’s birthday party, were enacted in a large warehouse, and well-known photographers were shooting them with the new cameras while Polaroid stockholders circulated and examined the pictures. Polaroid engineers were also circulating, with extra cameras in their pockets in case anything went wrong.
Resting on their laurels
So Fairchild won a contract to manufacture the exposure control modules along with the motor circuits and the flash control circuits. The trade press touted their victory. According to a January 1973 Electronic News report, for instance, this contract, “believed to be the largest ever issued by a camera producer to an electronics supplier,” was worth $19 million, and was “considered by some semiconductor executives as an omen of considerable future business.”
Fairchild disbanded most of its design team, pleased with their success. But the manufacturing engineers pressed on, since the cost of the product had to be reduced by three-quarters or more to meet Polaroid’s price target, and contract negotiations were to be reopened for 1974. However, said Perrino, two of the chips in the exposure control module were still in trouble.
C. Lester Hogan, who had recently left Motorola Inc. to take over the Fairchild presidency, blames Fairchild’s then-outdated manufacturing facilities. He started a modernization, but he said, “there wasn’t a lot of extra cash,” and it was not complete until sometime in 1974.
Perrino blames the IC designs as well. “The design rules used in these chips were touch-and-go with the technology,” he told Spectrum. Polaroid’s Carcia agreed: ‘‘We were pushing the fundamental technology.” Redesigning the chips was talked about, but management did not mandate it.
A matter of pride
The TI design team was also disbanded in 1972. Some left the company, some moved on to other projects. The failure, one design engineer told Spectrum, was a black mark that hurt careers.
At the highest level of TI, however, the book was not being closed. TI chairman Haggerty reportedly called his old friend Land and said, “We at TI don’t fail.” He assigned the project about $540,000 from his own budget, and told his managers to do whatever it would take to succeed. The code name Project Alpha emphasized the importance of the fresh start, and Haggerty put executive vice president Bucy in charge of it.
The failure, one design engineer [said], was a black mark that hurt careers.
As the original TI team had been disbanded, Bucy planned to assemble another one from the semiconductor division, and to ensure that this one would communicate directly with Polaroid and also have manufacturing responsibilities.
Dean Toombs, engineering director of the semiconductor group, held a series of meetings and developed a proposal for the redesign that was another break with TI’s first approach: it relied not on proven but on state-of-the-art IC technology and packaging. A circuit board only 1/64 inch thick was to hold up to four digital (not analog) ICs and eight discrete components at most. The chips would be surface mounted to the board in a miniDIP package, a method of volume assembly then new and risky but cheap. (It is now called SOT, which stands for Small Outline Transistors.)
The plan was approved by Bucy, and Henri Jarrat (then Eljarrat) selected to head the effort. At first Jarrat objected to the assignment, but gave way when told it was TI’s top priority. Given carte blanche to assemble a team from anywhere in the organization, he kept the group manageably small—only 18 people. They quickly partitioned the circuitry into three ICs and presented a six-month schedule for the redesign to Fred Bucy and Polaroid president William McCune.
Then Jarrat had his first meeting with Polaroid engineers. He told them he could only integrate the exposure control function into three components if they waived some of their specifications. He began going down his list and to each request the Polaroid engineers said no. So Jarrat stood up, threw his papers down, and said, he recalled, “Now I know why this project is going nowhere. This will never work, and I do not want to have my name attached to a failure.” He charged out of the room. Toombs backed Jarrat’s threat. “We had to get the customer under control,” he told Spectrum.
The ability to negotiate was in part also due to the availability of working cameras to study and the construction of a prototype on which to test breadboards of the chips—luxuries denied the first TI team.
After a brief adjournment, the meeting was reconvened and from then on Polaroid negotiated specifications. For example, the 20-second time out, for taking a picture in a dimly lit room, had made the signal from the photodiode impossibly low for the first design teams and this time around was cut to 10 seconds. “The big reason for our success was Jarrat’s success at convincing them to ease the specs,” said Clark Williams, a member of the second team.
The ability to negotiate was in part also due to the availability of working cameras to study and the construction of a prototype on which to test breadboards of the chips—luxuries denied the first TI team. And when the first group did raise questions out of concern for manufacturability, recalled Buss, the only TI engineer to work on both the design and the redesign efforts, they were told, “Well, your competition can do this.” And, in fact, Fairchild engineers don’t recall that the specifications were problematical.
TI began producing the Project Alpha boards in quantity in mid-1973.
With the redesign, TI quoted Polaroid a price of about $4.10 a unit—well below the $5.75 target. Said former Fairchild president Hogan: “At the time, it cost us $10. We really believed we could get it to $6, but when TI bombed the price down to two thirds of the target price, we just had to drop out.” As for a redesign, said Hogan, “we didn’t have the money to invest that way—we had to invest in the generic fixing of the factory.”
TI created a special camera division with Polaroid as its only customer. The company made about 850 000 units in 1974 and continued to produce the design until the SX-70 and the SX-70 Model 2 were discontinued in 1977. It also spun off a few innovations, including packaging for TI’s watch displays. And the engineers on the Project Alpha team were rewarded with then substantial raises of $100 to $500 a month.
West-Bond and 3M, companies Fairchild had recruited to manufacture packaging equipment and film tape, continued to profitably produce them for other companies.
Fairchild used the BTP packaging technology it developed for the SX-70 on its high-volume plastic DIP products at several manufacturing facilities. It also took its camera control technology overseas on a tour of Japanese camera manufacturers, but after several unsuccessful months gave up and closed down the production line for the exposure control module. It continued to manufacture flash control modules for Polaroid for another year, however. Within six months to a year of losing the exposure control contract, at least half the people who had worked on the project moved to other companies, Feit recalled.
Could the design have gone more smoothly? Certainly better communications between Polaroid and the two semiconductor companies and among different divisions within TI and Fairchild would have eliminated some of the rough spots.
From Polaroid’s standpoint, the information it handed out was as complete as it could be. After all, several parts of the camera system were being developed concurrently, so that the system specifications could not meanwhile be finalized. Also, said one Polaroid engineer, unfamiliarity with photography impaired the IC designers’ comprehension of the data they were given.
In the eyes of the TI and Fairchild engineers, useful information was withheld, and Polaroid engineers do admit a preoccupation with secrecy.
Still, in the eyes of the TI and Fairchild engineers, useful information was withheld, and Polaroid engineers do admit a preoccupation with secrecy due to concern over competition from Kodak. Perhaps being told that certain design issues had yet to be resolved or a detailed explanation of how an SLR functions would have elicited more creative engineering from the IC designers.
Be that as it may, the SX-70 was a brilliant success. Polaroid sold some three million units of the leather-covered Model 1 with its chrome-plated trim and the plastic-bodied Model 2. (Model 3, introduced in 1975, was not an SLR.) So while the design problems both TI and Fairchild endured triggered tense moments at all three companies, their solution opened up a huge new consumer market in electronics.
To probe further
For details on the SX-70 circuitry, see “Behind the lens of the SX-70,” by Gerald Lapidus, IEEE Spectrum (December 1973, pp. 76-83).
Both Time and Life magazines featured the SX-70 camera on their covers in 1972, and discussed it in “Polaroid’s Big Gamble on Small Cameras” (Time, June 26, 1972, pp. 80-82) and “If you are able to state a problem, it can be solved” (Life, October 27, 1972, p. 48). To understand how the development of the SX-70 fit into Polaroid’s long history, read The Instant Image: Edwin Land and the Polaroid Experience by Mark Olshaker (Stein & Day, New York, 1978).
Frank Perrino’s version of tape automated bonding is described in U.S. Patent #3,868,724, “Multi-layer connecting structures for packaging semiconductor devices mounted on a flexible carrier,” dated Feb. 25, 1975.
Match ID: 150 Score: 6.43 source: spectrum.ieee.org age: 14 days qualifiers: 3.57 trade, 1.43 development, 1.43 california
Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.
Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.
The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?
Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.
When you say you want a foundation model for computer vision, what do you mean by that?
Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.
What needs to happen for someone to build a foundation model for video?
Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.
Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.
It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.
Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.
“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI
I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.
I expect they’re both convinced now.
Ng: I think so, yes.
Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”
How do you define data-centric AI, and why do you consider it a movement?
Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take what some individuals have been doing intuitively and make it a systematic engineering discipline.
The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.
You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?
Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.
When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?
Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.
“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
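The consistency tooling Ng describes can be sketched in a few lines. This is a minimal illustration, not LandingLens code: the image IDs and labels are invented, and a production tool would work over richer annotator-level records.

```python
from collections import defaultdict

def flag_inconsistent(annotations):
    """Given (image_id, label) pairs from different annotators,
    return the image IDs whose annotators disagree on the label."""
    labels_by_image = defaultdict(set)
    for image_id, label in annotations:
        labels_by_image[image_id].add(label)
    # An image is flagged when more than one distinct label was assigned.
    return sorted(img for img, labels in labels_by_image.items() if len(labels) > 1)

# Hypothetical annotations from two labelers on a defect data set.
annotations = [
    ("img_01", "scratch"), ("img_01", "scratch"),
    ("img_02", "dent"),    ("img_02", "scratch"),   # annotators disagree
    ("img_03", "ok"),
]
print(flag_inconsistent(annotations))  # ['img_02']
```

Drawing a reviewer's attention straight to `img_02` is cheaper than relabeling all 10,000 images.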
Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?
Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.
One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.
When you talk about engineering the data, what do you mean exactly?
Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.
For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
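The car-noise anecdote is an instance of slice-based error analysis, which can be sketched roughly as follows. The slice tags and counts are invented for illustration; the point is that measuring error per slice shows where extra data collection pays off.

```python
from collections import defaultdict

def error_rate_by_slice(examples):
    """examples: list of (slice_tag, correct) pairs from an evaluation set.
    Returns the error rate for each slice of the data."""
    totals, errors = defaultdict(int), defaultdict(int)
    for tag, correct in examples:
        totals[tag] += 1
        if not correct:
            errors[tag] += 1
    return {tag: errors[tag] / totals[tag] for tag in totals}

# Toy speech-recognition results tagged by background condition.
results = [("quiet", True)] * 95 + [("quiet", False)] * 5 \
        + [("car_noise", True)] * 70 + [("car_noise", False)] * 30
rates = error_rate_by_slice(results)
worst = max(rates, key=rates.get)
print(worst, rates[worst])  # car_noise 0.3
```

Here the analysis points at the `car_noise` slice, so that is where more data should be collected.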
What about using synthetic data, is that often a good solution?
Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
Do you mean that synthetic data would allow you to try the model on more data sets?
Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?
Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.
One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.
How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?
Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?
So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.
Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.
Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?
Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.
At first, the dream of riding a rocket into space was laughed off the stage by critics who said you’d have to carry along fuel that weighed more than the rocket itself. But the advent of booster rockets and better fuels let the dreamers have the last laugh.
Hah, the critics said: To put a kilogram of payload into orbit we just need 98 kilograms of rocket plus rocket fuel.
What a ratio, what a cost. To transport a kilogram of cargo, commercial air freight services typically charge about US $10; spaceflight costs reach $10,000. Sure, you can save money by reusing the booster, as Elon Musk and Jeff Bezos are trying to do, but it would be so much better if you could dispense with the booster and shoot the payload straight into space.
The first people to think along these lines used cannon launchers, such as those in Project HARP (High Altitude Research Project), in the 1960s. Research support dried up after booster rockets showed their mettle. Another idea was to shoot payloads into orbit along a gigantic electrified ramp, called a railgun, but that technology still faces hurdles of a basic scientific nature, not least the need for massive banks of capacitors to provide the jolt of energy.
Imagine a satellite spinning in a vacuum chamber at many times the speed of sound. The gates of that chamber open up, and the satellite shoots out faster than the air outside can rush back in—creating a sonic boom when it hits the wall of air.
SpinLaunch, a company founded in 2015 in Long Beach, Calif., proposes a gentler way to heave satellites into orbit. Rather than shoot the satellite in a gun, SpinLaunch would sling it from the end of a carbon-fiber tether that spins around in a vacuum chamber for as long as an hour before reaching terminal speed. The tether lets go milliseconds before gates in the chamber open up to allow the satellite out.
“Because we’re slowly accelerating the system, we can keep the power demands relatively low,” David Wrenn, vice president for technology, tells IEEE Spectrum. “And as there’s a certain amount of energy stored in the tether itself, you can recapture that through regenerative braking.”
SpinLaunch began with a lab centrifuge that measures about 12 meters in diameter. In November, a 33-meter version at Space Port America test-launched a payload thousands of meters up. Such a system could loft a small rocket, which would finish the job of reaching orbit. A 100-meter version, now in the planning stage, should be able to handle a 200-kg payload.
Wrenn answers all the obvious questions. How can the tether withstand the g-force when spinning at hypersonic speed? “A carbon-fiber cable with a cross-sectional area of one square inch (6.5 square centimeters) can suspend a mass of 300,000 pounds (136,000 kg),” he says.
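As a rough plausibility check, the load at the tether tip follows from the centripetal-acceleration formula a = v²/r. The arm radius and release speed below are illustrative assumptions for the planned 100-meter machine, not SpinLaunch specifications.

```python
G = 9.81  # standard gravity, m/s^2

def centripetal_g(v_mps, radius_m):
    """Centripetal acceleration at the tether tip, expressed in g's."""
    return v_mps ** 2 / radius_m / G

# Assumed figures: ~50 m arm (half the 100 m diameter) and a release
# speed of roughly 2,200 m/s, i.e., many times the speed of sound.
g_load = centripetal_g(2200, 50)
print(f"{g_load:,.0f} g at the tip")
```

The result lands near 10,000 g, the same order of magnitude as the thousands of g's discussed later in the article.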
How much preparation do you need between shots? Not much, because the chamber doesn’t have to be superclean. If the customer wants to loft a lot of satellites—a likely desideratum, given the trend toward massive constellations of small satellites—the setup could include motors powerful enough to spin up in 30 minutes. “Upwards of 10 launches per day are possible,” Wrenn says.
How tight must the vacuum be? A “rough” vacuum suffices, he says. SpinLaunch maintains the vacuum with a system of airlocks operated by those millisecond-fast gates.
Most parts, including the steel for the vacuum chamber and carbon fiber, are off-the-shelf, but those gates are proprietary. All Wrenn will say is that they’re not made of steel.
So imagine a highly intricate communications satellite, housed in some structure, spinning at many times the speed of sound. The gates open up, the satellite shoots out far faster than the air outside can rush back in. Then the satellite hits the wall of air, creating a sonic boom.
No problem, says Wrenn. Electronic systems have been hurtling from vacuums into air ever since the cannon-launching days of HARP, some 60 years ago. SpinLaunch has done work already on engineering certain satellite components to withstand the ordeal—“deployable solar panels, for example,” he says.
After the online version of this article appeared, several readers objected to the SpinLaunch system, above all to the stress it would put on the liquid-fueled rocket at the end of that carbon-fiber tether.
“The system has to support up to 8,000 gs; most payloads at launch are rated at 6 or 10 gs,” said John Bucknell, a rocket scientist who heads the startup Virtus Solis Technologies, which aims to collect solar energy in space and beam it to Earth.
Keith Lostrom, a chip engineer, went even further. “Drop a brick onto an egg—that is a tiny fraction of the damage that SpinLaunch’s centripetal acceleration would do to a liquid-fuel orbital launch rocket,” he wrote in an emailed message.
Wrenn denies that the g-force is a dealbreaker. For one thing, he argues, the turbopumps in liquid-fuel rockets spin at over 30,000 rotations per minute, subjecting the liquid oxygen and fuel to “much more aggressive conditions than the uniform g-force that SpinLaunch has.”
Besides, he says, finite element analysis and high-g testing in the company’s 12-meter accelerator “has led to confidence it’s not a fundamental issue for us. We’ve already hot-fired our SpinLaunch-compatible upper-stage engine on the test stand.”
SpinLaunch says it will announce the site for its full-scale orbital launcher within the next five months. It will likely be built on a coastline, far from populated areas and regular airplane service. Construction costs would be held down if the machine can be built up the side of a hill. If all goes well, expect to see the first satellite slung into orbit sometime around 2025.
This article was updated on 24 Feb. 2022 to include additional perspectives on the technology.
Free Isn’t Always Better: How Slack Holds Its Own Against Microsoft Teams 2022-06-21T00:00:00EDT What will it take to win the collaboration app wars: massive scale or a loyal following? A case study by David Yoffie digs into the intense competition between Microsoft Teams and Salesforce's Slack.
The Ghost of Internet Explorer Will Haunt the Web for Years Mon, 20 Jun 2022 11:00:00 +0000 Microsoft's legacy browser may be dead—but its remnants are not going anywhere, and neither are its lingering security risks.
As India’s GDP and per capita income continue to climb, so too will its energy consumption. For instance, just 8 percent of Indian homes had air-conditioning in 2018, but that share is likely to rise to 50 percent by 2050. The country’s electricity consumption in 2019 was nearly six times as great as in 1990. Greenhouse gas emissions will certainly grow too, because India’s energy generation is dominated by fossil fuels—coal-fired power plants for electricity, coal- and gas-fired furnaces for industrial heating, liquid petroleum gas for cooking, and gasoline and diesel for transportation.
Fossil fuels dominate even though renewable energy generation in many parts of the world now costs less than fossil-fuel-based electricity. While electricity from older coal plants in India costs 2.7 U.S. cents per kilowatt-hour and 5.5 cents from newer plants that have additional pollution-control equipment, the cost of solar energy has dropped to 2.7 cents per kilowatt-hour, and wind power to 3.4 cents per kilowatt-hour. As renewable energy has steadily gotten cheaper, the installed capacity has grown, to 110 gigawatts. That amounts to 27 percent of capacity, compared to coal’s share, which is 52 percent. The government of India has set a target of 450 GW of renewable energy capacity by 2030.
Yet in terms of energy generated, renewable energy in India still falls short. In 2021, about 73 percent of the country’s electricity was produced from coal, and only 9.6 percent from solar and wind power. That’s because solar and wind power aren’t available around the clock, so the proportion of the installed capacity that gets used is just 20 to 30 percent. For coal, the capacity utilization rate can go as high as 90 percent.
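The capacity-versus-generation gap can be checked with a back-of-envelope calculation. The capacity shares are from the article; the capacity factors, and the lumping of everything besides coal and renewables into a single "other" bucket, are illustrative assumptions.

```python
# (capacity share, assumed capacity factor) for each source. Coal and
# solar/wind capacity factors follow the ranges quoted in the article;
# the "other" bucket (hydro, nuclear, gas) is a rough guess.
mix = {
    "coal":      (0.52, 0.90),
    "renewable": (0.27, 0.25),
    "other":     (0.21, 0.50),
}
energy = {src: share * cf for src, (share, cf) in mix.items()}
total = sum(energy.values())
for src, e in energy.items():
    print(f"{src}: {e / total:.0%} of generation")
```

Under these assumptions coal ends up at about 73 percent of generation and renewables near 10 percent, close to the figures in the article, even though renewables hold 27 percent of capacity.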
As renewable energy capacity grows, the only way to drastically reduce coal in the electricity mix is by adding energy storage. Although some of the newer solar plants and wind farms are being set up with large amounts of battery storage, it could be decades before such investments have a significant impact. But there is another way for India to move faster toward its decarbonization goal: by focusing the renewable-energy push in India’s commercial and industrial sectors.
India has some 40,000 commercial complexes, which house offices and research centers as well as shopping centers and restaurants. Together they consume about 8 percent of the country’s electricity. The total footprint of such complexes is expected to triple by 2030, compared to 2010. To attract tenants, the managers of these complexes like to project their properties as users of renewable energy.
A 2-megawatt solar plant located 500 kilometers from IITM Research Park provides dedicated electricity to the complex. A 2.1-MW wind farm now under construction will feed IITMRP through a similar arrangement. IIT Madras
India’s industrial sector, meanwhile, consumes about 40 percent of the country’s electricity, and many industrial operators would also be happy to adopt a greater share of renewable energy if they can see a clear return on investment.
Right now, many of these complexes use rooftop solar, but limited space means they can only get a small share of their energy that way. These same complexes can, however, leverage a special power-transmission and “wheeling” policy that’s offered in India. Under this arrangement, an independent power-generation company sets up solar- or wind-power plants for multiple customers, with each customer investing in the amount of capacity it needs. In India, this approach is known as a group-captive model. The generating station injects the electricity onto the grid, and the same amount is immediately delivered, or wheeled in, to the customer, using the utility’s existing transmission and distribution network. A complex can add energy storage to save any excess electricity for later use. If enough commercial, industrial, and residential complexes adopt this approach, India could rapidly move away from coal-based electricity and meet a greater share of its energy needs with renewable energy. Our group at the Indian Institute of Technology Madras has been developing a pilot to showcase how a commercial complex can benefit from this approach.
The commercial complex known as the IITM Research Park, or IITMRP, in Chennai, is a 110,000-square-meter facility that houses R&D facilities for more than 250 companies, including about 150 startups, and employs about 5,000 workers. It uses an average of 40 megawatt-hours of electricity per weekday, or about 12 gigawatt-hours per year. Within the campus, there is 1 megawatt of rooftop solar, which provides about 10 percent of IITMRP’s energy. The complex is also investing in 2 MW of captive solar and 2.1 MW of captive wind power off-site, the electricity from which will be wheeled in. This will boost the renewable-energy usage to nearly 90 percent in about three years. Should the local power grid fail, the complex has backup diesel generators.
Of course, the generation of solar and wind energy varies from minute to minute, day to day, and season to season. The total generated energy will rarely meet IITMRP’s demand exactly; it will usually either exceed demand or fall short.
To get closer to 100 percent renewable energy, the complex needs to store some of its wind and solar power. To that end, the complex is building two complementary kinds of energy storage. The first is a 2-MWh, 750-volt direct-current lithium-ion battery facility. The second is a chilled-water storage system with a capacity equivalent to about 2.45 MWh. Both systems were designed and fabricated at IITMRP.
The battery system’s stored electricity can be used wherever it’s needed. The chilled-water system serves a specific, yet crucial function: It helps cool the buildings. For commercial complexes in tropical climates like Chennai’s, nearly 40 percent of the energy goes toward air-conditioning, which can be costly. In the IITMRP system, a central heating, ventilation, and air-conditioning (HVAC) system chills water to about 6 °C, which is then circulated to each office. A 300-cubic-meter underground tank stores the chilled water for use within about 6 to 8 hours. That relatively short duration is because the temperature of the chilled water in the tank rises about 1 °C every 2 hours.
The IITMRP’s chilled-water system provides air-conditioning to the complex. Water is chilled to about 6 °C and then stored in this 300-cubic-meter underground tank for later circulation to the offices. IIT Madras
The heat transfer capacity of the chilled-water system is 17,500 megajoules, which as mentioned is equivalent to 2.45 MWh of battery storage. The end-to-end round-trip energy loss is about 5 percent. And unlike with a battery system, you can “charge” and “discharge” the chilled-water tank several times a day without diminishing its life span.
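A quick sanity check ties these figures together. The tank volume and the 17,500-MJ capacity are from the article; the temperature swing and the chiller's coefficient of performance are assumptions chosen to show how a thermal capacity of roughly 17,500 MJ maps to about 2.45 MWh of displaced electricity.

```python
# Thermal energy stored in the tank: E = m * c_p * delta_T
volume_m3 = 300     # tank volume, from the article
rho = 1000          # kg/m^3, density of water
c_p = 4186          # J/(kg*K), specific heat of water
delta_t = 14.0      # K, assumed swing between chilled and return water

thermal_j = volume_m3 * rho * c_p * delta_t
thermal_mwh = thermal_j / 3.6e9          # 1 MWh = 3.6e9 J

# Assumed chiller coefficient of performance: one unit of electricity
# moves about two units of heat, so the electricity displaced is half
# the thermal capacity.
cop = 2.0
battery_equiv_mwh = thermal_mwh / cop

print(f"{thermal_j/1e6:,.0f} MJ thermal, about {battery_equiv_mwh:.2f} MWh electric")
```

A 14 °C swing reproduces the 17,500-MJ figure, and a COP of about 2 explains the stated 2.45-MWh battery equivalence.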
Although energy storage adds to the complex’s capital costs, our calculations show that it ultimately reduces the cost of power. The off-site solar and wind farms are located, respectively, 500 and 600 kilometers from IITMRP. The cost of the power delivered to the complex includes generation (including transmission losses) of 5.14 cents/kWh as well as transmission and distribution charges of 0.89 cents/kWh. In addition, the utilities that supply the solar and wind power impose a charge to cover electricity drawn during times of peak demand. On average, this demand charge is about 1.37 cents/kWh. Thus, the total generation cost for the solar and wind power delivered to IITMRP is about 7.4 cents/kWh.
There’s also a cost associated with energy storage. Because most of the renewable energy coming into the complex will be used immediately, only the excess needs to be stored—about 30 percent of the total, according to our estimate.
So the average cost of round-the-clock renewable energy works out to 9.3 cents/kWh, taking into account the depreciation, financing, and operation costs over the lifetime of the storage. In the future, as the cost of energy storage continues to decline, the average cost will remain close to 9 cents/kWh, even if half of the energy generated goes to storage. And the total energy cost could drop further with declines in interest rates, the cost of solar and wind energy, or transmission and demand charges.
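The arithmetic above can be laid out explicitly. The three delivered-cost components are quoted in the article; the levelized storage adder is an assumption backed out from the stated 9.3-cent average.

```python
# All figures in US cents per kWh, as quoted in the article.
generation   = 5.14   # generation, including transmission losses
transmission = 0.89   # transmission and distribution charges
demand       = 1.37   # average peak-demand charge
delivered = generation + transmission + demand   # total delivered cost

stored_fraction = 0.30   # share of energy routed through storage
storage_adder   = 6.3    # assumed levelized storage cost per stored kWh
blended = delivered + stored_fraction * storage_adder
print(f"delivered {delivered:.2f}, round-the-clock {blended:.2f} cents/kWh")
```

With 30 percent of the energy passing through storage, a storage cost of roughly 6.3 cents/kWh yields the article's blended figure of about 9.3 cents/kWh.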
For now, the rate of 9.3 cents/kWh compares quite favorably to what IITMRP pays for regular grid power—about 15 cents/kWh. That means, with careful design, the complex can approach 100 percent renewable energy and still save about a third on the energy costs that it pays today. Keep in mind that grid power in India primarily comes from coal-based generation, so for IITMRP and other commercial complexes, using renewable energy plus storage has a big environmental upside.
IITM Research Park’s lithium-ion battery facility stores excess electricity for use after the sun goes down or when there’s a dip in wind power. IIT Madras
Electricity tariffs are lower for India’s industrial and residential complexes, so the cost advantage of this approach may not be as pronounced in those settings. But renewable energy can also be a selling point for the owners of such complexes—they know many tenants like having their business or home located in a complex that’s green.
Although IITMRP’s annual consumption is about 12 GWh, the energy usage, or load, varies slightly from month to month, from 970 to 1,100 MWh. Meanwhile, the energy generated from the captive off-site solar and wind plants and the rooftop solar plant will vary quite a bit more. The top chart ("Monthly Load and Available Renewable Energy at IITMRP") shows the estimated monthly energy generated and the monthly load.
As is apparent, there is some excess energy available in May and July, and an overall energy deficit at other times. In October, November, and December, the deficit is substantial, because wind-power generation tends to be lowest during those months. Averaged over a year, the deficit works out to be 11 percent; the arrangement we’ve described, in other words, will allow IITMRP to obtain 89 percent of its energy from renewables.
For the complex to reach 100 percent renewable energy, it’s imperative that any excess energy be stored and then used later to make up for the renewable energy deficits. When the energy deficits are particularly high, the only way to boost renewable energy usage further will be to add another source of generation, or else add long-term energy storage that’s capable of storing energy over months. Researchers at IITMRP are working on additional sources of renewable energy generation, including ocean, wave, and tidal energy, along with long-term energy storage, such as zinc-air batteries.
IITM Research Park’s electricity load and available renewable energy vary across months [top] and over the course of a single day [bottom]. To make up for the deficit in renewable energy, especially during October, November, and December, additional renewable generation or long-term energy storage will be needed. At other times of the year, the available renewable energy tends to track the load closely throughout the day, with any excess energy sent to storage.
For other times of the year, the complex can get by on a smaller amount of shorter-term storage. How much storage? If we look at the energy generated and the load on an hourly basis over a typical weekday, we see that the total daily generation generally matches the total daily load, but with small fluctuations in surplus and deficit. Those fluctuations represent the amount of energy that has to move in and out of storage. In the bottom chart ("Daily Load and Available Renewable Energy at IITMRP"), the cumulative deficit peaks at 1.15 MWh, and the surplus peaks at 1.47 MWh. Thus, for much of the year, a storage size of 2.62 MWh should ensure that no energy is wasted.
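The sizing logic above can be sketched as follows: track the running cumulative balance of generation minus load over an hourly series, and size the storage to cover the peak cumulative surplus plus the peak cumulative deficit. The hourly profile in the example is made up for illustration and is not IITMRP's actual data.

```python
def storage_size_mwh(generation, load):
    """Storage needed so no renewable energy is wasted: the peak cumulative
    surplus plus the peak cumulative deficit over the period (MWh)."""
    balance = 0.0        # running net energy (generation minus load)
    peak_surplus = 0.0   # most energy ever sitting in storage
    peak_deficit = 0.0   # deepest shortfall ever drawn from storage
    for g, l in zip(generation, load):
        balance += g - l
        peak_surplus = max(peak_surplus, balance)
        peak_deficit = max(peak_deficit, -balance)
    return peak_surplus + peak_deficit

# Toy hourly profile (MWh), solar-heavy midday generation vs. a flat load.
gen  = [0.0, 0.5, 1.5, 1.5, 0.5, 0.0]
load = [0.5, 0.5, 1.0, 1.0, 0.5, 0.5]
print(storage_size_mwh(gen, load))  # 0.5 MWh surplus + 0.5 MWh deficit = 1.0
```

Applied to the article's weekday figures, the same calculation gives the 1.15 MWh deficit peak plus the 1.47 MWh surplus peak, or 2.62 MWh of storage.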
This is a surprisingly modest amount of storage for a complex as large as IITMRP. It’s possible because for much of the year, the load follows a pattern similar to the renewable energy generated. That is, the load peaks during the hours when the sun is out, so most of the solar energy is used directly, with a small amount of excess being stored for use after the sun goes down. The load drops during the evening and at night, when the wind power is enough to meet most of the complex’s demand, with the surplus again going into storage to be used the next day, when demand picks up.
On weekends, the demand is, of course, much less, so more of the excess energy can be stored for later use on weekdays. Eventually, the complex’s lithium-ion battery storage will be expanded to 5 MWh, to take advantage of that energy surplus. The batteries plus the chilled-water system will ensure that enough storage is available to take care of weekday deficits and surpluses most of the time.
As mentioned earlier, India has some 40,000 commercial complexes like IITMRP, and that number is expected to grow rapidly. Deploying energy storage for each complex and wheeling in solar and wind energy make sense both financially and environmentally. Meanwhile, as the cost of energy storage continues to fall, industrial complexes and large residential complexes could be enticed to adopt a similar approach. In a relatively short amount of time—a matter of years, rather than decades—renewable energy usage in India could rise to about 50 percent.
On the way to that admittedly ambitious goal, the country’s power grids will also benefit from the decentralized energy management within these complexes. The complexes will generally meet their own supply and demand, enabling the grid to remain balanced. And, with thousands of complexes each deploying megawatts’ worth of stationary batteries and chilled-water storage, the country’s energy-storage industry will get a big boost. Given the government’s commitment to expanding India’s renewable capacity and usage, the approach we’re piloting at IITMRP will help accelerate the push toward cleaner and greener power for all.
This article appears in the July 2022 print issue as “Weaning India from Coal.”
Hello Reddit! In 2015, I left the advertising industry and assembled a dedicated team of engineers, prototype manufacturers, and aerospace fabricators to focus on one problem: committing to clean energy while not abandoning the past. Since then, we've developed a complete electric platform made specifically for transforming the most beloved classic gasoline and diesel vehicles into clean energy classics like our fully electric first generation Ford Bronco. We've also restored custom 1960s-era Mustangs, Land Rover Defender 110s, and Porsche 911s -- which you can check out in my recent interview with MEL Magazine, our website or our Instagram.
In the meantime, I'm here to answer anything about electric cars, refurbishing classic cars, and anything else that comes to mind -- AMA!
As part of a European Union project, we aim to autonomously monitor habitats. Regular monitoring of individual plant species allows for more sophisticated decision-making. The video was recorded in Perugia, Italy.
What can we learn from nature? What skills from the animal world can be used for industrial applications? Festo has been dealing with these questions in the Bionic Learning Network for years. In association with universities, institutes and development companies, we are developing research platforms whose basic technical principles are based on nature. A recurring theme here is the unique movements and functions of the elephant’s trunk.
We are proud to announce the relaunch of Misty, providing you with a more intuitive and easy-to-use robot platform! So what’s new, we hear you ask? To begin with, we have updated Misty’s conversational skills, improving NLU capabilities and adding more languages. Python is now our primary programming language, complemented by enhanced Blockly drag-and-drop functionality. We think you will really enjoy our brand-new Misty Studio, which is both more user friendly and offers improved features.
We developed a self-contained end-effector for layouting on construction sites with aerial robots! The end-effector achieves high accuracy through the use of multiple contact points, compliance, and actuation.
The compliance and conformability of soft robots provide inherent advantages when working around delicate objects or in unstructured environments. However, rapid locomotion in soft robotics is challenging due to the slow propagation of motion in compliant structures, particularly underwater. Taking inspiration from cephalopods, here we present an underwater robot with a compliant body that can achieve repeatable jet propulsion by changing its internal volume and cross-sectional area to take advantage of jet propulsion as well as the added mass effect.
If you want to be at the cutting-edge of your research field and publish impactful research papers, you need the most cutting-edge hardware. Our technology is unique (we own the relevant IP), unrivaled and a must-have tool for those in robotics research.
Hardware platforms for socially interactive robotics can be limited by cost or lack of functionality. This article presents the overall system—design, hardware, and software—for Quori, a novel, affordable, socially interactive humanoid robot platform for facilitating non-contact human-robot interaction (HRI) research.
We present hybrid adhesive end-effectors for bimanual handling of deformable objects. The end-effectors are designed with features meant to accommodate surface irregularities in macroscale form, mesoscale waviness, and microscale roughness, achieving good shear adhesion on surfaces with little gripping force. The new gripping system combines passive mechanical compliance with a hybrid electrostatic-adhesive pad so that humanoid robots can grasp a wide range of materials including paperboard and textured plastics.
At the National Centre of Competence in Research (NCCR) Robotics, a new generation of robots that can work side by side with humans—fighting disabilities, facing emergencies, and transforming education—is being developed.
The OS-150 Robotics Laboratory is Lawrence Livermore National Laboratory’s facility for testing autonomous drones, vehicles, and robots of the future. The Lab, informally known as the “drone pen,” allows operators to pilot drones safely and build trust with their robotic teammates.
Over the past ten years, AI has experienced breakthrough after breakthrough in fields as diverse as computer vision, speech recognition, and protein folding prediction. Many of these advancements hinge on the deep learning work conducted by our guest, Geoff Hinton, who has fundamentally changed the focus and direction of the field. Geoff joins Pieter Abbeel in our two-part season finale for a wide-ranging discussion inspired by insights gleaned from Hinton’s journey from academia to Google Brain.
In 2014, Ukrainian soldiers fighting in Crimea knew that the sight of Russian drones would soon be followed by a heavy barrage of Russian artillery. During that war, the Russian military integrated drones into tactical missions, using them to hunt for Ukrainian forces, whom they then pounded with artillery and cannon fire. Russian drones weren’t as advanced as those of their Western counterparts, but the Russian military’s integration of drones into its battlefield tactics was second to none.
Eight years later, the Russians are again invading Ukraine. And since the earlier incursion, the Russian military has spent approximately US $9 billion to domestically produce an armada of some 500 drones (a.k.a. unmanned aerial vehicles, or UAVs). But, astonishingly, three weeks into this invasion, the Russians have not had anywhere near their previous level of success with their drones. There are even signs that in the drone war, the Ukrainians have an edge over the Russians.
How could the drone capabilities of these two militaries have experienced such differing fortunes over the same period? The answer lies in a combination of trade embargoes, tech development, and the rising importance of countermeasures.
Since 2014’s invasion of Crimea, Russia’s drone-development efforts have lagged—during a time of dynamic evolution and development across the UAV industry.
First, some background. Military drones come in a wide variety of sizes, purposes, and capabilities, but they can be grouped into a few categories. On one end of the spectrum are relatively tiny flying bombs, small enough to be carried in a rucksack. On the other end are high-altitude drones, with wingspans up to 25 meters and capable of staying aloft for 30 or 40 hours, of being operated from consoles thousands of kilometers from the battlefield, and of firing air-to-surface missiles with deadly precision. In between are a range of intermediate-size drones used primarily for surveillance and reconnaissance.
Russia’s fleet of drones includes models in each of these categories. However, sanctions imposed after the 2014 invasion of Crimea blocked the Russian military from procuring some key technologies necessary to stay on the cutting edge of drone development, particularly in optics, lightweight composites, and electronics. With relatively limited capabilities of its own in these areas, Russia’s drone development efforts became somewhat sluggish during a time of dynamic evolution and development elsewhere.
Current stalwarts in the Russian arsenal include the Zala Kyb, which is a “loitering munition” that can dive into a target and explode. The most common Russian drones are midsize ones used for surveillance and reconnaissance. These include the Eleron-3SV and the Orlan-10 drones, both of which have been used extensively in Syria and Ukraine. In fact, just last week, an Orlan-10 operator was awarded a military medal for locating a site from which Ukrainian soldiers were ambushing Russian tanks, and also a Ukrainian basing area outside Kyiv containing ten artillery pieces, which were subsequently destroyed. Russia’s only large, missile-firing drone is the Kronshtadt Orion, which is similar to the American MQ-1 Predator and can be used for precision strikes as well as reconnaissance. An Orion was credited with an air strike on a command center in Ukraine in early March 2022.
Meanwhile, since the 2014 Crimea war, when they had no drones at all, the Ukrainians have methodically assembled a modest but highly capable set of drones. The backbone of the fleet, with some 300 units fielded, are the A1-SM Fury and the Leleka-100 reconnaissance drones, both designed and manufactured in Ukraine. The A1-SM Fury entered service in April 2020, and the Leleka-100, in May, 2021.
The heavy hitter for Ukraine in this war, though, is the Bayraktar TB2 drone, a combat aerial flyer with a wingspan of 12 meters and an armament of four laser-guided bombs. As of the beginning of March, and after losing two TB2s to Russian-backed separatist forces in Lugansk, Ukraine had a complement of 30 of the drones, which were designed and developed in Turkey. These drones are specifically aimed at destroying tanks and as of 24 March had been credited with destroying 26 vehicles, 10 surface-to-air missile systems, and 3 command posts. Various reports have put the cost of a TB2 at anywhere from $1 million to $10 million. It’s much cheaper than the tens of millions fetched for better-known combat drones, such as the MQ-9 Reaper, the backbone of the U.S. Air Force’s fleet of combat drones.
The Ukrainian arsenal also includes the Tu-141 reconnaissance drones, which are large, high-altitude Soviet-era drones that have had little success in the war. At the small end of the Ukraine drone complement are 100 Switchblade drones, which were donated by the United States as part of the $800 million weapons package announced on 16 March. The Switchblades are loitering munitions similar in size and functionality to the Russian Zala Kyb.
The upshot is that on offense, the Ukrainian and Russian militaries are closely matched in the drone war. The difference is on defense: Ukraine has the advantage when it comes to counter-drone technology. A decade ago, counter-drone technology mostly meant using radar to detect drones and surface-to-air missiles to shoot them down. It quickly proved far too costly and ineffective. Drone technology advanced at a brisk pace over the past decade, so counter-drone technology had to move rapidly to keep up. In Russia, it didn’t. Here, again, the Russian military was hampered by technology embargoes and a domestic industrial base that has been somewhat stagnant and lacking in critical capabilities. For contrast, the combined industrial base of the countries supporting Ukraine in this war is massive and has invested heavily in counter-drone technology.
Russia has deployed electronic warfare systems to counter enemy drones and has likely been using the Borisoglebsk 2 MT-LB and R-330Zh Zhitel systems, which use a combination of jamming and spoofing. These systems fill the air with radio-frequency energy, increasing the noise threshold to such a level that the drone cannot distinguish control signals from the remote pilot. Another standard counterdrone technique is sending false signals to the drone, with the most common being fake (“spoofed”) GPS signals, which disorient the flyer. Jamming and spoofing systems are easy to target because they emit radio-frequency waves at fairly high intensities. In fact, open-source images show that Ukrainian forces have already destroyed three of these Russian counterdrone systems.
Additionally, some of the newer drones being used by the Ukrainians include features to withstand such electronic attacks. For example, when one of these drones detects a jamming signal, it switches to frequencies that are not being jammed; if it is still unable to reestablish a connection, the drone operates autonomously with a series of preset maneuvers until a connection can be reestablished.
Meanwhile, Ukraine has access to the wide array of NATO counterdrone technologies. The exact systems that have been provided to the Ukrainians are not publicly known, but it’s possible to make an educated guess from among the many systems available. One of the more powerful ones, from Lockheed Martin, repurposes a solid-state, phased-array radar system developed to spot incoming munitions to detect and identify a drone. The system then tracks the drone and uses high-energy lasers to shoot it down. Raytheon’s counterdrone portfolio includes similar capabilities, along with drone-killing drones and systems capable of beaming high-power microwaves that disrupt the drone’s electronics.
While most major Western defense contractors have some sort of counterdrone system, there has also been significant innovation in the commercial sector, given the mass proliferation of commercial drones. While many of these technologies are aimed at smaller drones, some of the technologies, including acoustic sensing and radio-frequency localization, are effective against larger drones as well. Also, a dozen small companies have developed jamming and spoofing systems specifically aimed at countering modern drones.
Although we don’t know specifically which counterdrone systems are being deployed by the Ukrainians, the images of the destroyed drones tell a compelling story. In the drone war, many of the flyers on both sides have been captured or destroyed on the ground, but more than half were disabled while in flight. The destroyed Ukrainian drones often show tremendous damage, including burn marks and other signs that they were shot down by a Russian surface-to-air missile. A logical conclusion is that the Russians’ electronic counterdrone systems were not effective. Meanwhile, the downed Russian drones are typically much more intact, showing relatively minor damage consistent with a precision strike from a laser or electromagnetic pulse. This is exactly what you would expect if the drones had been dispatched by one of the newer Western counterdrone systems.
In the first three weeks of this conflict, Russian drones have failed to achieve the level of success that they did in 2014. The Ukrainians, on the other hand, have logged multiple victories with drone and counterdrone forces assembled in just 8 years. The Russian drones, primarily domestically sourced, have been foiled repeatedly by NATO counterdrone technology. Meanwhile, the Ukrainian drones, such as the TB2s procured from NATO-member Turkey, have had multiple successes against the Russian counterdrone systems.
ProWritingAid vs. Grammarly: When it comes to English grammar checkers, there are two big players that everyone knows of: Grammarly and ProWritingAid. If you are wondering which one to choose, this detailed comparison will help you pick the best one for you. Let's start.
What is Grammarly?
Grammarly is a tool that checks for grammatical errors, spelling, and punctuation. It gives you comprehensive feedback on your writing. You can use this tool to proofread and edit articles, blog posts, emails, etc.
Grammarly detects all types of mistakes, including sentence-structure issues and misused words. It also gives you suggestions on style changes, punctuation, spelling, and grammar, all in real time. The free version covers the basics, like identifying grammar and spelling mistakes,
whereas the Premium version offers a lot more functionality: it detects plagiarism in your content, suggests word choices, and adds fluency to your writing.
Features of Grammarly
Spelling and Word Suggestions: Grammarly detects basic to advanced grammatical errors, explains why something is an error, and suggests how you can improve it.
Create a Personal Dictionary: The Grammarly app allows you to add words to your personal dictionary so that the same mistake isn't highlighted every time you run Grammarly.
Different English Styles: Checks spelling for American, British, Canadian, and Australian English.
Plagiarism: This feature helps you detect if a text has been plagiarized by comparing it with over eight billion web pages.
Wordiness: This tool will help you check your writing for long and hard-to-read sentences. It also shows you how to shorten sentences so that they are more concise.
Passive Voice: The program also notifies users when passive voice is used too frequently in a document.
Punctuations: This feature flags all incorrect and missing punctuation.
Repetition: The tool provides recommendations for replacing the repeated word.
Prepositions: Grammarly identifies misplaced and confused prepositions.
Plugins: It offers Microsoft Word, Microsoft Outlook, and Google Chrome plugins.
What is ProWritingAid?
ProWritingAid is a style and grammar checker for content creators and writers. It helps to optimize word choice, punctuation errors, and common grammar mistakes, providing detailed reports to help you improve your writing.
ProWritingAid can be used as an add-on to WordPress, Gmail, and Google Docs. The software also offers helpful articles, videos, quizzes, and explanations to help improve your writing.
Features of ProWritingAid
Here are some key features of ProWritingAid:
Grammar checker and spell checker: This tool helps you to find all grammatical and spelling errors.
Find repeated words: The tool also allows you to search for repeated words and phrases in your content.
Context-sensitive style suggestions: The tool detects the style of writing you intend and suggests whether your text flows well.
Check the readability of your content: ProWritingAid helps you identify the strengths and weaknesses of your article by pointing out difficult sentences and paragraphs.
Sentence Length: It also indicates the length of your sentences.
Check grammatical errors: It checks your work for any grammatical errors or typos.
Overused words: As a writer, you might find yourself using the same word repeatedly. ProWritingAid's overused words checker helps you avoid this lazy writing mistake.
Consistency: Check your work for inconsistent usage of open and closed quotation marks.
Echoes: Check your writing for words and phrases repeated in close proximity.
Difference between Grammarly and Pro-Writing Aid
Grammarly and ProWritingAid are well-known grammar-checking software. However, if you're like most people who can't decide which to use, here are some different points that may be helpful in your decision.
Grammarly vs ProWritingAid
Grammarly is a writing enhancement tool that offers suggestions for grammar, vocabulary, and syntax whereas ProWritingAid offers world-class grammar and style checking, as well as advanced reports to help you strengthen your writing.
Grammarly provides Android and iOS apps, whereas ProWritingAid doesn't have a mobile app.
Grammarly offers important suggestions about mistakes you've made, whereas ProWritingAid shows more suggestions than Grammarly, but not all of its recommendations are accurate.
Grammarly has a more friendly UI/UX, whereas the ProWritingAid interface is not as friendly as Grammarly's.
Grammarly is an accurate grammar checker for non-fiction writing whereas ProWritingAid is an accurate grammar checker for fiction writers.
Grammarly finds grammar and punctuation mistakes, whereas ProWritingAid identifies run-on sentences and fragments.
Grammarly provides 24/7 support via submitting a ticket and sending emails. ProWritingAid’s support team is available via email, though the response time is approximately 48 hours.
Grammarly offers many features in its free plan, whereas ProWritingAid offers some basic features in the free plan.
Grammarly does not offer much feedback on big picture writing; ProWritingAid offers complete feedback on big picture writing.
Grammarly is a better option for accuracy, whereas ProWritingAid is better for handling fragmented sentences and dialogue. It can be quite useful for fiction writers.
ProWritingAid VS Grammarly: Pricing Difference
ProWritingAid comes with three pricing structures. The full-year cost of ProWritingAid is $79, while its lifetime plan costs $339. You also can opt for a monthly plan at $20.
Grammarly offers a Premium subscription at $30/month on a monthly plan, $20/month billed quarterly, and $12/month billed annually.
The Business plan costs $12.50 per month for each member of your company.
ProWritingAid vs Grammarly – Pros and Cons
Grammarly Pros:
It allows you to fix common mistakes like grammar and spelling.
Offers most features in the free plan
Allows you to edit a document without affecting the formatting.
Active and passive voice checker
Plagiarism checker (paid version)
Proofread your writing and correct all punctuation, grammar, and spelling errors.
Helps users improve vocabulary
Browser extensions and MS word add-ons
Available on all major devices and platforms
Grammarly will also offer suggestions to improve your style.
Enhance the readability of your sentence
Free mobile apps
Offers free version
Grammarly Cons:
Supports only English
Customer support only via email
Limits to 150,000 words
Subscription plans can be a bit pricey
Plagiarism checker is only available in a premium plan
Doesn’t offer a free trial
No refund policy
The free version is ideal for basic spelling and grammatical mistakes, but it does not correct advanced writing issues.
Some features are not available for Mac.
ProWritingAid Pros:
It offers more than 20 different reports to help you improve your writing.
Less expensive than other grammar checkers.
This tool helps you strengthen your writing style as it offers big-picture feedback.
ProWritingAid has a life plan with no further payments required.
Compatible with Google Docs!
Prowritingaid works on both Windows and Mac.
They offer more integrations than most tools.
ProWritingAid Cons:
Editing can be a little more time-consuming when you add larger passages of text.
ProWritingAid currently offers no mobile app for Android or iOS devices.
Plagiarism checker is only available in premium plans.
All recommendations are not accurate
Summarizing ProWritingAid vs. Grammarly: My Recommendation
As both writing assistants are great in their own way, you need to choose the one that suits you best.
For example, go for Grammarly if you are a non-fiction writer
Go for ProWritingAid if you are a fiction writer.
ProWritingAid is better at catching errors found in long-form content. However, Grammarly is more suited to short blog posts and other similar tasks.
ProWritingAid helps you clean up your writing by checking for style, structure, and content while Grammarly focuses on grammar and punctuation.
Grammarly has a more friendly UI/UX, whereas ProWritingAid offers complete feedback on big-picture writing.
Both ProWritingAid and Grammarly are awesome writing tools, without a doubt. But in my experience, Grammarly is the winner here because it helps you review and edit your content. Grammarly highlights all the mistakes in your writing within seconds of copying and pasting the content into Grammarly’s editor or using the software’s native feature in other text editors.
Not only does it identify tiny grammatical and spelling errors, it tells you when you overlook punctuation where it is needed. And, beyond its plagiarism-checking capabilities, Grammarly helps you proofread your content. Even better, the software offers a free plan that gives you access to some of its features.
SEMrush and Ahrefs are among the most popular tools in the SEO industry. Both companies have been in business for years and have thousands of customers per month.
If you're a professional SEO or trying to do digital marketing on your own, at some point you'll likely consider using a tool to help with your efforts. Ahrefs and SEMrush are two names that will likely appear on your shortlist.
In this guide, I'm going to help you learn more about these SEO tools and how to choose the one that's best for your purposes.
What is SEMrush?
SEMrush is a popular SEO tool with a wide range of features—it's the leading competitor-research service for online marketers. SEMrush's Keyword Magic tool offers over 20 billion Google-approved keywords, which are constantly updated; it's the largest keyword database available.
The program was developed in 2007 as SeoQuake, a small Firefox extension.
Features of SEMrush
Most accurate keyword data: Accurate keyword search volume data is crucial for SEO and PPC campaigns, allowing you to identify which keywords are most likely to bring in big sales from ad clicks. SEMrush constantly updates its databases and provides the most accurate data.
Largest Keyword database: SEMrush's Keyword Magic Tool now features 20-billion keywords, providing marketers and SEO professionals the largest database of keywords.
All SEMrush users receive daily ranking data, mobile volume information, and the option to buy additional keywords by default with no additional payment or add-ons needed
Most accurate position tracking tool: This tool provides all subscribers with basic tracking capabilities, making it suitable for SEO professionals. Plus, the Position Tracking tool provides local-level data to everyone who uses the tool.
SEO Data Management: SEMrush makes managing your online data easy by allowing you to create visually appealing custom PDF reports, including Branded and White Label reports, report scheduling, and integration with GA, GMB, and GSC.
Toxic link monitoring and penalty recovery: With SEMrush, you can make a detailed analysis of toxic backlinks, toxic scores, toxic markers, and outreach to those sites.
Content Optimization and Creation Tools: SEMrush offers content optimization and creation tools that let you create SEO-friendly content. Some features include the SEO Writing Assistant, On-Page SEO Checker, SEO Content Template, Content Audit, Post Tracking, and Brand Monitoring.
What is Ahrefs?
Ahrefs is a leading SEO platform that offers a set of tools to grow your search traffic, research your competitors, and monitor your niche. The company was founded in 2010, and it has become a popular choice among SEO tools. Ahrefs has a keyword index of over 10.3 billion keywords and offers accurate and extensive backlink data, updated every 15-30 minutes, drawing on the world's most extensive backlink index database.
Backlink alerts data and new keywords: Get an alert when your site is linked to or discussed in blogs, forums, comments, or when new keywords are added to a blog posting about you.
Intuitive interface: The intuitive design of the widget helps you see the overall health of your website and search engine ranking at a glance.
Site Explorer: The Site Explorer will give you an in-depth look at your site's search traffic.
Reports with charts and graphs
A question explorer that provides well-crafted topic suggestions
Direct Comparisons: Ahrefs vs SEMrush
Now that you know a little more about each tool, let's take a look at how they compare. I'll analyze each tool to see how they differ in interfaces, keyword research resources, rank tracking, and competitor analysis.
Ahrefs and SEMrush both offer comprehensive information and quick metrics regarding your website's SEO performance. However, Ahrefs takes a bit more of a hands-on approach to getting your account fully set up, whereas SEMrush's simpler dashboard can give you access to the data you need quickly.
In this section, we provide a brief overview of the elements found on each dashboard and highlight the ease with which you can complete tasks.
The Ahrefs dashboard is less cluttered than that of SEMrush, and its primary menu is at the very top of the page, with a search bar designed only for entering URLs.
Additional features of the Ahrefs platform include:
You can see analytics from the dashboard, from search engine rankings to domain ratings, referring domains, and backlinks.
Jumping from one tool to another is easy. You can use the Keyword Explorer to find a keyword to target and then directly track your ranking with one click.
The website offers a tooltip helper tool that allows you to hover your mouse over something that isn't clear and get an in-depth explanation.
When you log into the SEMrush tool, you will find four main modules. These include information about your domains, organic keyword analysis, ad keywords, and site traffic.
You'll also find some other options, like:
A search bar allows you to enter a domain, keyword, or anything else you wish to explore.
A menu on the left side of the page provides quick links to relevant information, including marketing insights, projects, keyword analytics, and more.
The customer support resources located directly within the dashboard can be used to communicate with the support team or to learn about other resources such as webinars and blogs.
Detailed descriptions of every resource offered. This detail is beneficial for new marketers who are just starting out.
Both Ahrefs and SEMrush have user-friendly dashboards, but Ahrefs is less cluttered and easier to navigate. On the other hand, SEMrush offers dozens of extra tools, including access to customer support resources.
When deciding on which dashboard to use, consider what you value in the user interface, and test out both.
If you're looking to track your website's search engine ranking, rank tracking features can help. You can also use them to monitor your competitors.
Let's take a look at Ahrefs vs. SEMrush to see which tool does a better job.
The Ahrefs Rank Tracker is simpler to use. Just type in the domain name and keywords you want to analyze, and it spits out a report showing you the search engine results page (SERP) ranking for each keyword you enter.
Rank Tracker looks at the ranking performance of keywords and compares them with the top rankings for those keywords. Ahrefs also offers:
You'll see metrics that help you understand your visibility, traffic, average position, and keyword difficulty.
It gives you an idea of whether a keyword would be profitable to target or not.
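Metrics like these are easy to sketch. The position data and click-through-rate curve below are made-up illustrations of how such numbers could be combined, not Ahrefs' actual data or formulas:

```python
# Illustrative rank-tracking metrics. The positions and the CTR-by-position
# curve are assumed example values, not Ahrefs' data or methodology.

# keyword -> current SERP position (None = not ranking)
positions = {"seo tools": 3, "rank tracker": 7, "backlink audit": None}

# Rough click-through rates by SERP position (assumed values)
CTR_BY_POSITION = {1: 0.32, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                   6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.018}

ranked = [p for p in positions.values() if p is not None]
average_position = sum(ranked) / len(ranked)

# "Visibility": average estimated CTR across all tracked keywords,
# counting unranked keywords as zero.
visibility = sum(CTR_BY_POSITION.get(p, 0.0)
                 for p in positions.values() if p is not None) / len(positions)

print(f"average position: {average_position:.1f}")
print(f"visibility: {visibility:.1%}")
```

The idea is simply that a higher position translates into a larger expected share of clicks, which is what a visibility score summarizes.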
SEMrush offers a tool called Position Tracking. This tool is a project tool—you must set it up as a new project. Below are a few of the most popular features of the SEMrush Position Tracking tool:
All subscribers are given regular data updates and mobile search rankings upon subscribing.
The platform provides opportunities to track several SERP features, including Local tracking.
Intuitive reports allow you to track statistics for the pages on your website, as well as the keywords used in those pages.
Identify pages that may be competing with each other using the Cannibalization report.
Ahrefs is a more user-friendly option. It takes seconds to enter a domain name and keywords. From there, you can quickly decide whether to proceed with that keyword or figure out how to rank better for other keywords.
SEMrush allows you to check your mobile rankings and get ranking updates daily, which Ahrefs does not offer. SEMrush also offers social media rankings, a tool you won't find within the Ahrefs platform. Both are good; let me know in the comments which one you prefer.
Keyword research is closely related to rank tracking, but it's used for deciding which keywords you plan on using for future content rather than those you use now.
When it comes to SEO, keyword research is the most important thing to consider when comparing the two platforms.
The Ahrefs Keyword Explorer provides you with thousands of keyword ideas and filters search results based on the chosen search engine.
Ahrefs supports several features, including:
It can search multiple keywords in a single query and analyze them together. SEMrush offers the same capability in its Keyword Overview.
Ahrefs has a variety of keywords for different search engines, including Google, YouTube, Amazon, Bing, Yahoo, Yandex, and other search engines.
When you click on a keyword, you can see its search volume and keyword difficulty, but also other keywords related to it, which you didn't use.
SEMrush's Keyword Magic Tool has over 20 billion keywords for Google. You can type in any keyword you want, and a list of suggested keywords will appear.
The Keyword Magic Tool also lets you:
Show performance metrics by keyword
Filter search results by both broad and exact keyword matches.
Show data like search volume, trends, keyword difficulty, and CPC.
Show the first 100 Google search results for any keyword.
Identify SERP Features and Questions related to each keyword
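To see how metrics like volume and difficulty can be combined in practice, here is a tiny prioritization sketch. The keyword data and the scoring formula are illustrative assumptions, not SEMrush's own numbers or methodology:

```python
# Hypothetical keyword data (volume, difficulty 0-100, CPC in USD).
# Both the values and the scoring formula are illustrative assumptions.
keywords = [
    {"kw": "seo tools",      "volume": 40500, "difficulty": 85, "cpc": 4.20},
    {"kw": "free seo audit", "volume": 5400,  "difficulty": 52, "cpc": 6.10},
    {"kw": "what is serp",   "volume": 2900,  "difficulty": 30, "cpc": 1.10},
]

def opportunity(k):
    # Favor high search volume that is still realistically winnable:
    # discount volume by how hard the keyword is to rank for.
    return k["volume"] * (1 - k["difficulty"] / 100)

ranked = sorted(keywords, key=opportunity, reverse=True)
for k in ranked:
    print(f'{k["kw"]:16} score={opportunity(k):8.0f} cpc=${k["cpc"]:.2f}')
```

A real workflow would export this data from the Keyword Magic Tool and tune the scoring to your site's authority, but the shape of the calculation is the same.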
SEMrush has released a new Keyword Gap Tool that uncovers potentially useful keyword opportunities for you, including both paid and organic keywords.
Both of these tools offer keyword research features and allow users to break down complicated tasks into something that can be understood by beginners and advanced users alike.
If you're interested in keyword suggestions, SEMrush appears to have more keyword suggestions than Ahrefs does. It also continues to add new features, like the Keyword Gap tool and SERP Questions recommendations.
Both platforms offer competitor analysis tools, eliminating the need to come up with keywords off the top of your head. Each tool helps you find the keywords your competitors rank for, so you know which ones will be valuable to you.
Ahrefs' domain comparison tool lets you compare up to five websites (your website and four competitors) side by side. It also shows you how your site ranks against the others on metrics such as backlinks, domain ratings, and more.
Use the Competing Domains section to see a list of your most direct competitors, and explore how many keyword matches your competitors have.
To find more information about your competitor, you can look at the Site Explorer and Content Explorer tools and type in their URL instead of yours.
SEMrush provides a variety of insights into your competitors' marketing tactics. The platform enables you to research your competitors effectively. It also offers several resources for competitor analysis including:
Traffic Analytics helps you identify where your audience comes from, how they engage with your site, what devices visitors use to view your site, and how your audiences overlap with other websites.
SEMrush's Organic Research examines your website's major competitors and shows their organic search rankings, the keywords they are ranking for, whether they are ranking for any SERP features, and more.
The Market Explorer search field allows you to type in a domain and lists websites or articles similar to what you entered. Market Explorer also allows users to perform in-depth data analytics on these companies and markets.
SEMrush wins here because it has more tools dedicated to competitor analysis than Ahrefs. However, Ahrefs offers a lot of functionality in this area, too. It takes a combination of both tools to gain an advantage over your competition.
Ahrefs pricing:
Lite Monthly: $99/month
Standard Monthly: $179/month
Annually Lite: $990/year
Annually Standard: $1790/year
SEMrush pricing:
Pro Plan: $119.95/month
Business Plan: $449.95/month
Which SEO tool should you choose for digital marketing?
When it comes to keyword research data, it can be hard to decide between the two.
Consider choosing Ahrefs if you:
Like a clean, friendly interface
Are searching for simple keyword suggestions
Want keywords for more search engines, like Amazon, Bing, Yahoo, Yandex, Baidu, and more
Consider SEMrush if you:
Want more marketing and SEO features
Need a competitor analysis tool
Need to keep your backlinks profile clean
Are looking for more keyword suggestions for Google
Both tools are great. Choose the one that meets your requirements, and if you have experience using either Ahrefs or SEMrush, let me know in the comment section which works well for you.
Match ID: 163 Score: 5.00 source: www.crunchhype.com age: 116 days qualifiers: 3.57 google, 1.43 amazon
Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.
IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.
Now researchers at MIT have managed both to reduce the size of the qubits and to do so in a way that reduces the interference between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.
“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”
The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.
Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).
Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Nathan Fiske/MIT
In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.
As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
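A rough parallel-plate estimate shows why a thin, low-loss dielectric shrinks the capacitor so dramatically compared with the open-face coplanar design. The target capacitance, hBN thickness, and permittivity below are assumed ballpark figures for illustration, not values from the MIT paper:

```python
# Back-of-the-envelope parallel-plate capacitor sizing with an hBN
# dielectric. All numeric inputs are assumed ballpark values.
EPS0 = 8.854e-12     # vacuum permittivity, F/m
eps_hbn = 3.3        # assumed out-of-plane relative permittivity of hBN
thickness = 3.3e-9   # ~10 atomic monolayers at ~0.33 nm each (assumed)
C_target = 100e-15   # ~100 fF, a typical transmon shunt capacitance (assumed)

# C = eps0 * eps_r * A / d  =>  required plate area A = C * d / (eps0 * eps_r)
area = C_target * thickness / (EPS0 * eps_hbn)
side_um = area ** 0.5 * 1e6

print(f"plate area: {area * 1e12:.1f} um^2 (side ~{side_um:.1f} um)")
print("versus ~100 x 100 um plates in a typical coplanar design")
```

Under these assumptions the plates shrink from roughly 100 micrometers on a side to a few micrometers, which is consistent in spirit with the large density gain the article describes.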
In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.
On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.
While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.
“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”
This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.
“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.
Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
Match ID: 164 Score: 5.00 source: spectrum.ieee.org age: 138 days qualifiers: 3.57 trade, 1.43 development
Are you a devoted Chrome user? That's nice to hear. But first, consider whether there are any essential Chrome extensions currently missing from your browsing life. Here we're going to share 10 of the best Chrome extensions, perfect for everyone. Let's start.
When you have too many passwords to remember, LastPass remembers them for you.
This Chrome extension is an easy way to save time and increase security. It's a password manager that logs you into all of your accounts; you only need to remember one password, your LastPass master password, to access them all.
Save usernames and passwords, and LastPass will log you in automatically.
Fill out forms quickly by saving your addresses, credit card numbers, and more.
MozBar is an SEO toolbar extension that makes it easy to analyze your web pages' SEO while you surf. You can customize your search so that you see data for a particular region or for all regions. You get data such as website and domain authority and link profile. The status column tells you whether there are any no-followed links to the page. You can also compare link metrics. There is a pro version of MozBar, too.
Grammarly is a real-time grammar checking and spelling tool for online writing. It checks spelling, grammar, and punctuation as you type, and has a dictionary feature that suggests related words. If you write on a mobile phone, Grammarly also has a mobile keyboard app.
VidIQ is a SaaS product and Chrome Extension that makes it easier to manage and optimize your YouTube channels. It keeps you informed about your channel's performance with real-time analytics and powerful insights.
Learn more about insights and statistics beyond YouTube Analytics
Find great videos with the Trending tab.
You can check out any video’s YouTube rankings and see how your own video is doing on the charts.
Track a keyword's history to determine whether it is rising or falling in popularity over time.
Quickly find out which videos are performing the best on YouTube right now.
Let this tool suggest keywords for you to use in your title, description and tags.
ColorZilla is a browser extension that allows you to find out the exact color of any object in your web browser. This is especially useful when you want to match elements on your page to the color of an image.
Advanced Color Picker (similar to Photoshop's)
Ultimate CSS Gradient Generator
The "Webpage Color Analyzer" site helps you determine the palette of colors used in a particular website.
Palette Viewer with 7 pre-installed palettes
Eyedropper - sample the color of any pixel on the page
Color History of recently picked colors
Displays some info about the element, including the tag name, class, id and size.
Auto copy picked colors to clipboard
Get colors of dynamic hover elements
Pick colors from Flash objects
Pick colors at any zoom level
Honey is a Chrome extension that saves products from the websites you browse and notifies you when they are available at a lower price. It's also one of the top Chrome extensions for finding coupon codes whenever you shop online.
Best for finding exclusive prices on Amazon.
A free reward program called Honey Gold.
Searches for and filters the best price to fit your needs.
7. GMass: Powerful Chrome Extension for Gmail Marketers
GMass (or Gmail Mass) lets users compose and send mass emails using Gmail. It's a great tool because you can use it as a replacement for a third-party email sending platform, boosting your emailing functionality right inside Gmail.
8. Notion Web Clipper: Chrome Extension for Geeks
It's a Chrome extension for geeks that enables you to highlight and save what you see on the web.
It's been designed by Notion, a Google Workspace alternative that helps teams craft better ideas and collaborate effectively.
Save anything online with just one click
Use it on any device
Organize your saved clips quickly
Tag, share and comment on the clips
If you are someone who works online, you need to surf the internet to get your business done. And often there is no time to read or analyze something. But it's important that you do it. Notion Web Clipper will help you with that.
9. WhatFont: Chrome Extension for identifying Any Site Fonts
WhatFont is a Chrome extension that allows web designers to easily identify and compare different fonts on a page. The first time you use it on any page, WhatFont copies the selected page, uses it to find out which fonts are present, and generates an image showing all those fonts in different sizes. Besides obvious websites like Google or Amazon, you can also use it on sites where embedded fonts are used.
SimilarWeb is an SEO add-on for both Chrome and Firefox. It allows you to check website traffic and key metrics for any website, including engagement rate, traffic ranking, keyword ranking, and traffic source. This is a good tool if you are looking to find new and effective SEO strategies as well as analyze trends across the web.
Discover keyword trends
Know fresh keywords
Get benefit from the real traffic insights
Analyze engagement metrics
Explore unique visitors data
Analyze your industry's category
Use month to date data
How to Install Chrome Extensions on Android
Everyone knows how to install extensions on a PC, but most people don't know how to install them on an Android phone, so here's how to do it on Android.
1. Download Kiwi browser from Play Store and then Open it.
2. Tap the three dots at the top right corner and select Extension.
3. Click on (+From Store) to access the Chrome Web Store, or simply search for the Chrome Web Store and access it.
4. Once you find an extension, click "Add to Chrome." A message will pop up asking you to confirm your choice. Hit OK to install the extension in the Kiwi browser.
5. To manage extensions on the browser, tap the three dots in the upper right corner. Then select Extensions to access a catalog of installed extensions that you can disable, update or remove with just a few clicks.
Your Chrome extensions should install on Android, but there's no guarantee all of them will work, because Chrome extensions are not optimized for Android devices.
We hope this list of the 10 best Chrome extensions will help you pick the right ones. We selected these extensions by matching their features to the needs of different categories of people. Let me know in the comment section which extension you like the most.
Match ID: 165 Score: 5.00 source: www.crunchhype.com age: 145 days qualifiers: 3.57 google, 1.43 amazon
Email is the marketing tool that helps you create a seamless, connected, frictionless buyer journey. More importantly, email marketing allows you to build relationships with prospects, customers, and past customers. It's your chance to speak to them right in their inbox, at a time that suits them. Along with the right message, email can become one of your most powerful marketing channels.
2. What are the benefits of email marketing?
Email marketing is the best way to create long-term relationships with your clients and increase sales for your company.
Benefits of email marketing for business:
Better brand recognition
Statistics of what works best
More traffic to your products/services/newsletter
Most businesses use email marketing and make a lot of money with it.
3. What is the best day and time to send my marketing emails?
Again, the answer to this question varies from company to company. And again, testing is the way to find out what works best. Typically, weekends and mornings seem to be when many emails are opened, but since your audience may have different habits, it's best to experiment and then use your data to decide.
4. Which metrics should I be looking at?
The two most important metrics for email marketing are open rate and click-through rate. If your emails aren't opened, subscribers will never see your full marketing message, and if they open them but don't click through to your site, your emails won't convert.
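Both metrics are simple ratios over a campaign's delivery counts. A minimal sketch, using made-up example numbers:

```python
# Open rate and click-through rate for one campaign.
# The counts below are made-up example numbers.
delivered = 10_000   # emails that reached an inbox
opened    = 2_500
clicked   = 375

open_rate = opened / delivered
click_through_rate = clicked / delivered
click_to_open_rate = clicked / opened   # of those who opened, how many clicked

print(f"open rate:          {open_rate:.1%}")
print(f"click-through rate: {click_through_rate:.1%}")
print(f"click-to-open rate: {click_to_open_rate:.1%}")
```

The click-to-open rate is a useful third number: it separates how compelling the email body is from how compelling the subject line was.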
5. How do I write a decent subject line?
The best subject lines are short and to the point, accurately describing the content of the email, but also catchy and intriguing, so the reader wants to know more. Once again, this is the perfect place for A/B testing, to see what types of subject lines work best with your audience. Your call to action should be clear and simple. It should appear near the top of your email for those who don't finish reading, then be repeated at the end for those who read all the way through. It should state exactly what you want subscribers to do, for example, "Click here to download the premium theme for free."
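One standard way to judge such an A/B test is a two-proportion z-test on the open rates. This is generic statistics rather than any specific email tool's feature, and the send and open counts below are made-up examples:

```python
# Two-proportion z-test comparing open rates of two subject lines.
# The counts are made-up example numbers.
from math import sqrt, erf

sent_a, opens_a = 5000, 1150   # subject line A
sent_b, opens_b = 5000, 1290   # subject line B

p_a, p_b = opens_a / sent_a, opens_b / sent_b
p_pool = (opens_a + opens_b) / (sent_a + sent_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p_value:.4f}")
```

If the p-value is below your threshold (commonly 0.05), the difference in open rates is unlikely to be chance and the winning subject line can be rolled out.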
6. Is email marketing still effective?
Email marketing is one of the most effective ways for a business to reach its customers directly. Think about it. You don't post something on your site hoping people will visit it. You don't even post something on a social media page and hope fans see it. You're sending something straight to each person's inbox, where they'll definitely see it! Even if they don't open it, they'll still see your subject line and business name every time you send an email, so you're still communicating directly with your audience.
7. How do I grow my email subscriber list? Should I buy an email list or build it myself?
Buying an email list is a waste of time and money. Those email accounts are unverified, and their owners are not interested in your brand. A mailing list is useless if your subscribers do not open your emails. There are better ways to grow your mailing list.
Give them a free ebook hosted on a landing page where they have to enter their email to download the file. You can also create a forum page on your website that asks visitors what questions they have about your business and collects their email addresses so you can follow up with them.
8. How do I prevent my audience from unsubscribing?
If the subject line of an email is irrelevant to customers, they will ignore it. But if that keeps happening, they will get annoyed and unsubscribe from your emails. So send relevant emails that benefit the customer, and don't send emails that only focus on sales, offers, and discounts.
Send information about your business and offers so you can connect with customers. You can also update them on recent trends in your industry. The basic role of email is first and foremost to connect with customers, so get the most out of this tool.
9. What is the difference between a cold email and a spam email?
Cold emails are mostly sales emails sent with content aligned to the needs of the recipient. They are usually personalized and include a business perspective. However, a cold email is still an unsolicited email, and all unsolicited email can be marked as spam.
If your users regularly receive this type of unsolicited email, chances are your messages will soon be diverted to spam or junk folders. The most important way to prevent this is to respect your recipients' choice to opt out of receiving emails from you: add links to easily unsubscribe. You should also be familiar with the CAN-SPAM Act and its regulations.
10. Where can I find email template?
Almost all email campaign tools provide you with ready-made templates. Whether you use MailChimp or Pardot, you'll get several email templates ready to use.
However, if you want to create a template from scratch, you can do so. Most email campaign tools have an option to paste the HTML code of your own design.
11. What email marketing trend will help marketers succeed in 2022?
Is it a trend to listen to and get to know your customers? I think people realize how bad it feels when a brand or company obsesses over itself without knowing its customers' personal needs. People who listen empathetically and then provide value based on what they learn will win.
You can approach email marketing in different ways. We have compiled a list of the most frequently asked questions to help you understand how to get started, what constraints to keep in mind, and what future developments to watch. We don't have 100 percent answers for every situation, and there's always a chance you'll encounter something new and different as you market your own business.
Match ID: 166 Score: 5.00 source: www.crunchhype.com age: 147 days qualifiers: 3.57 google, 1.43 development
Inside the Vehicle Assembly Building (VAB) at NASA’s Kennedy Space Center in Florida—a cavernous structure built in the 1960s for constructing the Apollo program’s Saturn V rockets and, later, for preparing the space shuttle—the agency’s next big rocket is taking shape.
Tom Whitmeyer, NASA’s deputy associate administrator for exploration system development, recalled seeing the completed
Space Launch System (SLS) vehicle there in October, after the last component, the Orion spacecraft, was installed on top. To fully view the 98-meter-tall vehicle, he had to back off to the opposite side of the building.
“It’s taller than the Statue of Liberty,” he said at an
October 2021 briefing about the rocket’s impending launch. “And I like to think of it as the Statue of Liberty, because it’s [a] very engineering-complicated piece of equipment, and it’s very inclusive. It represents everybody.”
Perhaps so. But it’s also symbolic of NASA’s way of developing rockets, which is often characterized by cost overruns and delays. As this giant vehicle nears its first launch later this year, it runs the risk of being overtaken by commercial rockets that have benefited from new technologies and new approaches to development.
NASA’s newest rocket didn’t originate in the VAB, of course—it began life on Capitol Hill. In 2010, the Obama administration announced its intent to cancel NASA’s Constellation program for returning people to the moon, citing rising costs and delays. Some in Congress pushed back, worried about the effect on the space industry of canceling Constellation at the same time NASA was retiring its space shuttles.
The White House and Congress reached a compromise in a 2010 NASA authorization bill. It directed the agency to develop a new rocket, the Space Launch System, using technologies and contracts already in place for the shuttle program. The goal was to have a rocket capable of placing at least 70 tonnes into orbit by the end of 2016.
To achieve that, NASA extensively repurposed shuttle hardware. The core stage of SLS is a modified version of the external tank from the shuttle, with four
RS-25 engines developed for the shuttle mounted on its base. Attached to the sides of the core stage are two solid-rocket boosters, similar to those used on the shuttle but with five segments of solid fuel instead of four.
Mounted on top of the core stage is what’s called the
Interim Cryogenic Propulsion Stage, which is based on the upper stage for the Delta IV rocket and is powered by one RL10 engine, a design that has been used for decades. This stage will propel the Orion capsule to the moon or beyond after it has attained orbit. As the name suggests, this stage is a temporary one: NASA is developing a more powerful Exploration Upper Stage, with four RL10 engines. But it won’t be ready until the mid-2020s.
Even though SLS uses many existing components and was not designed for reusability, combining those components to create a new rocket proved more difficult than expected. The core stage, in particular, turned out to be surprisingly complex, as NASA struggled with the challenge of incorporating four engines. Once the first core stage was complete, it spent more than a year on a test stand at NASA’s
Stennis Space Center in Mississippi, including two static-fire tests of its engines, before going to the Kennedy Space Center for launch preparations.
Those difficulties pushed back the first SLS launch by years, although not all the problems were within NASA’s control. Hurricanes damaged the Stennis test stand as well as the New Orleans facility where the core stage is built. The pandemic also slowed the work, before and after all the components arrived at the VAB for assembly. “In Florida in August and September, it hit our area very hard,” said Mike Bolger, manager of the exploration ground systems program at NASA, describing the most recent wave of the pandemic at the October briefing.
Now, after years of delays, the first launch of the SLS is finally getting close. “Completing stacking [of the SLS] is a really important milestone. It shows that we’re in the home stretch,” said Mike Sarafin, NASA’s manager for the first SLS mission, called Artemis 1, at the same briefing.
After a series of tests inside the VAB, the completed vehicle will roll out to Launch Complex 39B. NASA will then conduct a practice countdown called a wet dress rehearsal—“wet” because the core stage will be loaded with liquid-hydrogen and liquid-oxygen propellants.
Controllers will go through the same steps as in an actual countdown, stopping just before the point where the RS-25 engines would normally ignite. “For us, on the ground, it’s a great chance to get the team and the ground systems wrung out and ready for launch,” Bolger said of the wet dress rehearsal.
This giant tank will help increase the capacity for storing liquid hydrogen at the Kennedy Space Center. Glenn Benson/NASA
After that test, the SLS will roll back to the VAB for final checks before returning to the pad for the actual launch. The earliest possible launch for Artemis 1 is 12 February 2022, but at the time of this writing, NASA officials said it was too soon to commit to a specific launch date.
“We won’t really be in a position to set a specific launch date until we have a successful wet dress [rehearsal],” Whitmeyer said. “We really want to see the results of that test, see how we’re doing, see if there’s anything we need to do, before we get ready to launch.”
To send the uncrewed Orion spacecraft to the moon on its desired trajectory, SLS will have to launch in one of a series of two-week launch windows, dictated by
a variety of constraints. The first launch window runs through 27 February. A second opens on 12 March and runs through 27 March, followed by a third from 8 to 23 April. Sarafin said there’s a “rolling analysis cycle” to calculate specific launch opportunities each day.
A complicating factor here is the supply of propellants available. The core stage’s tanks store 2 million liters of liquid hydrogen and almost three-quarters of a million liters of liquid oxygen, putting a strain on the liquid hydrogen available at the Kennedy Space Center.
“This rocket is so big, and we need so much liquid hydrogen, that our current infrastructure at the Kennedy Space Center just does not support an every-day launch attempt,” Sarafin said. If a launch attempt is postponed after the core stage is fueled, Bolger explained, NASA would have to wait days to try again. That’s because a significant fraction of liquid hydrogen is lost to boil-off during each launch attempt, requiring storage tanks to be refilled before the next attempt. “We are currently upgrading our infrastructure,” he said, but improvements like larger liquid hydrogen storage tanks won’t be ready until the second SLS mission in 2023. There’s no pressure to launch on a specific day, Sarafin said. “We’re going to fly when the hardware’s ready to fly.”
SLS is not the only game in town when it comes to large rockets. In a factory located just outside the gates of the Kennedy Space Center, Blue Origin, the spaceflight company founded by Amazon’s Jeff Bezos, is working on its New Glenn rocket. While not as powerful as SLS, its ability to place up to 45 tonnes into orbit outclasses most other rockets in service today. Moreover, unlike SLS, the rocket’s first stage is reusable, designed to land on a ship.
New Glenn and SLS do have something in common: development delays. Blue Origin once projected the first launch of the rocket to be in 2020. By early 2021, though, that launch date had slipped to no earlier than the fourth quarter of 2022.
A successful SpaceX Starship launch vehicle, fully reusable and able to place 100 tonnes into orbit, could also make the SLS obsolete.
A key factor in that schedule is the development of Blue Origin’s
BE-4 engine, seven of which will power New Glenn’s first stage. Testing that engine has taken longer than expected, affecting not only New Glenn but also United Launch Alliance’s new Vulcan Centaur rocket, which uses two BE-4 engines in its first stage. Vulcan’s first flight has slipped to early 2022, and New Glenn could see more delays as well.
Meanwhile, halfway across the country at the southern tip of Texas,
SpaceX is moving ahead at full speed with its next-generation launch system, Starship. For two years, the company has been busy building, testing, flying—and often crashing—prototypes of the vehicle, culminating in a successful flight in May 2021 when the vehicle lifted off, flew to an altitude of 10 kilometers, and landed.
SpaceX is now preparing for orbital test flights, installing the Starship vehicle on top of a giant booster called, aptly,
Super Heavy. A first test flight will see Super Heavy lift off from the Boca Chica, Texas, test site and place Starship in orbit. Starship will make less than one lap around the planet, though, reentering the atmosphere and splashing down in the Pacific about 100 kilometers from the Hawaiian island of Kauai.
When that launch will take place remains uncertain—despite some optimistic announcements. “If all goes well, Starship will be ready for its first orbital launch attempt next month, pending regulatory approval,” SpaceX CEO
Elon Musk tweeted on 22 October 2021. But Musk surely must have known at the time that regulatory approval would take much longer.
SpaceX needs a launch license from the U.S. Federal Aviation Administration to perform that orbital launch, and that license, in turn, depends on an ongoing environmental review of Starship launches from Boca Chica. The FAA hasn’t set a schedule for completing that review. But the
draft version was open for public comments through the beginning of November, and it’s likely to take the FAA months to review those comments and incorporate them into the final version of the report. That suggests that the initial orbital flight of Starship atop Super Heavy will also take place sometime in early 2022.
Starship could put NASA in a bind. The agency is funding a version of Starship to serve as a
lunar lander for the Artemis program, transporting astronauts to and from the surface of the moon as soon as 2025. So NASA clearly wants Starship development to proceed apace. But a successful Starship launch vehicle, fully reusable and able to place 100 tonnes into orbit, could also make the SLS obsolete.
Of course, on the eve of the first SLS launch, NASA isn’t going to give up on the vehicle it’s worked so long and hard to develop. “SLS and Orion were purpose-designed to do this mission,” says Pam Melroy, NASA deputy administrator. “It’s designed to take a huge amount of cargo and people to deep space. Therefore, it’s not something we’re going to walk away from.”
A recent United Nations provision has banned the use of mercury in spacecraft propellant. Although no private company has actually used mercury propellant in a launched spacecraft, the possibility was alarming enough—and the dangers extreme enough—that the ban was enacted just a few years after one U.S.-based startup began toying with the idea. Had the company gone through with its intention to sell mercury propellant thrusters to some of the companies building massive satellite constellations over the coming decade, it would have resulted in Earth’s upper atmosphere being laced with mercury.
Mercury is a neurotoxin. It’s also bio-accumulative, which means it’s absorbed by the body at a faster rate than the body can remove it. The most common way to get mercury poisoning is through eating contaminated seafood. “It’s pretty nasty,” says Michael Bender, the international coordinator of the Zero Mercury Working Group (ZMWG). “Which is why this is one of the very few instances where the governments of the world came together pretty much unanimously and ratified a treaty.”
Bender is referring to the 2013 Minamata Convention on Mercury, a U.N. treaty named for a city in Japan whose residents suffered from mercury poisoning from a nearby chemical factory for decades. Because mercury pollutants easily find their way into the oceans and the atmosphere, it’s virtually impossible for one country to prevent mercury poisoning within its borders. “Mercury—it’s an intercontinental pollutant,” Bender says. “So it required a global treaty.”
Today, the only remaining permitted uses for mercury are in fluorescent lighting and dental amalgams, and even those are being phased out. Mercury is otherwise found as a by-product of other processes, such as the burning of coal. But then a company hit on the idea to use it as a spacecraft propellant.
In 2018, an employee at Apollo Fusion approached the Public Employees for Environmental Responsibility (PEER), a nonprofit that investigates environmental misconduct in the United States. The employee—who has remained anonymous—alleged that the Mountain View, Calif.–based space startup was planning to build and sell thrusters that used mercury propellant to multiple companies building low Earth orbit (LEO) satellite constellations.
Apollo Fusion wasn’t the first to consider using mercury as a propellant. NASA originally tested it in the 1960s and 1970s with two Space Electric Propulsion Tests (SERT), one of which was sent into orbit in 1970. Although the tests demonstrated mercury’s effectiveness as a propellant, the same concerns over the element’s toxicity that have seen it banned in many other industries halted its use by the space agency as well.
“I think it just sort of fell off a lot of folks’ radars,” says Kevin Bell, the staff counsel for PEER. “And then somebody just resurrected the research on it and said, ‘Hey, other than the environmental impact, this was a pretty good idea.’ It would give you a competitive advantage in what I imagine is a pretty tight, competitive market.”
That’s presumably why Apollo Fusion was keen on using it in their thrusters. Apollo Fusion as a startup emerged more or less simultaneously with the rise of massive LEO constellations that use hundreds or thousands of satellites in orbits below 2,000 kilometers to provide continual low-latency coverage. Finding a slightly cheaper, more efficient propellant for one large geostationary satellite doesn’t move the needle much. But doing the same for thousands of satellites that need to be replaced every several years? That’s a much more noticeable discount.
Were it not for mercury’s extreme toxicity, it would actually make an extremely attractive propellant. Apollo Fusion wanted to use a type of ion thruster called a Hall-effect thruster. Ion thrusters strip electrons from the atoms that make up a liquid or gaseous propellant, and then an electric field pushes the resultant ions away from the spacecraft, generating a modest thrust in the opposite direction. The physics of rocket engines means that the performance of these engines increases with the mass of the ion that you can accelerate.
Mercury is heavier than either xenon or krypton, the most commonly used propellants, meaning more thrust per expelled ion. It’s also liquid at room temperature, making it efficient to store and use. And it’s cheap—there’s not a lot of competition with anyone looking to buy mercury.
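The "more thrust per expelled ion" claim follows from the ideal ion-acceleration relation: a singly charged ion falling through a potential V leaves at v = sqrt(2qV/m), so the momentum it carries per unit of beam current scales as sqrt(m). A back-of-the-envelope sketch, with the 300 V discharge voltage an assumed typical Hall-thruster figure rather than one from the article:

```python
import math

Q = 1.602176634e-19    # elementary charge, C
U = 1.66053906660e-27  # unified atomic mass unit, kg

# Atomic masses of candidate Hall-thruster propellants (amu)
MASSES_AMU = {"krypton": 83.798, "xenon": 131.293, "mercury": 200.592}

def exhaust_velocity(mass_amu, discharge_voltage=300.0):
    """Ideal exhaust speed of a singly charged ion accelerated
    through `discharge_voltage` volts: v = sqrt(2*q*V/m), in m/s."""
    m = mass_amu * U
    return math.sqrt(2.0 * Q * discharge_voltage / m)

def thrust_per_ampere(mass_amu, discharge_voltage=300.0):
    """Thrust per ampere of beam current, F/I = sqrt(2*m*V/q), in N/A.
    Heavier ions move slower but carry more momentum per unit charge,
    so thrust per ampere grows as sqrt(m)."""
    m = mass_amu * U
    return math.sqrt(2.0 * m * discharge_voltage / Q)

for name, amu in MASSES_AMU.items():
    print(f"{name:8s}  v = {exhaust_velocity(amu):8.0f} m/s"
          f"  F/I = {thrust_per_ampere(amu)*1e3:6.3f} mN/A")
```

At a fixed discharge voltage, mercury delivers roughly sqrt(200.6/131.3) ≈ 1.24 times the thrust per ampere of xenon, which is the advantage the article describes.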
Bender says that ZMWG, alongside PEER, caught wind of Apollo Fusion marketing its mercury-based thrusters to at least three companies deploying LEO constellations—One Web, Planet Labs, and SpaceX. Planet Labs, an Earth-imaging company, has at least 200 CubeSats in low Earth orbit. One Web and SpaceX, both wireless-communication providers, have many more. One Web plans to have nearly 650 satellites in orbit by the end of 2022. SpaceX already has nearly 1,500 active satellites aloft in its Starlink constellation, with an eye toward deploying as many as 30,000 satellites before its constellation is complete. Other constellations, like Amazon’s Kuiper constellation, are also planning to deploy thousands of satellites.
In 2019, a group of researchers in Italy and the United States estimated how much of the mercury used in spacecraft propellant might find its way back into Earth’s atmosphere. They figured that a hypothetical LEO constellation of 2,000 satellites, each carrying 100 kilograms of propellant, would emit 20 tonnes of mercury every year over the course of a 10-year life span. Three quarters of that mercury, the researchers suggested, would eventually wind up in the oceans.
That amounts to 1 percent of global mercury emissions from a constellation only a fraction of the size of the one planned by SpaceX alone. And if multiple constellations adopted the technology, they would represent a significant percentage of global mercury emissions—especially, the researchers warned, as other uses of mercury are phased out as planned in the years ahead.
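The arithmetic behind those figures can be reconstructed directly from the numbers in the article; the only assumption below is that propellant is expended evenly over the constellation's 10-year lifespan.

```python
# Figures from the 2019 estimate cited in the article.
satellites = 2000
propellant_per_sat_kg = 100.0  # kg of mercury propellant per satellite
lifespan_years = 10

total_mercury_tonnes = satellites * propellant_per_sat_kg / 1000.0  # 200 t total
emitted_per_year = total_mercury_tonnes / lifespan_years            # 20 t/yr
to_oceans_per_year = 0.75 * emitted_per_year                        # three-quarters

# The article puts 20 t/yr at ~1 percent of global mercury emissions,
# implying a global total on the order of 2,000 t/yr.
implied_global_tonnes = emitted_per_year / 0.01

print(emitted_per_year, to_oceans_per_year, implied_global_tonnes)
```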
Fortunately, it’s unlikely that any mercury propellant thrusters will even get off the ground. Prior to the fourth meeting of the Minamata Convention, Canada, the European Union, and Norway highlighted the dangers of mercury propellant, alongside ZMWG. The provision to ban mercury usage in satellites was passed on 26 March 2022.
The question now is enforcement. “Obviously, there aren’t any U.N. peacekeepers going into space to shoot down” mercury-based satellites, says Bell. But the 137 countries, including the United States, who are party to the convention have pledged to adhere to its provisions—including the propellant ban.
The United States is notable in that list because as Bender explains, it did not ratify the Minamata Convention via the U.S. Senate but instead deposited with the U.N. an instrument of acceptance. In a 7 November 2013 statement (about one month after the original Minamata Convention was adopted), the U.S. State Department said the country would be able to fulfill its obligations “under existing legislative and regulatory authority.”
Bender says the difference is “weedy” but that this appears to mean that the U.S. government has agreed to adhere to the Minamata Convention’s provisions because it already has similar laws on the books. Except there is still no existing U.S. law or regulation banning mercury propellant. For Bender, that creates some uncertainty around compliance when the provision goes into force in 2025.
Still, with a U.S. company being the first startup to toy with mercury propellant, it might be ideal to have a stronger U.S. ratification of the Minamata Convention before another company hits on the same idea. “There will always be market incentives to cut corners and do something more dangerously,” Bell says.
Update 19 April 2022: In an email, a spokesperson for Astra stated that the company's propulsion system, the Astra Spacecraft Engine, does not use mercury. The spokesperson also stated that Astra has no plans to use mercury propellant and that the company does not have anything in orbit that uses mercury.
Updated 20 April 2022 to clarify that Apollo Fusion was building thrusters that used mercury, not that they had actually used them.
When entrepreneur JoeBen Bevirt launched Joby Aviation 12 years ago, it was just one of a slew of offbeat tech projects at his Sproutwerx ranch in the Santa Cruz mountains. Today, Joby has more than 1,000 employees and it’s backed by close to US $2 billion in investments, including $400 million from Toyota Motor Corporation along with big infusions from Uber and JetBlue.
Having raked in perhaps 30 percent of all the money invested in electrically-powered vertical takeoff and landing (eVTOL) aircraft so far, Joby is the colossus in an emerging class of startups working on these radical, battery-powered commercial flyers. All told, at least 250 companies worldwide are angling to revolutionize transportation in and around cities with a new category of aviation, called urban air mobility or advanced air mobility. With Joby at the apex, the category’s top seven companies together have hauled in more than $5 billion in funding—a figure that doesn’t include private firms, whose finances haven’t been disclosed.
But with some of these companies pledging to start commercial operations in 2024, there is no clear answer to a fundamental question: Are we on the verge of a stunning revolution in urban transportation, or are we witnessing, as aviation analyst Richard Aboulafia puts it, the “mother of all aerospace bubbles”?
Even by the standards of big-money tech investment, the vision is giddily audacious. During rush hour, the skies over a large city, such as Dubai or Madrid or Los Angeles, would swarm with hundreds, and eventually thousands, of eVTOL “air taxis.” Each would seat between one and perhaps half a dozen passengers, and would, eventually, be autonomous. Hailing a ride would be no more complicated than scheduling a trip on a ride-sharing app.
“We’re going to have to get the consumer used to thinking about flying in a small aircraft without a pilot on board. I have reservations about the general public’s willingness to accept that vision.”
—Laurie Garrow, Georgia Tech
And somehow, the cost would be no greater, either. In a discussion hosted by the Washington Post last July, Bevirt declared, “Our initial price point would be comparable to the cost of a taxi or an Uber, but our target is to move quickly down to the cost of what it costs you to drive your own car. And we believe that's the critical unlock to making this transformative to the world and for people’s daily lives.” Asked to put some dollar figures on his projection, Bevirt said, “Our goal is to launch this service [in 2024] at an average price of around $3 a mile and to move that down below $1 a mile over time.” The cost of an Uber varies by city and time of day, but it’s usually between $1 and $2 per mile, not including fees.
Industry analysts tend to have more restrained expectations. With the notable exception of China, they suggest, limited commercial flights will begin with eVTOL aircraft flown by human pilots, a phase that is expected to last six to eight years at least. Costs will be similar to those of helicopter trips, which tend to be in the range of $6 to $10 per mile or more. Of the 250+ startups in the field, only three—Kittyhawk, Wisk Aero (a joint venture of Kittyhawk and Boeing), and Ehang—plan to go straight to full autonomy without a preliminary phase involving pilots, says Chris Anderson, Chief Operating Officer at Kittyhawk.
To some, the autonomy issue is the heart of whether this entire enterprise can succeed economically. “When you figure in autonomy, you go from $3 a mile to 50 cents a mile,” says Anderson, citing studies done by his company. “You can’t do that with a pilot in the seat.”
Laurie A. Garrow, a professor at the Georgia Institute of Technology, agrees. “For the large-scale vision, autonomy will be critical,” she says. “In order to get to the vision that people have, where this is a ubiquitous mode of transportation with a high market share, the only way to get that is by… eliminating the pilot.” Garrow, a civil engineer who co-directs the university’s Center for Urban and Regional Air Mobility, adds that autonomy presents challenges beyond technology: “We’re going to have to get the consumer used to thinking about flying in a small aircraft without a pilot on board. I have reservations about the general public’s willingness to accept that vision, especially early on.”
“The technical problems are, if not solved, then solvable. The main limiters are laws and regulations.”
—Chris Anderson, COO, Kittyhawk
Some analysts have much more fundamental doubts. Aboulafia, managing director at the consultancy AeroDynamic Advisory, says the figures simply don’t add up. eVTOL startups are counting on mass-manufacturing techniques to reduce the costs of these exotic aircraft, but such techniques have never been applied to producing aircraft on the scale specified in the projections. Even the anticipated lower operating costs, Aboulafia adds, won’t compensate. “If I started a car service here in Washington, D.C., using Rolls Royces, you’d think I was out of my mind, right?” he asks. “But if I put batteries in those Rolls Royces, would you think I was any less crazy?”
What everyone agrees on is that achieving even a modest amount of success for eVTOLs will require surmounting entire categories of challenges, including regulations and certification, technology development, and the operational considerations of safely flying large numbers of aircraft in a small airspace.
To some, certification will be the highest hurdle. “The technical problems are, if not solved, then solvable,” says Anderson. “The main limiters are laws and regulations.”
There are dozens of aviation certification agencies in the world. But the three most important ones for these new aircraft are the Federal Aviation Administration (FAA) in the U.S., the European Union Aviation Safety Agency (EASA), and the Civil Aviation Administration of China (CAAC). Of the three, the FAA is considered the most challenging, for several reasons. One is that, to deal with eVTOLs, the agency has chosen to adapt its existing certification rules. That gives some observers pause, because the FAA does not have a body of knowledge and experience for certifying aircraft that fly by means of battery systems and electric motors. The EASA, on the other hand, has created an entirely new set of regulations tailored for eVTOL aircraft and related technology, according to Erin Rivera, senior associate for regulatory affairs at Lilium.
To clear an aircraft for commercial flight, the FAA actually requires three certifications: one for the aircraft itself, one for its operations, and one for its manufacturing. For the aircraft, the agency designates different categories, or “parts,” for different kinds of fliers. For eVTOLs (other than multicopters), the applicable category seems to be Title 14 Code of Federal Regulations, Part 23, which covers “normal, utility, acrobatic, and commuter category airplanes.” The certification process itself is performance based, meaning that the FAA establishes performance criteria that an aircraft must meet, but does not specify how it must meet them.
Because eVTOLs are so novel, the FAA is expected to lean on industry-developed standards referred to as Means of Compliance (MOC). The proposed MOCs must be acceptable to the FAA. Through a certification scheme known as the “issue paper process,” the applicant begins by submitting what’s known as a G1 proposal, which specifies the applicable certification standards and special conditions that must be met to achieve certification. The FAA reviews and then either approves or rejects the proposal. If it’s rejected, the applicant revises the proposal to address the FAA’s concerns and tries again.
“If very high levels of automation are critical to scaling, that will be very difficult to certify. How do you certify all the algorithms?”
—Matt Metcalfe, Deloitte Consulting
Some participants are wary. When he was the chief executive of drone maker 3D Robotics, Anderson participated in an analogous experiment in which the FAA had pledged to work more closely with industry to expedite certification of drone aircraft such as multicopters. “That was five years ago, and none of the drones have been certified,” Anderson points out. “It was supposed to be agile and streamlined, and it has been anything but.”
Nobody knows how many eVTOL startups have started the certification process with the FAA, although a good guess seems to be one or two dozen. Joby is furthest along in the process, according to Mark Moore, CEO of Whisper Aero, a maker of advanced electric propulsor systems in Crossville, Tenn. The G1 certification proposals are not public, but when the FAA accepts one (presumably Joby’s), it will become available through the U.S. Federal Register for public comment. Observers expect that to happen any day now.
This certification phase of piloted aircraft is fraught with unknowns because of the novelty of the eVTOL craft themselves. But experts say a greater challenge lies ahead, when manufacturers seek to certify the vehicles for autonomous flight. “If very high levels of automation are critical to scaling, that will be very difficult to certify,” says Matt Metcalfe, a managing director in Deloitte Consulting's Future of Mobility and Aviation practice. “That’s a real challenge, because it’s so complicated. How do you certify all the algorithms?”
“It’s a matter of, how do you ensure that autonomous technology is going to be as safe as a pilot?,” says an executive at one of the startups. “How do you certify that it’s always going to be able to do what it says? With true autonomous technology, the system itself can make an undetermined number of decisions, within its programming. And the way the current certification regulations work, is that they want to be able to know the inputs and outcome of every decision that the aircraft system makes. With a fully autonomous system, you can’t do that.”
Perhaps surprisingly, most experts contacted for this story agreed with Kittyhawk's Anderson that the technical challenges of building the aircraft themselves are solvable. Even autonomy—certification challenges aside—is within reach, most say. The Chinese company EHang has already offered fully autonomous trial flights of its EH216 multicopter to tourists in the northeastern port city of Yantai and is now building a flight hub in its home city of Guangzhou. Wisk, Kittyhawk, Joby, and other companies have collectively conducted thousands of flights that were at least partially autonomous, without a pilot on board.
Experts foresee eVTOLs largely replacing helicopters for niche applications. There’s less agreement on whether middle-class people will ever be routinely whisked around cities for pennies a mile.
A more imposing challenge, and one likely to determine whether the grand vision of urban air mobility comes to pass, is whether municipal and aviation authorities can solve the challenges of integrating large numbers of eVTOLs into the airspace over major cities. Some of these challenges are, like the aircraft themselves, totally new. For example, most viable scenarios require the construction of “vertiports” in and around cities. These would be like mini airports where the eVTOLs would take off and land, be recharged, and take on and discharge passengers. Right now, it’s not clear who would pay for these. “Manufacturers probably won’t have the money to do it,” says Metcalfe at Deloitte.
As Georgia Tech's Garrow sees it, “vertiports may be one of the greatest constraints on scalability of UAM.” Vertiports, she explains, will be the “pinch points,” because at urban facilities, space will likely be limited to accommodating several aircraft at most. And yet at such a facility, room will be needed during rush hours to accommodate dozens of aircraft needing to land, be charged, take on passengers, and take off. “So the scalability of operations at the vertiports, and the amount of land space required to do that, are going to be two major challenges.”
Despite all the challenges, Garrow, Metcalfe, and others are cautiously optimistic that air mobility will eventually become part of the urban fabric in many cities. They foresee an initial period in which the eVTOLs largely replace helicopters in a few niche applications, such as linking downtown transportation depots to airports for those who can afford it, taking tourists on sightseeing tours, and transporting organs and high-risk patients among hospitals. There’s less agreement on whether middle-class people will ever be routinely whisked around cities for pennies a mile. Even some advocates think that’s more than 10 years away, if it happens at all.
If it does happen, a few studies have predicted that travel times and greenhouse-gas and pollutant emissions could all be reduced. A 2020 study published by the U.S. National Academy of Sciences found a substantial reduction in overall energy use for transportation under “optimistic” scenarios for urban air mobility. And a 2021 study at the University of California, Berkeley, found that in the San Francisco Bay area, overall travel times could be reduced with as few as 10 vertiports. The benefits went up as the number of vertiports increased and as the transfer times at the vertiports went down. But the study also warned that “vertiport scheduling and capacity may become bottlenecks that limit the value of UAM.”
Metcalfe notes that ubiquitous modern conveniences like online shopping have already unleashed tech-based revolutions on a par with the grand vision for UAM. “We tend to look at this through the lens of today,” he says. “And that may be the wrong way to look at it. Ten years ago we never would have thought we’d be getting two or three packages a day. Similarly, the way we move people and goods in the future could be very, very different from the way we do it today.”
This article appears in the March 2022 print issue as “What’s Behind the Air-Taxi Craze.”
First on the list is copy.ai, an AI-based copywriting tool. Give it a short description of the topic you want content on, and it generates text you can post on your blog or video. Copy.ai can help you write Instagram captions, blog ideas, product descriptions, Facebook posts, startup ideas, viral ideas, and a lot more. Just make an account on the website, select a tool, fill in the necessary description, and the AI will generate content for whatever you ask.
For tutorials, go to their official YouTube channel. An awesome tool that is going to be really handy in the future.
Hotpot.ai offers a collection of AI tools for designers, as well as for anyone else. It has an “AI Picture Restorer” that removes scratches and restores your old photos so they look brand new.
The AI Picture Colorizer turns your black-and-white photos into color, and there is also a background remover, a picture enlarger, and a lot more for designers. Check it out and explore all the tools.
Deep Nostalgia became very popular on the internet when people started making reaction videos of their parents reacting to animated pictures of their grandparents. Deep Nostalgia is a very cool app that will animate any photo of a person.
What makes it really cool is that you can upload an old photo of your family and see them animated and living, which is pretty cool and creepy at the same time if they have already passed away. A really amazing service from MyHeritage; I created a lot of cool animations with my old photos as well as with photos of my grandparents.
Having a nice-looking profile picture is really important if you want that professional feel on your socials. Whether on LinkedIn or Twitter, a distinct and catchy profile picture can make all the difference. That's where PFPMaker comes in: it's a free online tool for creating amazing, professional profile pictures that fit you. It generates a batch of profile pictures, and you can also make small changes to any of the generated pictures if you want.
Speaking of brands, getting a good logo for your brand is one of the most frustrating things ever, so brandmark.io makes it super easy: it will create a logo for your brand within two clicks. You go to the website, type in your brand name and slogan (if you have one), give it brand keywords that relate to your brand, then pick a color style, and done: the AI will generate amazing logos for you.
You can also make minor edits to the suggested logos to better fit your needs. To download the PNG, though, you need to pay a hefty price, but if you are looking for logo ideas, this is a great place to start.
Some of the previous websites had picture-enlarger tools, but deep-image.ai is a dedicated image enlarger, supporting up to 4x enlargement for free. The UI is pretty good, and the tool is fast, with amazing results.
Bigjpg does the same as deep-image.ai, but this service offers a few more options: if your photo is an artwork, it scales the image differently than a normal photo; it supports up to 4x enlargement for free; and you can also set noise-reduction options. A very good tool.
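To see why a dedicated AI enlarger is worth using, compare it with the naive alternative: simple nearest-neighbor upscaling, sketched below in plain Python (no AI involved), just copies each pixel into a larger block, while tools like deep-image.ai and Bigjpg infer plausible new detail instead.

```python
def upscale_nearest(pixels, factor=4):
    """Naive nearest-neighbor enlargement of a 2-D grid of pixel
    values: each source pixel becomes a factor x factor block.
    The result is bigger but no sharper, which is exactly the
    blockiness AI upscalers are designed to avoid."""
    out = []
    for row in pixels:
        # Stretch the row horizontally, then repeat it vertically.
        wide = [p for p in row for _ in range(factor)]
        out.extend([wide[:] for _ in range(factor)])
    return out

# A tiny 2x2 checkerboard becomes an 8x8 checkerboard of 4x4 blocks.
tiny = [[0, 255],
        [255, 0]]
big = upscale_nearest(tiny, factor=4)
```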
Lumen5 is an online marketing-video maker that makes it really easy to create branding or informational videos within a couple of clicks. It has really great templates and various aspect ratios for different social media platforms.
You can also edit each element of the video if you don't like the preset, and the best part is, they have a ton, I mean a ton, of free stock photos and videos. You can also upload your own videos or any other type of media. Definitely a good tool if you don't know how to work with complex software like After Effects but want to create a slick video for your brand.
If you are struggling to find a good name for your brand or YouTube channel, give namelix a try. It's an AI-based name generator that suggests names based on the keywords you give it, and it can generate a logo for your brand as well. Pretty cool, and an amazing tool. So that's it: those are my favourite free AI-based tools that you can use right now. Which one do you like the most? Let me know in the comments below.
Match ID: 170 Score: 4.29 source: www.crunchhype.com age: 152 days qualifiers: 3.57 google, 0.71 startup
In the latest push for nuclear power in space, the Pentagon’s Defense Innovation Unit (DIU) awarded a contract in May to Seattle-based Ultra Safe Nuclear to advance its nuclear power and propulsion concepts. The company is making a soccer ball–size radioisotope battery it calls EmberCore. The DIU’s goal is to launch the technology into space for demonstration in 2027.
Ultra Safe Nuclear’s system is intended to be lightweight, scalable, and usable as both a propulsion source and a power source. It will be specifically designed to give small-to-medium-size military spacecraft the ability to maneuver nimbly in the space between Earth orbit and the moon. The DIU effort is part of the U.S. military’s recently announced plans to develop a surveillance network in cislunar space.
Besides speedy space maneuvers, the DIU wants to power sensors and communication systems without having to worry about solar panels pointing in the right direction or batteries having enough charge to work at night, says Adam Schilffarth, director of strategy at Ultra Safe Nuclear. “Right now, if you are trying to take radar imagery in Ukraine through cloudy skies,” he says, “current platforms can only take a very short image because they draw so much power.”
Radioisotope power sources are well suited for small, uncrewed spacecraft, adds Christopher Morrison, who is leading EmberCore’s development. Such sources rely on the radioactive decay of an element that produces energy, as opposed to nuclear fission, which involves splitting atomic nuclei in a controlled chain reaction to release energy. Heat produced by radioactive decay is converted into electricity using thermoelectric devices.
Radioisotopes have provided heat and electricity for spacecraft since 1961. The Curiosity and Perseverance rovers on Mars, and deep-space missions including Cassini, New Horizons, and Voyager all use radioisotope batteries that rely on the decay of plutonium-238, which is nonfissile—unlike plutonium-239, which is used in weapons and power reactors.
For EmberCore, Ultra Safe Nuclear has instead turned to medical isotopes such as cobalt-60 that are easier and cheaper to produce. The materials start out inert, and have to be charged with neutrons to become radioactive. The company encapsulates the material in a proprietary ceramic for safety.
Cobalt-60 has a half-life of five years (compared to plutonium-238’s 90 years), which is enough for the cislunar missions that the DOD and NASA are looking at, Morrison says. He says that EmberCore should be able to provide 10 times as much power as a plutonium-238 system, providing over 1 million kilowatt-hours of energy using just a few pounds of fuel. “This is a technology that is in many ways commercially viable and potentially more scalable than plutonium-238,” he says.
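To see what a five-year half-life means for a power source over a mission, exponential decay is all that's needed. A minimal sketch, where the 1,000-watt starting point is an assumed, illustrative figure and not an EmberCore specification:

```python
# Sketch: how radioisotope thermal power falls off with half-life.
# The half-lives match the article; the initial power is an assumption.

def decay_power(p0_watts: float, half_life_years: float, t_years: float) -> float:
    """Thermal power remaining after t_years of radioactive decay."""
    return p0_watts * 0.5 ** (t_years / half_life_years)

# Compare cobalt-60 (~5-year half-life) with plutonium-238 (~90 years)
# for a hypothetical source starting at 1,000 W thermal.
for isotope, half_life in [("Co-60", 5.0), ("Pu-238", 90.0)]:
    remaining = decay_power(1000.0, half_life, t_years=5.0)
    print(f"{isotope}: {remaining:.0f} W thermal after 5 years")
```

After five years the cobalt source is down to half power while the plutonium source has barely budged, which is why the shorter half-life only makes sense for missions on the timescale the DOD and NASA are considering.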
One downside of the medical isotopes is that they can produce high-energy X-rays in addition to heat. So Ultra Safe Nuclear wraps the fuel with a radiation-absorbing metal shield. But in the future, the EmberCore system could be designed for scientists to use the X-rays for experiments. “They buy this heater and get an X-ray source for free,” says Schilffarth. “We’ve talked with scientists who right now have to haul pieces of lunar or Martian regolith up to their sensor because the X-ray source is so weak. Now we’re talking about a spotlight that could shine down to do science from a distance.”
Ultra Safe Nuclear’s contract is one of two awarded by the DIU—which aims to speed up the deployment of commercial technology through military use—to develop nuclear power and propulsion for spacecraft. The other contract was awarded to Avalanche Energy, which is making a lunchbox-size fusion device it calls an Orbitron. The device will use electrostatic fields to trap high-speed ions in slowly changing orbits around a negatively charged cathode. Collisions between the ions can result in fusion reactions that produce energetic particles.
Both companies will use nuclear energy to power high-efficiency electric propulsion systems. Electric propulsion technologies such as ion thrusters, which use electromagnetic fields to accelerate ions and generate thrust, are more efficient than chemical rockets, which burn fuel. Solar panels typically power the ion thrusters that satellites use today to change their position and orientation. Schilffarth says that the higher power from EmberCore should enable a velocity change of 10 kilometers per second in orbit, more than today's electric propulsion systems can deliver.
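For a rough sense of where a figure like 10 km/s comes from, the Tsiolkovsky rocket equation relates velocity change to specific impulse and propellant mass. Every number below is a generic assumption for electric propulsion, not a figure from Ultra Safe Nuclear:

```python
import math

# Sketch: the ideal rocket equation, delta-v = Isp * g0 * ln(m0 / mf).
# Isp and the masses here are illustrative assumptions.

def delta_v(isp_seconds: float, m0: float, mf: float) -> float:
    """Ideal velocity change (m/s) for initial mass m0 and final mass mf."""
    g0 = 9.80665  # standard gravity, m/s^2
    return isp_seconds * g0 * math.log(m0 / mf)

# A 500 kg spacecraft expending 150 kg of propellant through an ion
# thruster with Isp around 2,500 s:
dv = delta_v(2500, m0=500.0, mf=350.0)
print(f"delta-v = {dv / 1000:.1f} km/s")
```

With those assumed inputs the result lands near 9 km/s, the same order of magnitude as the figure quoted above; the point is that high-Isp thrusters reach such speeds with a modest propellant fraction, provided something can feed them enough power.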
Ultra Safe Nuclear is also one of three companies developing nuclear fission thermal propulsion systems for NASA and the Department of Energy. Meanwhile, the Defense Advanced Research Projects Agency (DARPA) is seeking companies to develop a fission-based nuclear thermal rocket engine, with demonstrations expected in 2026.
Match ID: 171 Score: 4.29 source: spectrum.ieee.org age: 16 days qualifiers: 1.43 seattle, 1.43 development, 1.43 apple
Kamisetty Ramamohan “K.R.” Rao died on 15 January 2021 at the age of 89. He co-invented the discrete cosine transform (DCT) technique, which is widely used in digital signal processing and data compression.
This tribute is an excerpted version of an article dedicated to his memory written by three of his colleagues: IEEE Member Jae Jeong Hwang, Zoran M. Milicevic, and IEEE Life Senior Member Zoran S. Bojković. Hwang is a professor of IT convergence and communication engineering at
Kunsan National University, in Korea; Milicevic is an assistant professor of telecommunications and IT at the University of Belgrade, in Serbia; and Bojković is a professor of electrical engineering, also at the University of Belgrade.
Education and Early Career
Rao received a bachelor’s degree in electrical engineering in 1952 from the
College of Engineering, Guindy, in Chennai, India. He then moved to the United States and earned two master’s degrees from the University of Florida, in Gainesville: one in EE in 1959 and the other in nuclear engineering in 1960. He received a Ph.D. in 1966 in EE from the University of New Mexico, in Albuquerque.
After graduating, he joined the University of Texas at Arlington as a research professor. He was promoted to associate professor three years later and became a full professor in 1973.
Similar to the discrete Fourier transform, the DCT converts a signal or image from the spatial domain (a matrix of pixels) to the frequency domain, in which the image is represented as a weighted sum of cosine functions oscillating at different frequencies.
DCT technology reduces the amount of data required to display, store, and transmit images by identifying parts of the image that contain significant amounts of energy—the ones that are most important to retaining image quality.
Originally proposed as an image-compression technique, DCT is now an industry standard in image and video coding, commonly used to store and transmit JPEG images as well as MPEG video files. DCT also has applications in digital video and television, speech coding, satellite imaging, signal processing, and telecommunications.
Rao went on to develop four different types of the technology: DCT-I, DCT-II (used in image and video compression including high-definition television), DCT-III, and DCT-IV (which has applications in audio coding algorithms).
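The DCT-II mentioned above can be stated compactly in code. This is the plain O(N²) textbook formula, useful for seeing what the transform does; real codecs use fast factorized algorithms instead:

```python
import math

# Sketch of the unnormalized DCT-II, the variant used in JPEG and most
# video codecs: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k).

def dct2(x):
    """Direct O(N^2) DCT-II of a list of samples."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

# A constant signal collapses to a single nonzero coefficient (k = 0),
# which is why the DCT concentrates the energy of smooth image regions
# into so few numbers.
coeffs = dct2([1.0] * 8)
print([round(c, 6) for c in coeffs])
```

That energy concentration is exactly the property the article describes: the coder keeps the few high-energy coefficients and discards the rest with little visible loss.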
“HDTV would not have been possible without the research accomplished by K.R. Rao and his students and collaborators,” said
Venkat Devarajan, a former Rao student who is now an EE professor at UT Arlington.
Rao co-authored 22 books, some of which have been translated from English to Chinese, Japanese, Korean, Russian, and Spanish. He also published
papers on Walsh functions and a variety of other topics related to image and signal processing.
He was a visiting professor at universities in Australia, India, Japan, Korea, Singapore, and Thailand. He also conducted workshops and tutorials on video and audio coding and standards around the world.
“Everyone speaks about him in the highest regard—not just as a scholar but as a mentor, a friend, a person who helped them, and a person who encouraged them,” said Vistasp Karbhari, an engineering professor and former president of UT Arlington. “I think that’s his legacy.”
Match ID: 172 Score: 3.57 source: spectrum.ieee.org age: 9 days qualifiers: 3.57 google
Dubbed the era of bio revolution, the 21st century has seen a rapid ascent in bioscience breakthroughs. It has accelerated life science innovation and opened doors to global transformation across many industries, such as:
Agriculture—CRISPR-edited crops
Consumer Goods—DNA-based cosmetics and plant-based proteins
Biological applications are expected to create up to four trillion dollars in annual economic impact by 2040. However, realizing this opportunity depends on equipping organizations with the right data intelligence to drive innovation from discovery to commercialization.
Join us for the webinar, Bio revolution: data driven growth across industries, as our panel of experts discuss how the convergence of biological sciences across industries has impacted the pace of innovation, collaboration, regulatory concerns, sustainability and the role that data plays in helping them navigate it all.
Vicky Zhou, Ph.D., Partner & Director, BCG Digital Ventures
Vasheharan Kanesarajah, Head of Strategy, Clarivate
Anna Levchuk, Vice President of Product, Clarivate
What opportunities and risks has the bio revolution created within the consumer goods, manufacturing, and technology industries
How data and analytics can help navigate the convergence of biological sciences
What regulatory challenges arise as biological innovations move from the lab to commercial adoption
Ways innovation teams manage knowledge gaps within organizations to embrace bioscience applications
There are lots of questions floating around about how affiliate marketing works and what to do and what not to do when setting up a business, with plenty of uncertainty surrounding both its personal and business aspects. In this post, we will answer the most frequently asked questions about affiliate marketing.
1. What is affiliate marketing?
Affiliate marketing is a way to make money by promoting the products and services of other people and companies. You don't need to create your own product or service; you just promote existing ones. That's why it's so easy to get started with affiliate marketing. You can even get started with no budget at all!
2. What is an affiliate program?
An affiliate program is a package of information you create for your product, which is then made available to potential publishers. The program will typically include details about the product and its retail value, commission levels, and promotional materials. Many affiliate programs are managed via an affiliate network like ShareASale, which acts as a platform to connect publishers and advertisers, but it is also possible to offer your program directly.
3. What is an affiliate network and how do affiliate networks make money?
Affiliate networks connect publishers to advertisers. Affiliate networks make money by charging fees to the merchants who advertise with them; these merchants are known as advertisers. The percentage of each sale that the advertiser pays is negotiated between the merchant and the affiliate network.
4. What's the difference between affiliate marketing and dropshipping?
Dropshipping is a method of selling that allows you to run an online store without having to stock products. You advertise the products as if you owned them, but when someone makes an order, you create a duplicate order with the distributor at a reduced price, and the distributor takes care of postage and packaging on your behalf. Affiliate marketing, by contrast, is based purely on referrals: you hold no inventory, and when a customer buys through your affiliate link, the money never passes through your hands.
5. Can affiliate marketing and performance marketing be considered the same thing?
Performance marketing is a method of marketing that pays for results, such as when a sale is made or an ad is clicked. This can include methods like PPC (pay-per-click) or display advertising. Affiliate marketing is one form of performance marketing, in which commissions are paid to affiliates when a customer clicks their affiliate link and makes a purchase or completes an action.
6. Is it possible to promote affiliate offers on mobile devices?
Smartphones are essentially miniature computers, so publishers can display the same websites and offers that are available on a PC. But mobiles also offer specific tools not available on computers, and these can be used to good effect. Publishers can optimize their ads so they are easy for mobile users to access, and they can make good use of text and instant messaging to promote their offers. With mobile predicted to make up 80% of traffic in the future, publishers who do not promote on mobile devices are missing out on a big opportunity.
7. Where do I find qualified publishers?
The best way to find affiliate publishers is on reputable networks like ShareASale, CJ (Commission Junction), Awin, and Impact Radius. These networks have a strict application process and compliance checks, which helps ensure that affiliates are trustworthy.
8. What is an affiliate disclosure statement?
An affiliate disclosure statement discloses to the reader that there may be affiliate links on a website, for which a commission may be paid to the publisher if visitors follow these links and make purchases.
9. Does social media activity play a significant role in affiliate marketing?
Publishers promote their programs through a variety of means, including blogs, websites, email marketing, and pay-per-click ads. Social media has a huge interactive audience, making this platform a good source of potential traffic.
10. What is a super affiliate?
A super affiliate is an affiliate partner who consistently drives a large majority of sales from any program they promote, compared to other affiliate partners involved in that program. Super affiliates can earn a lot: Pat Flynn, for example, earned more than $50,000 from affiliate marketing in 2013.
11. How do we track publisher sales activity?
Publishers can be identified by their publisher ID, which is used in tracking cookies to determine which publishers generate sales. The activity is then viewed within a network's dashboard.
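Mechanically, a publisher ID is usually just a parameter carried in the tracking link. A hypothetical sketch, where the parameter names ("pub_id", "offer") are made up for illustration and do not reflect any specific network's format:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Sketch: a publisher ID rides along in the affiliate link, so the network
# knows which publisher to credit when the click leads to a sale.

def make_affiliate_link(base_url: str, pub_id: str, offer: str) -> str:
    """Build a tracking link carrying the publisher ID as a query parameter."""
    return f"{base_url}?{urlencode({'pub_id': pub_id, 'offer': offer})}"

def publisher_from_link(link: str) -> str:
    """Recover the publisher ID so the sale shows up in the right dashboard."""
    return parse_qs(urlparse(link).query)["pub_id"][0]

link = make_affiliate_link("https://example.com/track", "pub-4821", "shoes-10off")
print(publisher_from_link(link))  # pub-4821
```

Real networks layer cookies and redirects on top of this, but the core idea is the same: the ID in the link is what ties a sale back to a publisher.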
12. Could we set up an affiliate program in multiple countries?
Because the Internet is so widespread, affiliate programs can be promoted in any country. Affiliate strategies that are set internationally need to be tailored to the language of the targeted country.
13. How can affiliate marketing help my business?
Affiliate marketing can help you grow your business in the following ways:
It allows you to save time and money on marketing, which frees you up to focus on other aspects of your business.
You get access to friendly marketers who are eager to help you succeed.
It also helps you to promote your products by sharing links and banners with a new audience.
It offers high ROI(Return on investment) and is cost-effective.
14. How do I find quality publishers?
One of the best ways to work with qualified affiliates is to hire an affiliate marketing agency that works with all the networks. Affiliates are carefully selected and go through a rigorous application process to be included in the network.
15. How Can we Promote Affiliate Links?
Affiliate marketing is generally associated with websites, but there are other ways to promote your affiliate links, including:
A website or blog
Through email marketing and newsletters
Social media, like Facebook, Instagram, or Twitter.
Leave a comment on blogs or forums.
Write an e-book or other digital product.
16. Do you have to pay to sign up for an affiliate program?
To build your affiliate marketing business, you don't have to invest money in the beginning. You can sign up for free with any affiliate network and start promoting their brands right away.
17. What is a commission rate?
Commission rates are typically based on a percentage of the total sale and in some cases can also be a flat fee for each transaction. The rates are set by the merchant.
Who manages your affiliate program?
Some merchants run their affiliate programs internally, while others choose to contract out management to a network or an external agency.
18. What is a cookie?
Cookies are small pieces of data that work with web browsers to store information such as user preferences, login or registration data, and shopping cart contents. When someone clicks on your affiliate link, a cookie is placed on the user's computer or mobile device. That cookie is used to remember the link or ad that the visitor clicked on. Even if the user leaves your site and comes back a week later to make a purchase, you will still get credit for the sale and receive a commission, provided the purchase happens within the site's cookie duration.
19. How long do cookies last?
The merchant determines the duration of a cookie, also known as its “cookie life.” The most common length for an affiliate program is 30 days. If someone clicks on your affiliate link, you’ll be paid a commission if they purchase within 30 days of the click.
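The 30-day window comes down to simple date arithmetic on the click's timestamp. A minimal sketch; the function name and constant are illustrative, not any network's actual API:

```python
from datetime import datetime, timedelta

# Sketch: a sale earns a commission only if the purchase falls inside
# the cookie life set by the merchant (30 days is the most common).

COOKIE_LIFE = timedelta(days=30)

def earns_commission(click_time: datetime, purchase_time: datetime) -> bool:
    """True if the purchase happens within the cookie window after the click."""
    return timedelta(0) <= purchase_time - click_time <= COOKIE_LIFE

click = datetime(2022, 6, 1)
print(earns_commission(click, datetime(2022, 6, 20)))  # 19 days later: True
print(earns_commission(click, datetime(2022, 7, 15)))  # 44 days later: False
```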
Most new affiliates are eager to begin their affiliate marketing business. Unfortunately, there is a lot of bad information out there that can lead inexperienced affiliates astray. Hopefully, these answers have provided clarity on how affiliate marketing works and the pitfalls you can avoid. Most importantly, keep in mind that success in affiliate marketing takes time. Don't be discouraged if you're not immediately making sales or earning money; it takes most new affiliates months to build a full-time income.
Match ID: 174 Score: 3.57 source: www.crunchhype.com age: 24 days qualifiers: 3.57 google
If you want to pay online, you need to register an account and provide credit card information. If you don't have a credit card, you can pay by bank transfer. With the rise of cryptocurrencies, these methods may become outdated.
Imagine a world in which you can do transactions and many other things without having to give your personal information. A world in which you don’t need to rely on banks or governments anymore. Sounds amazing, right? That’s exactly what blockchain technology allows us to do.
Like your computer’s hard drive, a blockchain is a technology for storing data, but it stores that data in digital blocks, which are connected together like links in a chain.
Blockchain technology was originally invented in 1991 by two researchers, Stuart Haber and W. Scott Stornetta. They first proposed the system as a way to ensure that document timestamps could not be tampered with.
A few years later, in 1998, software developer Nick Szabo proposed using a similar kind of technology to secure a digital payments system he called “Bit Gold.” However, the idea was not widely adopted until 2008, when Satoshi Nakamoto used a similar design to create Bitcoin and, with it, the first blockchain.
So, What is Blockchain?
A blockchain is a distributed database shared between the nodes of a computer network. It saves information in digital format. Many people first heard of blockchain technology when they started to look up information about bitcoin.
Blockchain is used in cryptocurrency systems to ensure secure, decentralized records of transactions.
Blockchain allowed people to guarantee the fidelity and security of a record of data without the need for a third party to ensure accuracy.
To understand how a blockchain works, Consider these basic steps:
Blockchain collects information in “blocks”.
A block has a storage capacity, and once it's used up, it can be closed and linked to the previously filled block.
Blocks form chains, which are called “Blockchains.”
New information is added to the most recent block until its capacity is full, and then the process repeats itself with a new block.
Each block in the chain has an exact timestamp and can't be changed.
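The block-linking steps above can be sketched as a toy hash chain in a few lines. This illustrates the chaining idea only; it has none of the networking or consensus machinery of a real blockchain:

```python
import hashlib, json, time

# Toy sketch: each block stores a timestamp, some data, and the hash of
# the previous block, so changing any earlier block breaks every hash
# that comes after it.

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))

# Tampering with block 1 means its recomputed hash no longer matches the
# prev_hash stored in block 2.
chain[1]["data"] = "Alice pays Bob 500"
recomputed = hashlib.sha256(json.dumps(
    {k: chain[1][k] for k in ("timestamp", "data", "prev_hash")},
    sort_keys=True).encode()).hexdigest()
print(recomputed == chain[2]["prev_hash"])  # False: the chain detects the edit
```

This tamper-evidence is the property the timestamp point above is describing: the chain of hashes makes history effectively unchangeable.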
Let’s get to know more about the blockchain.
How does blockchain work?
Blockchain records digital information and distributes it across the network without changing it. The information is distributed among many users and stored in an immutable, permanent ledger that can't be changed or destroyed. That's why blockchain is also called "Distributed Ledger Technology" or DLT.
Here’s how it works:
Someone requests a transaction.
The transaction is transmitted throughout the network.
A network of computers confirms the transaction.
Once confirmed, the transaction is added to a block.
The blocks are linked together to create a history.
And that’s the beauty of it! The process may seem complicated, but it’s done in minutes with modern technology. And because technology is advancing rapidly, I expect things to move even more quickly than ever.
A new transaction is added to the system. It is then relayed to a network of computers located around the world. The computers then solve equations to ensure the authenticity of the transaction.
Once a transaction is confirmed, it is placed in a block after the confirmation. All of the blocks are chained together to create a permanent history of every transaction.
How are Blockchains used?
Even though blockchain is integral to cryptocurrency, it has other applications: it can be used to store a reliable record of any kind of transaction. Many people conflate blockchain with cryptocurrencies like Bitcoin and Ethereum, but the technology is more general.
Blockchain is already being adopted by some big-name companies, such as Walmart, AIG, Siemens, Pfizer, and Unilever. For example, IBM's Food Trust uses blockchain to track food on its journey to its final destination.
Although some of you may consider this practice excessive, food suppliers and manufacturers adhere to the policy of tracing their products because bacteria such as E. coli and Salmonella have been found in packaged foods. In addition, there have been isolated cases where dangerous allergens such as peanuts have accidentally been introduced into certain products.
Tracing and identifying the sources of an outbreak is a challenging task that can take months or years. Thanks to the Blockchain, however, companies now know exactly where their food has been—so they can trace its location and prevent future outbreaks.
Blockchain technology allows systems to react much faster in the event of a hazard. It also has many other uses in the modern world.
What is Blockchain Decentralization?
Blockchain technology is safe, even if it’s public. People can access the technology using an internet connection.
Have you ever been in a situation where you had all your data stored at one place and that one secure place got compromised? Wouldn't it be great if there was a way to prevent your data from leaking out even when the security of your storage systems is compromised?
Blockchain technology provides a way of avoiding this situation by using multiple computers at different locations to store information about transactions. If one computer experiences problems with a transaction, it will not affect the other nodes.
Instead, other nodes will use the correct information to cross-reference your incorrect node. This is called “Decentralization,” meaning all the information is stored in multiple places.
Blockchain guarantees your data's authenticity—not just its accuracy, but also its irreversibility. It can also be used to store data that are difficult to register, like legal contracts, state identifications, or a company's product inventory.
Pros and Cons of Blockchain
Blockchain has advantages and disadvantages.
Pros:
Accuracy is increased because there is no human involvement in the verification process.
Decentralization makes information harder to tamper with.
Safe, private, and easy transactions
Provides a banking alternative and safe storage of personal information
Cons:
Data storage has limits.
Regulations vary from place to place and are always changing.
There is a risk of it being used for illicit activities.
Frequently Asked Questions About Blockchain
I’ll answer the most frequently asked questions about blockchain in this section.
Is Blockchain a cryptocurrency?
Blockchain is not a cryptocurrency but a technology that makes cryptocurrencies possible. It's a digital ledger that records every transaction seamlessly.
Is it possible for Blockchain to be hacked?
Yes, a blockchain can theoretically be hacked, but doing so is extremely difficult. A network of users constantly reviews it, which makes hacking the blockchain hard in practice.
What is the most prominent blockchain company?
Coinbase Global is currently the biggest blockchain company in the world. The company runs a commendable infrastructure, services, and technology for the digital currency economy.
Who owns Blockchain?
Blockchain is a decentralized technology. It’s a chain of distributed ledgers connected with nodes, and each node can be any electronic device. Thus, no one owns the blockchain.
What is the difference between Bitcoin and Blockchain technology?
Bitcoin is a cryptocurrency powered by blockchain technology, while blockchain is the distributed ledger on which cryptocurrencies run.
What is the difference between Blockchain and a Database?
Generally, a database is a collection of data stored and organized using a database management system; the people who have access to the database can view or edit the information, and databases are typically implemented on a client-server architecture. A blockchain, by contrast, is a growing list of records, called blocks, stored across a distributed system. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction information. The design of the blockchain prevents modification of data, allowing decentralized control and eliminating the risk of tampering by other parties.
Blockchain has a wide spectrum of applications and, over the next 5-10 years, we will likely see it being integrated into all sorts of industries. From finance to healthcare, blockchain could revolutionize the way we store and share data. Although there is some hesitation to adopt blockchain systems right now, that won't be the case in 2022-2023 (and even less so in 2026). Once people become more comfortable with the technology and understand how it can work for them, owners, CEOs, and entrepreneurs alike will be quick to leverage blockchain technology for their own gain. Hope you liked this article; if you have any questions, let me know in the comments section.
Match ID: 175 Score: 3.57 source: www.crunchhype.com age: 68 days qualifiers: 3.57 google
Go outside on a clear night, and if you’re very lucky you will see the sky falling. NASA estimates that 50,000 meteorites have been found on Earth.
The shooting stars or fireballs they form as they enter the atmosphere can be beautiful, but they’re hard to track. Of those 50,000, astronomers have been able to plot the past orbits of only about 40.
“It was a semi-surprise,” says Seamus Anderson, an American who came to Curtin University in 2018 to do his Ph.D. work on technology for meteorite searches. “We weren’t expecting to have that much success the first time.”
Curtin’s Space Science and Technology Center, in the city of Perth, runs the Desert Fireball Network, a system of 50 automated cameras that monitor Australia’s night skies for incoming meteors. One night last year, two of the cameras tracked a streak in the sky, and the system calculated that a small rock had probably crashed in the desert scrub of Western Australia, in a region known as the Nullarbor. The observations weren’t ideal—they estimated that the meteorite weighed between 150 and 700 grams and had come down in an area of 5 square kilometers—but Anderson and two colleagues decided to make a field trip. In December, they set out from Perth on a drive of more than 1,000 km looking for a needle in a haystack: one blackened piece of rock on the desert floor, 50 km from the nearest paved road.
In the past, the trip would have been all but pointless. Meteorite hunters usually search the ground on foot, walking back and forth in a grid pattern and hoping they hit pay dirt. Eighty percent of the time, they fail.
“It’s been shown that people are just terrible at these kinds of repetitive tasks,” says Anderson. “A major problem is humans just not paying attention.”
That’s where technology came in. They used off-the-shelf hardware—a quadcopter drone with a 44-megapixel camera and a desktop computer with a good video card. The unusual part was the convolutional neural network they ran on it—machine-learning software not often carried by campers in the outback.
“The holy grail of meteorite hunting right now is a drone that can grid a geographic area, look at the ground, and find meteorites with AI,” says Mike Hankey of the American Meteor Society.
Seamus Anderson [right] poses with his two colleagues, both pointing at the meteorite they just found. The photo was taken with the drone they had used to locate the specimen. Seamus Anderson/Curtin University
A machine-learning system needs training—data about the world from which it can extrapolate—so the researchers fed it drone images of the Nullarbor terrain. Some of them included meteorite samples borrowed from a local museum and planted on the ground. Those images were given a score of 1—a definite meteorite, even if each appeared only as a black dot. Other images showing random terrain nearby were scored as 0—no meteorite here. Through repetition, the machine and the researchers learned to deal with false positives: bottles, cans, desert plant roots, and occasional kangaroo bones.
“It’s like training your kid to figure out what a dog looks like,” says Anderson now. “You could show lots of images of nothing but black Labs—and then, when it sees a picture of a German Shepherd, it’s maybe going to freak out and not know exactly what it’s supposed to do. So you have to give it many opportunities to know what a meteorite can look like in that background.”
Top: The incoming meteor and where it landed in Western Australia. Bottom left: The likely orbit of the meteoroid before it hit the Earth. Bottom right: The section of desert scientists searched.
Seamus Anderson/Curtin University
They began surveying: 43 drone flights over three days, going back and forth at an altitude of about 20 meters, recording 57,255 images. Back at camp, they began to process their images. From the first four flights alone, the algorithm gave 59,384 objects a score of at least 0.7 on that scale of 0 to 1—a lot of possible specimens. The researchers were quickly able to narrow them down to 259 and then 38, which they reinspected with a second, smaller drone. Soon they were down to four, and set out on foot, guided by GPS, to find them.
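The winnowing described above, from tens of thousands of detections down to a handful, is at its core a threshold-and-rank pass over the classifier's scores. A hypothetical sketch (the tuple format and coordinates are invented for illustration; this is not the Curtin team's code):

```python
# Sketch: keep only detections whose CNN score clears a threshold, then
# rank them so the best candidates are reinspected first. The 0.7 cutoff
# mirrors the article; everything else is illustrative.

def filter_candidates(detections, threshold=0.7):
    """detections: list of (image_id, lat, lon, score) tuples from the CNN."""
    kept = [d for d in detections if d[3] >= threshold]
    return sorted(kept, key=lambda d: d[3], reverse=True)  # best first

detections = [
    ("img_0001", -30.81, 128.42, 0.99),  # strong candidate
    ("img_0002", -30.82, 128.40, 0.71),  # marginal: reinspect by drone
    ("img_0003", -30.80, 128.44, 0.30),  # probably a bottle or root
]
for image_id, lat, lon, score in filter_candidates(detections):
    print(f"{image_id}: score {score:.2f} at ({lat}, {lon})")
```

In practice the threshold trades recall for effort: lower it and you walk to more bottles and kangaroo bones; raise it and you risk walking past the meteorite.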
Before we reach the conclusion, it’s worth pausing to ask why meteorites are worth chasing. Space scientists will say that some date from the beginnings of the solar system. Some contain amino acids, those most basic building blocks of life. A few are large enough to do harm. Others, Anderson points out, contain rare elements, perhaps valuable for future technologies but hard to mine on Earth.
So there was a lot to think about in the desert heat—life, the universe, the reliability of their algorithm—as Anderson and his two comrades paced the ground looking for a blackened rock.
“Then one of my friends on the trip, John Fairweather, said one of the most annoying things you can hear at that moment—like, ‘Hey, is this the meteorite?’” Anderson says. He thought it was a joke. “And I thought, ‘That’s not funny right now, John.’ And I looked over and, literally, he’s got the rock.”
The meteorite, named DFN 09, is shown here with a pen for scale.Seamus Anderson/Curtin University
Anderson looked around to be sure the surroundings matched what the overhead drone image had shown. They did. The rock was a chondrite, a common type of stony meteorite. It was 5 centimeters long, about the size of an egg, and weighed 70 grams. Most important to Anderson, the algorithm had given this particular patch of ground a score of 1.0—a perfect match.
“And I stood there, and I basically just screamed for a minute or two. Yes, it was awesome.”
Match ID: 176 Score: 3.57 source: spectrum.ieee.org age: 93 days qualifiers: 3.57 google
Are you searching for an e-commerce platform to help you build an online store and sell products?
In this Sellfy review, we'll talk about how this eCommerce platform can let you sell digital products while keeping full control of your marketing.
And the best part? Starting your business can be done in just five minutes.
Let us then talk about the Sellfy platform and all the benefits it can bring to your business.
What is Sellfy?
Sellfy is an eCommerce solution that allows digital content creators, including writers, illustrators, designers, musicians, and filmmakers, to sell their products online. Sellfy provides a customizable storefront where users can display their digital products and embed "Buy Now" buttons on their website or blog. Sellfy product pages enable users to showcase their products from different angles with multiple images and previews from Soundcloud, Vimeo, and YouTube. Files of up to 2GB can be uploaded to Sellfy, and the company offers unlimited bandwidth and secure file storage. Users can also embed their entire store or individual project widgets in their site, with the ability to preview how widgets will appear before they are displayed.
Sellfy is a powerful e-commerce platform that helps you personalize your online storefront. You can add your logo, change colors, revise navigation, and edit the layout of your store. Sellfy also allows you to create a full shopping cart so customers can purchase multiple items. And Sellfy gives you the ability to set your language or let customers see a translated version of your store based on their location.
Sellfy gives you the option to host your store directly on its platform, add a custom domain to your store, and use it as an embedded storefront on your website. Sellfy also optimizes its store offerings for mobile devices, allowing for a seamless checkout experience.
Sellfy allows creators to host and sell all of their digital products on one platform. Sellfy does not place storage limits on your store but recommends that files be no larger than 5GB. Creators can sell both standard and subscription-based products in any file format supported by the online marketplace, and customers can access their purchases instantly – there is no waiting period.
You can organize your store by creating product categories, sorting by any characteristic you choose. The title, description, and image will be included on each product page so customers can immediately evaluate all of your products. You can offer different pricing options, including "pay what you want," in which the price is entirely up to the customer. This option allows you to give customers control over the cost of individual items (without a minimum price) or to set pricing minimums—a good option if you're in a competitive market or have higher-end products. You can also offer set prices per product as well as free products to help build your store's popularity.
Sellfy is ideal for selling digital content, such as ebooks. But it does not allow you to sell copyrighted material (that you don't have the rights to distribute).
Sellfy offers several ways to share your store, enabling you to promote your business on different platforms. Sellfy lets you integrate it with your existing website using "buy now" buttons, embed your entire storefront, or embed certain products so you can reach more people. Sellfy also enables you to connect with your Facebook page and YouTube channel, maximizing your visibility.
Payments and security
Sellfy is a simple online platform that allows customers to buy your products directly through your store. Sellfy has two payment processing options: PayPal and Stripe. You will receive instant payments with both of these processors, and your customer data is protected by Sellfy's secure (PCI-compliant) payment security measures. In addition to payment security, Sellfy provides anti-fraud tools to help protect your products including PDF stamping, unique download links, and limited download attempts.
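Sellfy's own implementation is not public, but anti-fraud measures like unique, limited download links are commonly built from signed, expiring URLs. Here is a generic sketch of that technique; the domain, secret, and parameter names are invented.

```python
import hashlib
import hmac
import time

SECRET = b"store-secret-key"  # hypothetical per-store signing secret

def make_download_link(product_id, buyer_id, ttl_seconds=3600):
    """Build a unique download URL that expires after ttl_seconds."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{product_id}:{buyer_id}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (f"https://example-store.test/dl/{product_id}"
            f"?buyer={buyer_id}&exp={expires}&sig={sig}")

def verify(product_id, buyer_id, expires, sig):
    """Reject tampered or expired links before serving the file."""
    payload = f"{product_id}:{buyer_id}:{expires}".encode()
    good = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, sig) and time.time() < expires
```

Because the signature covers the buyer and expiry time, a shared link stops working once it expires, and editing any parameter invalidates it.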
Marketing and analytics tools
The Sellfy platform includes marketing and analytics tools to help you manage your online store. You can send email product updates and collect newsletter subscribers through the platform. With Sellfy, you can also offer discount codes and product upsells, as well as create and track Facebook and Twitter ads for your store. The software's analytics dashboard will help you track your best-performing products, generated revenue, traffic channels, top locations, and overall store performance.
To expand functionality and make your e-commerce store run more efficiently, Sellfy offers several integrations. Google Analytics and Webhooks, as well as integrations with Patreon and Facebook Live Chat, are just a few of the options available. Sellfy allows you to connect to Zapier, which gives you access to hundreds of third-party apps, including tools like Mailchimp, Trello, Salesforce, and more.
Sellfy has its benefits and downsides, but fortunately, the pros outweigh the cons.
It takes only a few minutes to set up an online store and begin selling products.
You can sell your products on a single storefront, even if you are selling multiple product types.
Sellfy supports selling a variety of product types, including physical items, digital goods, subscriptions, and print-on-demand products.
Sellfy offers a free plan for those who want to test out the features before committing to a paid plan.
You get paid the same day you make a sale. Sellfy doesn't delay your funds as some other payment processors do.
Print-on-demand services are available directly from your store, so you can sell merchandise to fans without setting up an integration.
You can conduct all store-related activities via the mobile app and all online stores have mobile responsive designs.
Everything you need to make your website is included: custom domain hosting, security for your files, and the ability to customize your store.
The file security features help protect your digital property by letting you add PDF stamps, set download limits, and enable SSL encryption.
Sellfy provides unlimited support.
Sellfy provides simple and intuitive tax and VAT configuration settings.
Marketing strategies include coupons, email marketing, upselling, tracking pixels, and cart abandonment.
Although the free plan is helpful, it limits you to only 10 products.
Payment plans often require an upgrade if you exceed a certain sales amount per year.
The storefront designs are clean, but there are no unique templates for creating a completely different brand image.
Sellfy's branding is only removed from your hosted store when you upgrade to the $49-per-month Business plan.
The free plan does not allow for selling digital or subscription products.
In this article, we have taken a look at some of the biggest benefits of using Sellfy for eCommerce. Once you compare these benefits to what you get with other platforms such as Shopify, you should find it worth your time to consider Sellfy for your business. If you still have questions after reading, let me know in the comment section below; I will be happy to answer them.
Note: This article contains affiliate links, which means we make a small commission if you buy a Sellfy premium plan through our link.
Match ID: 177 Score: 3.57 source: www.crunchhype.com age: 105 days qualifiers: 3.57 google
Content creation is one of the biggest struggles for many marketers and business owners. It often requires both time and financial resources, especially if you plan to hire a writer. Today, we have a fantastic opportunity to use other people's products by purchasing Private Label Rights.
To find a good PLR website, first, determine the type of products you want to acquire. One way to do this is to choose among membership sites or PLR product stores. Following are 10 great sites that offer products in both categories.
What are PLR websites?
Private Label Rights (PLR) products are digital products that can be in the form of an ebook, software, online course videos, value-packed articles, etc. You can use these products with some adjustments to sell as your own under your own brand and keep all the money and profit yourself without wasting your time on product creation. The truth is that locating the best website for PLR materials can be a time-consuming and expensive exercise. That’s why we have researched, analyzed, and ranked the best 10 websites:
PLR.me is one of the best places to get PLR content in 2021-2022. It offers a content marketing system that comes with courses, brandable tools, and more, and it is among the most trusted PLR websites. The platform features ready-to-use PLR tools for health and wellness professionals and, built on advanced caching technology, has been well-received by big brands such as the Toronto Sun and Entrepreneur. The best thing about this website is its content marketing automation tools.
Pay-as-you-go Plan – $22
100 Monthly Plan – $99/month
400 Annual Plan – $379/year
800 Annual Plan – $579/year
2500 Annual Plan – $990/year
Access over 15,940+ ready-to-use PLR coaching resources.
Content marketing and sliding tools are provided by the site.
You can create courses, products, webinars, emails, and nearly anything else you can dream of.
You can cancel your subscription anytime.
Compared to other top PLR sites, this one is a bit more expensive.
InDigitalWorks is a leading private label rights membership website established in 2008. More than 100,000 members from around the globe have joined the platform. The site offers thousands of ready-to-sell digital products for online businesses in every niche imaginable. InDigitalWorks features hundreds of electronic books, software applications, templates, graphics, and videos that you can sell right away.
3 Months Plan – $39
1 Year Plan – $69
Lifetime Plan – $79
IndigitalWorks promotes new authors by providing them with 200 free products for download.
Largest and most reputable private label rights membership site.
20000+ digital products
137 training videos provided by experts to help beginners set up and grow their online presence for free.
10 GB of web hosting will be available on a reliable server.
Some users experience the frustration of not getting the help they need.
BuyQualityPLR’s website is a top PLR site of 2021-2022 and a source for major Internet Marketing products and resources. Whether you’re an affiliate marketer, product creator, or course seller, BuyQualityPLR can point you in the right direction. You will find several eBooks and digital products related to the Health and Fitness niche, along with a series of security-based products. If you search for digital products, Resell Rights Products, Private Label Rights Products, or Internet Marketing Products, BuyQualityPLR is among the best websites for your needs.
Free PLR articles packs, ebooks, and other digital products are available
Prices range from $3.99 to $99.90
Everything on this site is written by professionals
Quick download features are available
Doesn't provide membership.
Offers thousands of PLR content items in many niches
Valuable courses available
You can't access all the content at once because it doesn't offer a membership
The IDPLR website has helped thousands of internet marketers since 2008. This website follows a membership approach and allows you to gain access to thousands of PLR products in different niches. The best thing about this site is the quality of the products, which is extremely impressive. This is the best PLR website of 2021-2022, offering over 200k+ high-quality articles. It also gives you graphics, templates, ebooks, and audio.
3 Months ACCESS: $39
1 YEAR ACCESS: $69
LIFETIME ACCESS: $79
You will have access to over 12,590 PLR products.
You will get access to training tutorials and Courses in a Gold membership.
10 GB of web hosting will be available on a reliable server.
You will receive 3D eCover Software
It offers an unlimited download limit
Most important, you will get a 30 day money-back guarantee
A few products are available for free membership.
PLRmines is a leading digital product library for private label rights products. The site provides useful information on products that you can use to grow your business, as well as licenses for reselling the content. You can either purchase a membership or get access through a free trial, and you can find unlimited high-quality resources via the site's paid or free membership. Overall, the site is an excellent resource for finding outstanding private label rights content.
Lifetime membership: $97
4000+ ebooks from top categories
Members have access to more than 660 instructional videos covering all kinds of topics in a membership area.
You will receive outstanding graphics that are ready to use.
They also offer a variety of helpful resources and tools, such as PLR blogs, WordPress themes, and plugins
The free membership won't give you much value.
Super-Resell is another remarkable provider of PLR material. The platform was established in 2009 and offers valuable PLR content to users. Currently, the platform offers standard lifetime memberships and monthly plans at an affordable price. Interested users can purchase up to 10,000 products with digital rights or rights of resale. Super-Resell offers a wide range of products such as readymade websites, article packs, videos, ebooks, software, templates, and graphics.
6 Months Membership: $49.90
Lifetime membership: $129
It offers you products that come with sales pages and those without sales pages.
You'll find thousands of digital products that will help your business grow.
Daily News update
The company has set up an automatic renewal system, which can result in charges even when you are no longer using the service.
7. Unstoppable PLR
UnStoppablePLR was launched in 2006 by Aurelius Tjin, an internet marketer. Over the last 15 years, UnStoppablePLR has provided massive value to users by offering high-quality PLR content. The site is one of the best PLR sites because of its affordability and flexibility.
Regular Price: $29/Month
You’ll get 30 PLR articles in various niches for free.
100% money-back guarantee.
Members get access to community
It gives you access to professionally designed graphics and much more.
People often complain that not enough PLR products are released each month.
8. Resell Rights Weekly
Resell Rights Weekly, a private label rights (PLR) website, provides exceptional PLR content. It is among the top PLR websites that provide free membership. You will get 728+ PLR products completely free, plus new products every single week. Resell Rights Weekly gives you free instant access to all products so you can download the ones you require.
Gold Membership: $19.95/Month
Lots of products available free of cost
Free access to the members forum
The products at this PLR site are of lower quality than the same items sold on other websites.
MasterResellRights was established in 2006, and it has helped many successful entrepreneurs. Once you join MasterResellRights, you will get access to more than 10,000 products and services from other members. It is one of the top PLR sites that provide high-quality PLR products to members across the globe. You will be able to access a lot of other membership privileges at no extra price. The website also provides PLR, MRR, and RR license products.
Access more than 10,000 high-quality PLR articles in different niches.
Get fresh new updates daily.
Users get 8 GB of hosting space.
You can pay using PayPal.
Only members have access to the features of this site.
BigProductStore is a popular private label rights website that offers tens of thousands of digital products. These include software, videos, video courses, eBooks, and many others that you can resell, use as you want, or sell and keep 100% of the profit. The PLR website updates its product list daily. It currently offers over 10,000 products. The site offers original content for almost every niche and when you register as a member, you can access the exclusive products section where you can download a variety of high-quality, unique, and exclusive products.
Monthly Plan: $19.90/Month 27% off
One-Time-Payment: $98.50 50% off
Monthly Ultimate: $29.90/Month 36% off
One-Time-Payment Ultimate: $198.50 50% off
You can use PLR products to generate profits, give them as bonuses for your affiliate promotion campaign, or rebrand them and create new unique products.
Lifetime memberships for PLR products can save you money if you’re looking for a long-term solution to bulk goods.
The website is updated regularly with fresh, quality content.
Product descriptions may not provide much detail, so it can be difficult to know just what you’re downloading.
Some product categories such as WP Themes and articles are outdated.
Match ID: 178 Score: 3.57 source: www.crunchhype.com age: 119 days qualifiers: 3.57 google
If you are looking for the best WordPress plugins, you are in the right place. Here is a list of the best WordPress plugins to use on your blog to boost SEO, strengthen security, and track every aspect of your site. Creating good content is one factor, but many WordPress plugins perform different actions that add to your success. So let's start.
Those users who are serious about SEO, Yoast SEO will do the work for them to reach their goals. All they need to do is select a keyword, and the plugin will then optimize your page according to the specified keyword
Yoast offers many popular SEO WordPress plugin functions. It gives you real-time page analysis to optimize your content, images, meta descriptions, titles, and keywords. Yoast also checks the length of your sentences and paragraphs, whether you’re using enough transition words or subheadings, how often you use passive voice, and so on. Yoast tells Google whether or not to index a page or a set of pages too.
Let me summarize these points in bullets:
Enhance the readability of your article to reduce bounce rate
Optimize your articles with targeted keywords
Let Google know who you are and what your site is about
Improve your on-page SEO with advanced, real-time guidance and advice on keyword usage, internal linking, and external linking.
Keep your focus keywords consistent to help rank better on Google.
Preview how your page would appear in the search engine results page (SERP)
Crawl your site daily to ensure Google indexes it as quickly as possible.
Rate your article informing you of any mistakes you might have made so that you can fix them before publishing.
Stay up-to-date with Google’s latest algorithm changes and adapt your on-page SEO as needed with smart suggestions from the Yoast SEO plugin, which is always kept current.
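As a rough illustration of the readability checks listed above (sentence length, transition words), here is a toy analyzer. It is not Yoast's algorithm; the word list and the 20-word threshold are invented.

```python
import re

# Invented mini-list; real transition-word lists are much larger.
TRANSITIONS = {"however", "therefore", "moreover", "also", "finally"}

def readability_report(text):
    """Count sentences, overly long sentences, and sentences with transitions."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    too_long = [s for s in sentences if len(s.split()) > 20]
    with_transitions = [
        s for s in sentences
        if TRANSITIONS & {w.lower().strip(",") for w in s.split()}
    ]
    return {
        "sentences": len(sentences),
        "too_long": len(too_long),
        "with_transitions": len(with_transitions),
    }

report = readability_report(
    "Short sentences are easy to read. However, long ones can hurt readability."
)
print(report)  # -> {'sentences': 2, 'too_long': 0, 'with_transitions': 1}
```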
Free Version is available
Premium version: $89/year, which adds extra functions, such as optimizing your content for up to five keywords, among other benefits.
2. WP Rocket
A website running WordPress can put a lot of strain on a server, which increases the chances that the website will crash and harm your business. To avoid such an unfortunate situation and ensure that all your pages load quickly, you need a caching plugin like WP Rocket.
The WP Rocket plugin is designed to increase your website speed. Instead of waiting for pages to be saved to cache, WP Rocket turns on desired caching settings, like page caching and GZIP compression. The plugin also activates other features, such as CDN support and lazy image loading, to enhance your site speed.
Features in bullets:
Preloading the cache of pages
Reducing the number of HTTP requests allows websites to load more quickly.
Decreasing bandwidth usage with GZIP compression
Apply optimal browser caching headers (expires)
Remove Unused CSS
Deferred loading of images (LazyLoad)
Critical Path CSS generation and deferred loading of CSS files
WordPress Heartbeat API control
Easy import/export of settings
Easy roll back to a previous version
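To see why the GZIP option matters for bandwidth, here is a quick generic demonstration (ordinary Python, not WP Rocket's code): repetitive HTML compresses to a small fraction of its original size.

```python
import gzip

# A page fragment with lots of repeated markup, as real product listings have.
html = ("<li class='item'>product</li>\n" * 500).encode()
compressed = gzip.compress(html)

print(len(html), "bytes raw,", len(compressed), "bytes gzipped")
assert gzip.decompress(compressed) == html  # compression is lossless
```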
Single License =$49/year for one website
Plus License =$99/year for 3 websites
Infinite License =$249/year for unlimited websites
Wordfence Security is a WordPress firewall and security scanner that keeps your site safe from malicious hackers, spam, and other online threats. This plugin comes with a web application firewall (WAF) called the Threat Defense Feed that helps prevent brute-force attacks by ensuring you set stronger passwords and limiting login attempts. It searches for malware and compares code, theme, and plugin files with the records in the WordPress.org repository to verify their integrity, reporting any changes to you.
Wordfence security scanner provides you with actionable insights into your website's security status and will alert you to any potential threats, keeping it safe and secure. It also includes login security features that let you activate reCAPTCHA and two-factor authentication for your website.
Features in Bullets.
Scans your site for vulnerabilities.
Alerts you by email when new threats are detected.
Supports advanced login security measures.
IP addresses may be blocked automatically if suspicious activity is detected.
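The login-attempt limiting mentioned above can be sketched generically; the five-attempt threshold and function names are invented, and this is not Wordfence's implementation.

```python
from collections import defaultdict

MAX_ATTEMPTS = 5  # invented threshold
failures = defaultdict(int)  # failed attempts per IP address

def attempt_login(ip, password_ok):
    """Return 'blocked', 'ok', or 'denied' for a login attempt from an IP."""
    if failures[ip] >= MAX_ATTEMPTS:
        return "blocked"  # brute-force cutoff reached
    if password_ok:
        failures[ip] = 0  # success resets the counter
        return "ok"
    failures[ip] += 1
    return "denied"
```

Once an IP hits the cutoff, even a correct password is refused until the block is lifted, which is what defeats brute-force guessing.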
Premium Plan: $99/year, which comes with extra security features like the real-time IP blocklist and country blocking, plus support from highly qualified experts.
Akismet can help prevent spam from appearing on your site. Every day, it automatically checks every comment against a global database of spam to block malicious content. With Akismet, you also won’t have to worry about false positives: innocent comments caught by the filter. You can simply tell Akismet about those, and it will get better over time. It also checks your contact form submissions against its global spam database and weeds out fake information.
Features in Bullets:
The program automatically checks comments and filters out spam.
Hidden or misleading links are often revealed in the comment body.
Akismet tracks the status of each comment, allowing you to see which ones were caught by Akismet and which ones were cleared by a moderator.
A spam-blocking feature that saves disk space and makes your site run faster.
Moderators can view a list of comments approved by each user.
Free to use for personal blog
5. Contact Form 7
Contact Form 7 is a plug-in that allows you to create contact forms that make it easy for your users to send messages to your site. The plug-in was developed by Takayuki Miyoshi and lets you create multiple contact forms on the same site; it also integrates Akismet spam filtering and lets you customize the styling and fields that you want to use in the form. The plug-in provides CAPTCHA and Ajax submitting.
Features in bullets:
Create and manage multiple contact forms
Easily customize form fields
Use simple markup to alter mail content
Add Lots of third-party extensions for additional functionality
Shortcode support offers a way to insert forms into pages or posts.
Akismet spam filtering, Ajax-powered submitting, and CAPTCHA are all features of this plugin.
Free to use
6. Monster Insights
When you’re looking for an easy way to manage your Google Analytics-related web tracking services, MonsterInsights can help. You can add, customize, and integrate Google Analytics data with ease, so you’ll be able to see how every webpage performs, which online campaigns bring in the most traffic, and which content readers engage with the most. In effect, it brings Google Analytics into your WordPress dashboard.
It is a powerful tool to keep track of your traffic stats. With it, you can view stats for your active sessions, conversions, and bounce rates. You’ll also be able to see your total revenue, the products you sell, and how your site is performing when it comes to referrals.
MonsterInsights offers a free plan that includes basic Google Analytics integration, data insights, and user activity metrics.
Features in bullets:
Demographics and interest reports
Anonymize the IPs of visitors
See how far visitors scroll down your pages
Get insights on multiple links to the same page, showing which links get more clicks
Track sessions on two related sites as a single session
Google AdSense tracking
Get a weekly analytics report of your blog, downloadable as a PDF
Premium plan= $99.50/year that comes with extra features like page and post tracking, Adsense tracking, custom tracking and reports.
7. Pretty Links
Pretty Links is a powerful WordPress plugin that enables you to easily cloak affiliate links on your website. It even allows you to redirect visitors based on a specific request, including permanent 301 and temporary 302/307 redirects.
Pretty Links also helps you automatically shorten URLs for your posts and pages.
You can also enable the auto-linking feature to automatically add affiliate links for certain keywords.
Create clean, easy-to-remember URLs on your website (301, 302, and 307 redirects only)
Random-generator or custom URL slugs
Track the number of clicks
Easy to understand reports
View click details, including IP address, remote host, browser, operating system, and referring site
You can pass custom parameters to your scripts when using pretty permalinks, and still have full tracking capability.
Exclude IP Addresses from Stats
Cookie-based system to track your activity across clicks
Create nofollow/noindex links
Toggle tracking on / off on each link.
Pretty Link Bookmarklet
Update redirected links easily to new URLs!
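Under the hood, link cloaking boils down to mapping a short slug on your own domain to an HTTP redirect while counting clicks. A minimal sketch, with invented slugs and URLs (not Pretty Links' code):

```python
# Map of pretty slugs to (HTTP status, destination); 301 = permanent, 302 = temporary.
REDIRECTS = {
    "/go/hosting": (301, "https://affiliate.example.com/?ref=abc123"),
    "/go/sale": (302, "https://partner.example.com/summer?ref=abc123"),
}

CLICKS = {}  # slug -> click count, the kind of stat the plugin reports

def resolve(path):
    """Return (status, Location) for a cloaked link, or (404, None)."""
    if path in REDIRECTS:
        CLICKS[path] = CLICKS.get(path, 0) + 1
        return REDIRECTS[path]
    return (404, None)
```

Swapping a destination URL is a one-line change to the map, which is why updating redirected links is easy.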
Beginner Plan: $79/year, usable on 1 site
Marketer Plan: $99/year, usable on up to 2 sites
Super Affiliate Plan: $149/year, usable on up to 5 sites
We hope you’ve found this article useful. We appreciate you reading and welcome your feedback if you have it.
Match ID: 179 Score: 3.57 source: www.crunchhype.com age: 134 days qualifiers: 3.57 google
Ginger VS Grammarly: When it comes to grammar checkers, Ginger and Grammarly are two of the most popular choices on the market. This article aims to highlight the specifics of each one so that you can make a more informed decision about the one you'll use.
What is Grammarly?
If you are a writer, you have probably heard of Grammarly before. With over 10 million users across the globe, it is arguably the most popular AI writing enhancement tool, so there is a high chance you already know about it.
But today we are comparing Ginger and Grammarly, so let's define Grammarly here. Like Ginger, Grammarly is an AI writing assistant that checks for grammatical errors, spelling, and punctuation. The free version covers the basics, identifying grammar and spelling mistakes.
The Premium version offers a lot more functionality: it detects plagiarism in your content, suggests better word choices, and improves fluency.
Features of Grammarly
Grammarly detects basic to advanced grammatical errors, explains why something is an error, and suggests how you can improve it.
Create a personal dictionary
Checks spelling for American, British, Canadian, and Australian English.
Detect unclear structure.
Explore overuse of words and wordiness.
Get alerted to an improper tone.
Discover insensitive language so your writing aligns with your intent, audience, style, and emotion.
What is Ginger?
Ginger is a writing enhancement tool that not only catches typos and grammatical mistakes but also suggests content improvements. As you type, it picks up on errors, shows you what’s wrong, and suggests a fix. It also provides synonyms and definitions of words and allows you to translate your text into dozens of languages.
Ginger Software: Features & Benefits
Ginger's software helps you identify and correct common grammatical mistakes, such as errors with consecutive nouns, and provides contextual spelling correction.
The sentence rephrasing feature can help you convey your meaning perfectly.
Ginger acts like a personal coach that helps you practice certain exercises based on your mistakes.
The dictionary feature helps users understand the meanings of words.
In addition, the program provides a text reader, so you can gauge your writing’s conversational tone.
Ginger vs Grammarly
Grammarly and Ginger are two popular grammar checker software brands that help you to become a better writer. But if you’re undecided about which software to use, consider these differences:
Grammarly only supports the English language while Ginger supports 40+ languages.
Grammarly offers a wordiness feature while Ginger lacks a Wordiness feature.
Grammarly shows an accuracy score while Ginger lacks an accuracy score feature.
Grammarly has a plagiarism checker while Ginger doesn't have such a feature.
Grammarly can recognize an incorrect use of numbers while Ginger can’t recognize an incorrect use of numbers.
Grammarly and Ginger both have mobile apps.
Ginger and Grammarly offer monthly, quarterly, and annual plans.
Grammarly allows you to check uploaded documents. while Ginger doesn't check uploaded documents.
Grammarly Offers a tone suggestion feature while Ginger doesn't offer a tone suggestion feature.
Ginger helps to translate documents into 40+ languages while Grammarly doesn't have a translation feature.
Ginger Offers text to speech features while Grammarly doesn't have such features.
Grammarly Score: 7/10
So Grammarly wins here.
Ginger VS Grammarly: Pricing Difference
Ginger offers a Premium subscription for $13.99/month; it comes to $11.19/month on a quarterly plan and $7.49/month on an annual subscription with $40 off.
On the other hand, Grammarly offers a Premium subscription at $30/month for a monthly plan, $20/month for quarterly, and $12/month for an annual subscription.
For companies with three or more employees, the Business plan costs $12.50/month for each member of your team.
Affordable Subscription plans (Additionals discounts are available)
Active and passive voice changer
Translates documents in 40+ languages
Browser extension available
A personal-trainer feature helps you develop your knowledge of grammar.
Text-to-speech feature reads work out loud
Get a full refund within 7 days
Mobile apps aren't free
Limited monthly corrections for free users
No style checker
No plagiarism checker
Not as user-friendly as Grammarly
You cannot upload or download documents; however, you may copy and paste text as needed.
Doesn't offer a free trial
Summarizing the Ginger VS Grammarly: My Recommendation
While both writing assistants are fantastic in their ways, you need to choose the one you want.
For example, go for Grammarly if you want a plagiarism tool included.
Choose Ginger if you want to write in languages other than English. I will list the differences for you to make the distinctions clearer.
Grammarly offers a plagiarism checking tool
Ginger provides a text-to-speech tool
Grammarly helps you check uploaded documents
Ginger supports over 40 languages
Grammarly has a more friendly UI/UX
Both Ginger and Grammarly are awesome writing tools, without a doubt. Depending on your needs, you might want to use Ginger over Grammarly. As per my experience, I found Grammarly easier to use than Ginger.
Let me know which one you like, and share your opinions, in the comments section below.
Match ID: 180 Score: 3.57 source: www.crunchhype.com age: 135 days qualifiers: 3.57 google
But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.
How is AI currently being used to design the next generation of chips?
Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.
Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.
What are the benefits of using AI for chip design?
Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
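The workflow Gorr describes can be sketched in a few lines of Python. Note that the "physics model" below is just a cheap stand-in analytic function, and a polynomial fit is only one simple choice of surrogate; nothing here is specific to MATLAB or chip design.

```python
import numpy as np

# Stand-in for an expensive physics-based solver; in practice each
# evaluation of a model like this could take minutes or hours.
def physics_model(x):
    return np.sin(x) + 0.5 * x**2

# Sample the expensive model at a handful of design points...
x_train = np.linspace(-2.0, 2.0, 15)
y_train = physics_model(x_train)

# ...and fit a cheap polynomial surrogate to those samples.
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

# Parameter sweeps and Monte Carlo runs now hit the surrogate, not the
# solver: 10,000 evaluations cost almost nothing.
sweep = np.linspace(-2.0, 2.0, 10_000)
best_x = sweep[np.argmin(surrogate(sweep))]
print(f"surrogate minimum near x = {best_x:.3f}")
```

Once the sweep narrows down a promising design region, the expensive model can be re-run there to confirm the result, which is the "digital twin" loop discussed below.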
So it’s like having a digital twin in a sense?
Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.
So, it’s going to be more efficient and, as you said, cheaper?
Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.
We’ve talked about the benefits. How about the drawbacks?
Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it's not going to be as accurate as that precise model that we’ve developed over the years.
Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It's a case where you might have models to predict something and different parts of it, but you still need to bring it all together.
One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.
How can engineers use AI to better prepare and extract insights from hardware or sensor data?
Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
What should engineers and designers consider when using AI for chip design?
Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.
How do you think AI will affect chip designers’ jobs?
Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.
How do you envision the future of AI and chip design?
Gorr: It's very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
Match ID: 181 Score: 3.57 source: spectrum.ieee.org age: 137 days qualifiers: 3.57 google
Do you have the desire to become a content creator, but not have the money to start? Here are 7 free websites every content creator needs to know.
1. Exploding Topics (Trending Topics)
(Photo Credit:- Exploding Topics)
If you're a content creator, what better way to find new topic ideas than to see what people are searching for? This tool hands you that data directly. It provides related hashtags and tips on how to use them effectively in your posts. It's a great tool for anyone who wants to keep up to date with what's most relevant in their niche. You can also see the most popular hashtags by country, making it easier to understand cross-border and demographic trends. This site makes your search for content easier than ever! There are countless ways to use Exploding Topics to your advantage as a content creator.
Some examples can be:
Use the most popular hashtags and keywords to get inspiration for ideas.
Find out what people are talking about in real-time.
Find new audiences you may not have known were interested in your topic.
There’s no excuse not to try this website — it’s free and easy to use!
Answer the Public is an excellent tool for content creators. It gives you insight into what people are asking on social media sites and communities and lets you make educated guesses about the topics that matter to your audience. Answer the Public allows you to enter a keyword or topic related to your niche, and it will show results with popular questions and keywords related to your topic. It's an amazing way to get insight into what people are searching for online, helping you identify topics for new blog posts or social media content on platforms like Facebook, Instagram, YouTube, and Twitter, as well as the types of questions people ask and want answered.
With this tool, content creators can quickly and easily check the ranking of their websites and those of other competitors. This tool allows you to see how your website compares to others in different categories, including:
Organic Search Ranking
Surfer SEO is free and the interface is very friendly. It's a great tool for anyone who wants to do quick competitor research or check their site's rankings at any time.
Canva is a free graphic design platform that makes it easy to create invitations, business cards, mobile videos, Instagram posts, Instagram stories, flyers, and more with professionally designed templates. You can even upload your photos and drag and drop them into Canva templates. It's like having a basic version of Photoshop. You can also remove background from images with one click.
Canva offers thousands of free, professionally designed templates that can be customized with just a few clicks. Simply upload your photos to Canva, drag them into the template of your choice, and save the file to your computer.
It is free for basic use, but if you want access to more fonts or features, you need to buy a premium plan.
Facebook Audience Insights is a powerful tool for content creators when researching their target market. This can help you understand the demographics, interests, and behaviors of your target audience. This information helps determine the direction of your content so that it resonates with them. The most important tools to consider in Facebook Audience Insights are Demographics and Behavior. These two sections provide you with valuable information about your target market, such as their age and from where they belong, how much time they spend on social media per day, what devices they use to access it, etc.
There is another helpful section of Facebook Audience Insights. It will show you the interests, hobbies, and activities that people in your target market are most interested in. You can use this information to create content about things they care about, as opposed to topics they may not be so keen on.
Pexels is a warehouse of millions of free, royalty-free images for any content creator who wants high-quality photos that can be used without worrying about permissions or licensing. You are free to use the photos in your content, and there is no watermark on them.
The only cons are that some photos contain people, and Pexels doesn't allow you to remove people from photos. Search your keyword and download as many as you want!
So there you have it. We hope that these specially curated websites will come in handy for content creators and small businesses alike. If you've got a site that should be on this list, let us know! And if you're looking for more content-creator resources, let us know in the comments section below.
Match ID: 182 Score: 3.57 source: www.crunchhype.com age: 149 days qualifiers: 3.57 google
The 10-day search in the Amazon for Indigenous expert Bruno Pereira and journalist Dom Phillips came to an end on Wednesday. Their deaths have horrified Brazil and underline the dangers faced by those defending the country's environment and Indigenous communities. Jonathan Watts, the Guardian's global environment editor and Phillips's friend, provides insight into their lives, their work and their legacy
In wireless communications, noise from the environment or other signals is almost always seen as something to overcome. The Mountain View, Calif.–based Artemis Networks, however, has a different idea.
Artemis has developed a system in which a group of antennas work together to flood an area with what on the surface looks like noise but in fact resolves into coherent signals at the receiving end. The company’s technique promises to be extremely spectrally efficient.
Artemis’s technology, which it calls pCells (short for “personal cells”), relies on radio waves’ additive nature. That is, when two signals cross paths, they will add to or subtract from one another’s phase, amplitude, and modulation.
Traditionally, this fact is treated as a nuisance by wireless communication schemes. But by deploying multiple antennas working together in an area, Artemis instead sends out carefully crafted noise that coheres into actual signals around individual devices. “So the trick is, yes, garbage comes out of the antennas,” says Steve Perlman, the founder and CEO of Artemis. “But the summation of those very carefully crafted noise signals adds up at the point of reception to be a completely coherent signal.”
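The additive trick can be illustrated with a toy numerical sketch. This is a deliberately idealized setup (unit-gain channels, no noise floor, and one waveform solved for in closed form); a real pCell system instead precodes every antenna's waveform against measured per-device channel state.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
target = np.sin(2 * np.pi * 5 * t)   # waveform the device should receive

# Three of the four antenna waveforms are pure random "garbage"...
tx = [rng.normal(size=t.size) for _ in range(3)]
# ...and the fourth is crafted so the superposition equals the target.
tx.append(target - sum(tx))

# Radio waves add linearly, so the device's antenna sees the sum.
received = sum(tx)
print("max deviation from target:",
      float(np.max(np.abs(received - target))))
```

Each individual transmission is statistically indistinguishable from noise, yet the point of reception recovers a clean sine wave, which is the essence of Perlman's "carefully crafted noise."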
Wireless devices, as part of their normal operation, send out regular updates called sounding reference signals (SRS) that enable a network to assess connection strength. A pCell network can use an SRS to establish the location of a specific device and shape a bubble of clear reception just a few millimeters in size around the device’s antenna. In 4G LTE and 5G networks, devices send an SRS every 5 milliseconds, meaning the pCell network can track devices in near–real time.
Artemis’s technology requires enough computer processing to assess where devices are, construct the appropriate noise to transmit so that it will resolve around the devices, and untangle received noise to determine what data has been sent back to the network by devices. Currently, according to Perlman, it can be done with three servers, each with dual 64-core AMD CPUs. He expects the company will be able to get it down to just one server within a year.
According to Perlman, the two big advantages of a pCell network are uniform coverage and spectral efficiency. In a typical network, a device tries to connect to the closest antenna and, if necessary, switches over to a new one when it moves. This creates unstable regions of coverage where the boundaries of individual cells interact. With a pCell network, all of the antennas are working together to blanket the area with the requisite noise. Wherever a device goes, its pCell bubble moves with it.
The other advantage, spectral efficiency, is perhaps more important. While spectral efficiency has been increasing with each new wireless generation, the rate of increase has slowed. A 5G base station using a 4-by-4 MIMO antenna array can send 1.7 times as many bits per second per hertz of bandwidth compared to a 5G base station using just a single antenna. Jump that up to a 16-by-16 array—which has 4 times the total number of antennas as a 4-by-4 array—and the spectral efficiency reaches only 2.9 times as many bits per second per hertz of bandwidth.
In a pCell network, however, a 16-by-16 antenna array can send 43 times as many bits per second per hertz of bandwidth, compared to a single antenna. And it seems to scale beyond that.
Artemis Networks pWave antennas—here shown mounted at the SAP Center arena in San Jose, Calif.—send out carefully crafted noise that resolves into coherent signals at a user’s phone or other device.Artemis Networks
Artemis announced its first large-scale pCell installation in May, at the SAP Center in San Jose, Calif., a hockey arena and concert venue. The network uses 56 antennas to cover the 13,000-square-meter arena. “If I increase to 112 antennas,” says Perlman, “there’s no reason for me to believe that it would not double its spectral efficiency” compared to the 56 installed antennas.
The company first announced a practical pCell network back in 2014. The announcement was met with initial skepticism, although according to Giuseppe Caire, a professor of electrical engineering at the Technical University of Berlin, that was (ironically enough) partly due to a miscommunication. “They marketed this thing as ‘We have this revolutionary idea called pCells, it’s totally different from whatever has been done before.’ ” But Caire says the theoretical ideas underpinning the tech—now more generally referred to as cell-free massive MIMO (multiple input, multiple output)—had been circulating in various forms for decades.
Caire was initially skeptical of what Artemis had accomplished, but once he visited the company in San Francisco, he quickly realized the significance of what it had achieved. Caire (who is now a technical advisor to the company) sees as Artemis’s big breakthrough its ability to integrate a cell-free massive MIMO approach with existing 4G LTE and 5G standards.
Now, eight years later, Perlman says any holdup in rolling out pCell networks has never been due to technology. Instead, the bottleneck has always been in getting access to spectrum, he says. After failed attempts to work with existing industry partners, the company eventually decided to go it alone when the Citizens Broadband Radio Service (CBRS) band in the 3.5-GHz band was established by the U.S. Federal Communications Commission in 2017.
Aside from sports arenas and indoor venues like the SAP Center, other potential fits for pCell networks are university campuses, warehouses, and other locations that would otherwise turn to localized private networks or require high cell density. In the future, Caire sees a natural development of cellular technology in which cell-free massive MIMO networks provide coverage in dense, high-traffic areas, while older, cell-based technologies like cell towers continue to provide coverage in rural areas.
Beyond cellular, Perlman sees one of the greatest opportunities for pCell networks in augmented reality. AR requires high data rates and extremely low latencies to feel immersive and prevent motion sickness. If pCell networks can provide uniform and spectrally efficient coverage, they may be a good fit for upcoming AR glasses, such as Apple’s rumored effort.
This story was updated 16 June 2022 to correct the location of the SAP Center.
Match ID: 191 Score: 2.86 source: spectrum.ieee.org age: 9 days qualifiers: 1.43 development, 1.43 apple
How China Hacked US Phone Networks Sat, 11 Jun 2022 13:00:00 +0000 Plus: Russia rattles its cyber sword, a huge Facebook phishing operation is uncovered, feds take down the SSNDOB marketplace, and more. Match ID: 192 Score: 2.86 source: www.wired.com age: 14 days qualifiers: 2.86 feds
Apple’s M1 processor made a big splash on its November 2020 release, noteworthy for its eye-popping performance and miserly power consumption. But the value of its security may not be as obvious at first blush. A lack of serious attacks since its launch nearly two years ago indicates that its security systems, among them a last line of defense called pointer authentication codes, are working well. But its honeymoon period could possibly be coming to an end.
At the International Symposium on Computer Architecture later this month, researchers led by MIT’s Mengjia Yan will present a mode of attack that so weakens the pointer authentication code (PAC) defense that the core of a computer’s operating system is made vulnerable. And because PACs may be incorporated in future processors built from the 64-bit Arm architecture, the vulnerability could become more widespread. It’s possible that other processors are already using PACs, but the M1 was the only one available to Yan’s lab.
“What we found is actually quite fundamental,” says Yan. “It’s a class of attack. Not one bug.”
How PACMAN picks the lock goes to the heart of modern computing.
The vulnerability, called PACMAN, assumes that there is already a software bug in operation on the computer that can read and write to different memory addresses. It then exploits a detail of the M1 hardware architecture to give the bug the power to execute code and possibly take over the operating system. “We assume the bug is there and we make it into a more serious bug,” says Joseph Ravichandran, a student of Yan’s who worked on the exploit with fellow students Weon Taek Na and Jay Lang.
To understand how the attack works you have to get a handle on what pointer authentication is and how a detail of processor architecture called speculative execution works. Pointer authentication is a way to guard against software attacks that try to corrupt data that holds memory addresses, or pointers. For example, malicious code might execute a buffer overflow attack, writing more data than expected into a part of memory, with the excess spilling over into a pointer’s address and overwriting it. That might then mean that instead of the computer’s software executing code stored at the original address, it is diverted to malware stored at the new one.
Pointer authentication appends a cryptographic signature to the end of the pointer. If there’s any malicious manipulation of the pointer, the signature will no longer match up with it. PACs are used to guard the core of the system’s operating system, the kernel. If an attacker got so far as to manipulate a kernel pointer, the mismatch between the pointer and its authentication code would produce what’s called an “exception,” and the system would crash, ending the malware’s attack. Malware would have to be extremely lucky to guess the right code, about 1 in 65,000.
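Conceptually, signing and checking a pointer looks something like the following toy Python analogue. Real Arm PACs are computed in hardware with the QARMA block cipher, not HMAC-SHA256, and the code width depends on the virtual-address configuration; a 16-bit code is assumed here to match the roughly 1-in-65,000 odds quoted above.

```python
import hmac
import hashlib

KEY = b"per-boot secret"    # hypothetical; real PAC keys live in hardware
PTR_MASK = (1 << 48) - 1    # assume the top 16 bits of a pointer are unused

def sign_pointer(ptr, context):
    """Toy analogue of a PAC-signing instruction: pack a truncated MAC
    into the pointer's unused upper bits."""
    msg = ptr.to_bytes(8, "little") + context.to_bytes(8, "little")
    pac = int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest()[:2],
                         "little")
    return (pac << 48) | ptr

def authenticate(signed, context):
    """Toy analogue of a PAC-check instruction: raise on a mismatch,
    as the kernel would crash with an exception."""
    ptr = signed & PTR_MASK
    if sign_pointer(ptr, context) != signed:
        raise RuntimeError("PAC mismatch")
    return ptr

ptr = 0x7F00_DEAD_BEE0
signed = sign_pointer(ptr, context=42)
assert authenticate(signed, context=42) == ptr   # legitimate use passes
# A corrupted pointer fails with overwhelming odds (a 1-in-65,536 guess):
# authenticate(signed ^ 0x10, context=42)  ->  RuntimeError
```

The attacker's problem, then, is that each wrong guess at the 16-bit code triggers the exception, which is exactly the crash-free guessing loop PACMAN sets out to build.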
PACMAN finds a way for malware to keep guessing over and over without any wrong guesses triggering a crash. How it does this goes to the heart of modern computing. For decades now, computers have been speeding up processing using what’s called speculative execution. In a typical program, which instruction should follow the next often depends on the outcome of the previous instruction (think if/then). Rather than wait around for the answer, modern CPUs will speculate—make an educated guess—about what comes next and start executing instructions along those lines. If the CPU guessed right, this speculative execution has saved a bunch of clock cycles. If it turns out to have guessed wrong, all the work is thrown out, and the processor begins along the correct sequence of instructions. Importantly, the mistakenly computed values are never visible to the software. There is no program you could write that would simply output the results of speculative execution.
Initial solutions to PACMAN only tended to increase the processor’s overall vulnerability.
However, over the past several years, researchers have discovered ways to exploit speculative execution to do things like sneak data out of CPUs. These are called side-channel attacks, because they acquire data by observing indirect signals, such as how much time it takes to access data. Spectre and Meltdown are perhaps the best known of these side-channel attacks.
Yan’s group came up with a way to trick the CPU into guessing pointer authentication codes in speculation so an exception never arises, and the OS doesn’t crash. Of course, the answer is still invisible to software. But a side-channel trick involving stuffing a particular buffer with data and using timing to uncover which part the successful speculation replaces, provides the answer. [A similar concept is explained in more detail in “How the Spectre and Meltdown Hacks Really Worked,” IEEE Spectrum, 28 February 2019.]
With regard to PACMAN, Apple’s product team provided this response to Yan’s group:
“We want to thank the researchers for their collaboration as this proof-of-concept advances our understanding of these techniques. Based on our analysis, as well as the details shared with us by the researchers, we have concluded this issue does not pose an immediate risk to our users and is insufficient to bypass device protections on its own.”
Other researchers familiar with PACMAN say that how dangerous it really is remains to be seen. However, PACMAN “increases the number of things we have to worry about when designing new security solutions,” says Nael Abu-Ghazaleh, chair of computer engineering at University of California, Riverside, and an expert in architecture security, including speculative execution attacks. Processors makers have been adding new security solutions to their designs besides pointer authentication in recent years. He suspects that now that PACMAN has been revealed, other research will begin to find speculative attacks against these new solutions.
Yan’s group explored some naive solutions to PACMAN, but they tended to increase the processor’s overall vulnerability. “It’s always an arms race,” says Keith Rebello, the former program manager of DARPA’s System Security Integrated Through Hardware and firmware (SSITH) program and currently a senior technical fellow at the Boeing Company. PACs are there “to make it much harder to exploit a system, and they have made it a lot harder. But is it the complete solution? No.” He’s hopeful that tools developed through SSITH, such as rapid re-encryption, could help.
Abu-Ghazaleh credits Yan’s group with opening a door to a new aspect of processor security.
“People used to think software attacks were standalone and separate from hardware attacks,” says Yan. “We are trying to look at the intersection between the two threat models. Many other mitigation mechanisms exist that are not well studied under this new compounding threat model, so we consider the PACMAN attack as a starting point.”
Match ID: 193 Score: 2.86 source: spectrum.ieee.org age: 15 days qualifiers: 1.43 california, 1.43 apple
In the midst of the COVID-19 pandemic, in 2020, many research groups sought an effective method to determine mobility patterns and crowd densities on the streets of major cities like New York City to give insight into the effectiveness of stay-at-home and social distancing strategies. But sending teams of researchers out into the streets to observe and tabulate these numbers would have involved putting those researchers at risk of exposure to the very infection the strategies were meant to curb.
Researchers at New York University’s (NYU) Connected Cities for Smart Mobility towards Accessible and Resilient Transportation (C2SMART) Center, a Tier 1 USDOT-funded University Transportation Center, developed a solution that not only eliminated the risk of infection to researchers and could easily be plugged into existing public traffic-camera infrastructure, but also provided the most comprehensive data on crowd and traffic densities ever compiled, capturing patterns that conventional traffic sensors cannot easily detect.
To accomplish this, C2SMART researchers leveraged publicly available New York City Department of Transportation (DOT) video feeds covering over 700 locations throughout New York City and applied a deep-learning, camera-based object detection method that enabled researchers to calculate pedestrian and traffic densities without ever needing to go out onto the streets.
“Our idea was to take advantage of these DOT camera feeds and record them so we could better understand social distancing behavior of pedestrians,” said Kaan Ozbay, Director of C2SMART and Professor at NYU.
To do this, Ozbay and his team wrote a “crawler”—essentially a tool to index the video content automatically—to capture the low-quality images from the video feeds available on the internet. They then used an off-the-shelf deep-learning image-processing algorithm to process each frame of the video to learn what each frame contains: a bus, a car, a pedestrian, a bicycle, etc. The system also blurs out any identifying images such as faces, without impacting the effectiveness of the algorithm.
The system developed by the NYU team can help inform decision-makers’ understanding of a wide range of questions, ranging from crisis-management responses such as social distancing behaviors to traffic congestion.
“This allows us to identify what is in the frame to determine the relationship between the objects in that frame,” said Ozbay. “Then, based on a new method we devised that obviates the need for actual in-situ referencing, we’re able to accurately measure the distance between people in the frame to see if they are too close to each other, or it’s just too crowded.”
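Once each frame yields estimated pedestrian positions, flagging too-close pairs is a simple pairwise check. A minimal sketch, with made-up ground-plane coordinates standing in for the detector's output:

```python
import itertools
import math

# Hypothetical per-frame detector output: estimated ground-plane
# positions of each pedestrian, in metres.
pedestrians = [(0.0, 0.0), (1.2, 0.4), (5.0, 5.0), (5.3, 5.1)]

THRESHOLD_M = 2.0   # a common social-distancing guideline

# Check every pair of pedestrians for a distance below the threshold.
violations = [
    (a, b)
    for a, b in itertools.combinations(range(len(pedestrians)), 2)
    if math.dist(pedestrians[a], pedestrians[b]) < THRESHOLD_M
]
density = len(pedestrians)   # pedestrians per frame, for this camera view
print(f"{density} pedestrians, {len(violations)} close pairs: {violations}")
```

Aggregated over hundreds of cameras and months of footage, counts like these are what let the team compare crowding patterns before and during the lockdown.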
The easy thing would have been to just count how many people were within each frame. However, as Jingqin Gao, Senior Research Associate at NYU, explained, the reason they pursued an object detection method rather than mere enumeration is because the public feed is not continuous, with gaps lasting several seconds throughout the feed.
“Instead of trying to very accurately count pedestrians crossing a line, we are trying to understand pedestrian density in urban environments, especially for those places that are typically crowded, like bus stops and crosswalks,” said Gao. “We wanted to know whether they were changing their behavior amid the pandemic.”
Gao explained that the aim was to determine the pedestrian density and pedestrian social distancing patterns at scale and see how those patterns have changed since pre-COVID conditions, instead of tracking individual pedestrians.
“For instance, we wanted to know if there was a change from pre-COVID when people were going out in the early morning for commuting purposes versus during the lockdown when they might be going out later in the afternoon,” she added. “By exploring these different trends, we were trying to better understand if there are new patterns during and after the lockdown.”
In general, these kinds of short count studies in traffic engineering only cover a few hours over several days, according to Ozbay. In those studies, people go out and collect data, and then they process it manually, even sometimes having to count cars by hand, for example. But this method would be impossible at the scale of C2SMART’s work, Ozbay explained; in order to cover the hundreds of locations with 24-hour coverage over many months, the job has to be performed by an artificial intelligence (AI) algorithm instead of human or conventional traffic counters.
There are complications that the AI has to overcome from each video feed: the locations are different, the camera angles and height are different, and they are subject to different lighting and positional factors. “It’s not like the AI can learn just one intersection and automatically apply it to another one. It needs to learn each intersection individually,” added Ozbay.
To enable this AI solution, the C2SMART researchers started with an object detection model, namely You Only Look Once (YOLO), which is pre-trained using Microsoft’s COCO data set. Gao explained that they also retrained and localized this object detection model with additional images and various customized post-processing filters to compensate for the low-resolution image produced by New York City DOT video feeds.
While the off-the-shelf object detection model could work in this instance with some customization, when it came to measuring the distances between the objects, the NYU researchers had to develop a novel algorithm, which they refer to as a reference-free distance approximation algorithm.
“If you’re measuring something from an image, you may need some reference point,” said Gao. “Historically, researchers might need to actually go to the site and measure the distance. But with our methodology, we can use the pixel size on the image of the person and the real height of that person to determine distance.”
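The geometry Gao describes is essentially the pinhole-camera relation: distance scales as focal length times real height divided by pixel height. The numbers below are hypothetical (the article gives no camera parameters), so this illustrates the idea rather than the team’s actual algorithm:

```python
def approx_distance_m(real_height_m, pixel_height_px, focal_length_px):
    """Pinhole-camera estimate: distance = f * H / h."""
    return focal_length_px * real_height_m / pixel_height_px

# A 1.7 m person appearing 100 px tall through a lens with an
# effective focal length of 1,000 px is roughly 17 m from the camera.
distance = approx_distance_m(1.7, 100, 1000)
```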
While this project was inspired by the COVID-19 pandemic, the fast-moving nature of the disease prevented these findings from significantly shaping New York City’s COVID policies. However, the project has produced a COVID-19 Data Dashboard, and a video of how the dashboard was developed and operates is provided below.
Ozbay explained that the project demonstrated to several city agencies that they were sitting on very valuable actionable data that could be used for many different purposes.
“City agencies have approached us on several projects that are related to this one, but in a different context,” said Ozbay. “Now we are working with New York’s Department of Design and Construction (DDC) and the DOT to use the same kind of approach to analyze traffic around work zones and other key facilities such as intersections and on-street parking without them needing to actually go out to those locations.”
Ozbay notes that this initial project for COVID-19 has opened up possibilities for this kind of AI algorithm analysis of video feed data to be applied to a wide range of projects to provide critical understanding in a more efficient way.
Ozbay believes that much of the process NYU has developed can be handled internally by IT experts within a city agency’s own organization; they should, for example, be able to handle acquiring and storing the data. But on the AI side, Ozbay believes agencies will likely need to lean on experts in academia or industry, since the underlying techniques evolve on a nearly monthly basis.
“This solution will never become like Microsoft Word,” said Ozbay. “It will always require some improvements and some changes and tweaking for the foreseeable future.”
Gao, who used to work for the DOT before taking on her current role, added that there’s always a steady stream of commercial entities offering the DOT their product suites. “These commercial solutions frequently recommend buying and installing new cameras,” she said. “What we have demonstrated here is that we can provide a solution based on current infrastructure.”
Based on his experience working with other cities and states throughout his career, Ozbay mentioned that most cities throughout the United States employ similar kinds of traffic camera systems used in New York City.
“This method allows for cities throughout the country to provide a dual or triple usage of their existing infrastructure,” said Ozbay. “There are a lot of opportunities to do this at a large scale for extended periods with little to no infrastructure cost.”
Ozbay hopes the success of the technology will lead to other DOTs across the country learning of the technology and taking an interest in adopting it themselves. “If you can make it in New York, you can make it anywhere,” he quipped. “We’ll be happy to share with them our code and anything that may be of value to them from our experience.”
While the final product of this research may change the way traffic information is collected and used, it has also served as an important training tool for NYU students—not just postdoctoral researchers, but two undergraduate students at NYU’s Tandon School of Engineering as well.
“Our aim as an engineering school is not just to write papers, but to develop products that can be commercialized, and also to train the next generation of engineers on real projects where they can see how engineering contributes to and can help improve society,” said Ozbay.
Gao and Ozbay added that the two undergraduate students who worked on this project for two years are going on to graduate school to study along the lines of this project. “These students come to us without much knowledge, they become exposed to different research, and we let them pick what they are interested in. We train them very slowly,” said Ozbay. “If they remain interested, they eventually become part of our research team.”
In future research, Ozbay envisions their work moving from just object recognition to building trajectories from these video feeds. If they are successful in this goal, Ozbay believes it has huge implications for applications like real-time traffic safety, an emerging area of research in which C2SMART is a major player.
He added: “With trajectory building we can see the movement of vehicles in relation to each other as well as to pedestrians. This will not only help us identify risks in real-time but also establish and implement measures to mitigate those risks using advanced versions of methods we have already developed in the past.”
This research utilizes real-time traffic camera feeds that the New York City Department of Transportation (DOT) streams publicly at https://nyctmc.org/. Additional offline video data was provided by New York City DOT and the New York City Department of Design and Construction (DDC) under the Memorandum of Understanding (“MOU”) between the City of New York, acting by and through DDC and DOT, and C2SMART, a center within New York University.
The longest journey begins with a single step, and that step gets expensive when you’re in the space business. Take, for example, the Electron booster made by
Rocket Lab, a company with two launch pads on the New Zealand coast and another awaiting use in Virginia. Earth’s gravity is so stubborn that, by necessity, two-thirds of the rocket is its first stage—and it has historically ended up as trash on the ocean floor after less than 3 minutes of flight.
Making those boosters reusable—saving them from a saltwater grave, and therefore saving a lot of money—has been a goal of aerospace engineers since the early space age. Elon Musk’s
SpaceX has famously been landing its Falcon 9 boosters on drone ships off the Florida coast—mind-bending to watch but very hard to pull off.
Rocket Lab says it has another way. Its next flight will carry 34 commercial satellites—and instead of being dropped in the Pacific, the spent first stage will be snared in midair by a helicopter as it descends by parachute. It will then be brought back to base, seared by the heat of reentry but inwardly intact, for possible refurbishment and reuse. The team, determined to minimize its odds of dropping the ball, so to speak, has pushed back the launch several times to wait out inclement weather. They reason that because this isn’t a game of horseshoes, close is not good enough.
“It’s a very complex thing to do,” says
Morgan Bailey of Rocket Lab. “You have to position the helicopter in exactly the right spot, you have to know exactly where the stage is going to be coming down, you have to be able to slow it enough,” she says. “We’ve practiced and practiced all of the individual puzzle pieces, and now it’s putting them together. It’s not a foregone conclusion that the first capture attempt will be a success.”
Still, people in the space business will be watching, since Rocket Lab has established a niche for itself as a viable space company. This will be its 26th Electron launch. The company says it has launched 112 satellites so far, many of them
so-called smallsats that are relatively inexpensive to fly. “Right now, there are two companies taking payloads to orbit: SpaceX and Rocket Lab,” says Chad Anderson, CEO of Space Capital, a firm that funds space startups.
Here's the flight profile. The Electron is 18 meters tall; the bottom 12 meters are the first stage. For this mission it will lift off from New Zealand on its way to a sun-synchronous orbit 520 kilometers high. The first stage burns out after the first 70 km. Two minutes and 32 seconds into the flight, it drops off, following a long arc that in the past would have sent it crashing into the ocean, about 280 km downrange.
But Rocket Lab has now equipped its booster with heat shielding, protecting it as it falls tail-first at up to 8,300 kilometers per hour. Temperatures should reach 2,400 °C as the booster is slowed by the air around it.
At an altitude of 13 km, a small
drogue parachute is deployed from the top end of the rocket stage, followed by a main chute at about 6 km, less than a minute later. The parachute slows the rocket substantially, so that it is soon descending at only about 36 km/h.
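The figures above allow a rough back-of-the-envelope check, treating “less than a minute” as a 60-second upper limit on the drogue window (these are approximations derived from the article, not Rocket Lab’s own numbers):

```python
# Average descent rate under the drogue chute, falling from 13 km to
# 6 km in at most 60 s.
drogue_alt_km = 13
main_alt_km = 6
window_h = 60 / 3600  # 60 s expressed in hours

avg_drogue_descent_kmh = (drogue_alt_km - main_alt_km) / window_h  # ~420 km/h
final_descent_kmh = 36

# The main chute therefore cuts the descent rate by more than 10x.
slowdown_factor = avg_drogue_descent_kmh / final_descent_kmh
```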
An artist’s conception shows the helicopter after catching the spent Electron rocket’s first stage in midair. Rocket Lab
But even that would make for a hard splashdown—which is why a Sikorsky S-92 helicopter hovers over the landing zone, trailing a grappling hook on a long cable. The plan is for the helicopter to fly over the descending rocket and snag the parachute cables. The rocket never gets wet; the chopper secures it and either lowers it onto a ship or carries it back to land. Meanwhile—let’s not lose sight of the prime mission—the second stage of the rocket should reach orbit about 10 minutes after launch.
“You have to keep the booster out of the water,” says Anderson. “If they can do that, it’s a big deal.” Many space people will recall NASA’s
solid rocket boosters, which helped launch the space shuttles and then parachuted into the Atlantic; towing them back to port and cleaning them up for reuse was slow and expensive. NASA’s giant SLS rocket uses the same boosters, but there are no plans to recover them.
So midair recovery is far better, though it’s not new. As long ago as 1960, the U.S. Air Force snagged a returning capsule from a mission called
Discoverer 14. But that had nothing to do with economy; the Discoverers were actually Corona reconnaissance satellites, and they were sending back film of the Soviet Union—priceless for Cold War intelligence.
Rocket Lab tries to sound more playful about its missions: It gives them names like “A Data With Destiny” or “Without Mission a Beat.” This newest flight, with its booster-recovery attempt, is called “There and Back Again.”
One fan tweeted to CEO Peter Beck: “It would have been cool if the mission was called ‘Catch Me If You Can.’”
“Oh…that’s good!” Beck
replied. “Congratulations, you have just named the very next recovery mission.”
Update 22 April 2022: In a tweet, Rocket Lab announced that due to weather, the planned launch and recovery would be rescheduled for 27 April at the earliest.
This article appears in the July 2022 print issue as “Rocket Lab Catches Rocket Booster in Midair.”
It is the fate of many a dead satellite to spend its last years tumbling out of control. A fuel line may burst, or solar wind may surge, or there may be drag from the outer reaches of the atmosphere—and unless a spacecraft has been designed in some way that keeps it naturally stable, chances are good that it will begin to turn end over end.
That’s a problem, because Earth orbit is getting more and more crowded. Engineers would like to corral old pieces of space junk, but they can’t safely reach them, especially if they’re unstable. The European Space Agency says there are about 30,000 “debris objects” now being tracked in Earth orbit—derelict satellites, spent rocket stages, pieces sent flying from collisions in space. There may also be 900,000 smaller bits of orbital debris—everything from loose bolts to flecks of paint to shards of insulation. They may be less than 10 centimeters long, but they can still destroy a healthy satellite if they hit at orbital speeds.
“With more satellites being launched, we might encounter more situations where we have a defunct satellite that’s occupying a valuable orbit,” says Richard Linares, an assistant professor of aeronautics and astronautics at MIT. He’s part of an American-German project, called TumbleDock/ROAM, researching ways to corral and stabilize tumbling satellites so they can be deorbited or, in some cases, perhaps even refueled or repaired.
Engineers have put up with orbital debris for decades, but Linares says the picture is changing. For one thing, satellite technology is becoming more and more affordable—just look at SpaceX, which has been launching 40 satellites a week so far this year. For another, he says, the economic benefits those satellites offer—high-speed internet, GPS, climate and crop monitoring and other applications—will be threatened if the risk of impacts keeps growing.
“I think in the next few years we’ll have the technology to do something about space debris,” says Linares. “And there are economic drivers that will incentivize companies to do this.”
The TumbleDock/ROAM team has just finished a series of tests in the cabin of the International Space Station, using NASA robots called Astrobees to stand in for a tumbling satellite and a “chaser” spacecraft sent to catch it. The goal: to figure out algorithms so that a chaser can find its target, determine its tumble rates, and calculate the safest and most efficient approach to it.
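A toy version of one piece of that problem — estimating a tumble rate from sampled attitude measurements — might look like the sketch below, assuming a single rotation axis and evenly spaced samples. The real TumbleDock/ROAM estimator is far more sophisticated; this only illustrates the basic idea:

```python
import math

def mean_tumble_rate(angles_rad, dt_s):
    """Estimate mean angular velocity (rad/s) from sampled attitude
    angles about one axis, unwrapping across the +/-pi boundary."""
    rates = []
    for a0, a1 in zip(angles_rad, angles_rad[1:]):
        # shortest signed angular step between consecutive samples
        step = (a1 - a0 + math.pi) % (2 * math.pi) - math.pi
        rates.append(step / dt_s)
    return sum(rates) / len(rates)

# A target rotating 0.1 rad per 0.5 s sample tumbles at 0.2 rad/s.
rate = mean_tumble_rate([0.0, 0.1, 0.2, 0.3], 0.5)
```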
Astrobee robot experiment aboard the ISS to reach a tumbling target in space.
“There’s a massive amount of large debris out there,” says Keenan Albee, a Ph.D. student on the team at MIT. “Look at some of them, with large solar panels that are ready to whack you if you don’t do the approach correctly.”
The researchers decided early on that a chase vehicle needs enough autonomy to close in on a disabled satellite on its own. Even the largest satellites are too distant for ground-based tracking stations to track their attitude with any precision. A chaser, perhaps equipped with navigation cameras, lidar, and other sensors, will need to do the job in real time.
“The tumbling motion of a satellite can be quite complex,” says Roberto Lampariello, the principal investigator on the project at the German Aerospace Center, or DLR. “And if you want to be sure you are not going to collide with any appendages while approaching the mating point, having an autonomous method of guidance is, I think, very attractive.”
The Astrobee tests on the space station showed that it can be done, at least in principle. Each Astrobee robot is a cube, about 30 centimeters on a side, with navigation cameras, compressed-air thrusters, and Snapdragon processors much like what you would find in a smartphone. For the latest test, last month, NASA astronaut Mark Vande Hei set up two Astrobees a couple of meters apart. They then took their commands from Albee on the ground. He started the test runs, with one robot tumbling and the other trying to rendezvous with it. There have been glitches; the Astrobees needed help determining their precise location relative to the station walls. But the results of the tests were promising.
A next step, say the researchers, is to determine how best for a chase spacecraft to grapple its target, which is especially difficult if it’s a piece of debris with no docking mechanism. Other plans over the years have involved big nets or lasers; TumbleDock/ROAM team members say they’re intrigued by grippers that use van der Waals forces between atoms, the kinds that help a gecko cling to a sheer surface.
The larger question is how to turn experiments like these into actual solutions to a growing, if lofty, problem. Low Earth orbit has been crowded enough, for long enough, that satellite makers add shielding to their vehicles and space agencies continuously scan the skies to prevent close calls. No space travelers have been killed, and there have only been a few cases in which satellites were actually pulverized. But the problem has become increasingly expensive and, in some cases, dangerous. SpaceX has launched 2,000 Starlink Internet satellites so far, may launch 30,000 more, and has other companies (like Amazon) racing to keep up. They see profits up there.
MIT’s Linares says that, in fact, is why it’s worth figuring out the space-junk problem. “There’s a reason why those orbits are valuable,” he says. Companies may spend billions to launch new satellites—and don’t want them threatened by old satellites.
“If your company’s benefiting from an orbit band,” he says, “then you’d probably better get someone to clean it up for you.”
Adam Grosser wants to improve transportation of people, goods, and energy. That’s why the chairman and managing partner of the early-stage venture capital fund UP Partners is investing in several mobility projects. They include Beta Technologies’ electric vertical-takeoff-and-landing aircraft, Quincus’s operating system for supply-chain and logistics providers, and Teleo’s teleoperation platform for mining, construction, and other heavy equipment.
“Transportation is the underlying fabric of society,” Grosser says. “At UP, we invest in key enabling technologies that help move people and goods faster, safer, more efficiently, and sustainably. This can include anything from new kinds of ground, seaborne, air, or space vehicles to production lines, packages, and units of automation.”
The mobility sector is ripe for improvement, he says. “Arguably just about everything on a car, except for a few safety systems, was invented by 1920—although not necessarily put into widespread practice. But mobility hasn’t previously been an investable category.”
He credits several factors for this. Faster, smaller, and cheaper additive manufacturing is now available. Also, rapid shifts in battery capacity and electric motor torque have dramatically changed how mobility vehicles and methods are built and work.
The question, he says, is how to pull all these together into something that is safe and more environmentally friendly than what we do today—and which has a viable financial model.
One company that ticks all these boxes for Grosser is Kolors, in Acapulco, Mexico, which aims to transform the bus industry across Latin America by offering a website for riders to reserve a seat on an intercity bus. The company does not own the buses. Instead, it partners with small and medium-size bus lines that own their vehicles to provide a consistent customer experience and offer a single ticketing framework. He describes Kolors as an “asset-light bus company” and compares it to Uber, the ride-hailing service, which also doesn’t own the vehicles customers use.
“Most people who go into tech investing today do that with a fairly clear intention of being an investor,” Grosser says. “[By contrast], I would consider myself an inadvertent investor. I’ve spent decades working to solve meaningful challenges, first as an engineer, then as an entrepreneur, and for the past 21 years as an investor.”
What helped him succeed at his investing goals? Mentors.
“I have been lucky enough to have amazing mentors and partners, from my college days through to the present," Grosser says. One was the late Kathryn Gould, who founded Foundation Capital. In addition to being one of the first female venture capitalists, she was also a physicist and concert violinist.
“She pulled me in and said, ‘I think you will be a good investor. Let me teach you.’”
Grosser also credits his diverse engineering experiences with helping him talk knowledgeably about potential new technologies and companies to invest in as well as conduct due diligence.
Grosser uses some of what he learned through his hands-on experience with a variety of vehicles. He has built airplanes, boats, hydrofoils, and motorcycles. He even built a cycle-by-wire electric trike for a close friend who had been an avid cyclist but was essentially paralyzed by ALS. He also builds and restores classic cars and vintage military aircraft. He recently finished building a 1963 Porsche 356 B for his daughter. He also is converting the drivetrain of a 1974 Jaguar E-Type to an electric one.
Grosser has been a pilot for more than 40 years and has flown everything from gliders and biplanes to helicopters and seaplanes. He currently holds an Airline Transport Pilot rating certificate, which is the highest achievement of pilot certification.
His advice for would-be investors is to use the knowledge they already have.
“Courses like robotics or thermodynamics that may not have been part of your major can be integrated with whatever you’ve learned and done in software, hardware, and product design,” he says. “Any of this knowledge and experience can help you establish rapport and make more-informed selections.”
This article appears in the July 2022 print issue as “Adam Grosser.”
In the popular conception of a technological breakthrough, a flash of genius is followed quickly by commercial or industrial success, public acclaim, and substantial wealth for a small group of inventors and backers. In the real world, it almost never works out that way.
Advances that seem to appear suddenly are often backed by decades of development. Consider steam engines. Starting in the second quarter of the 19th century they began powering trains, and they soon revolutionized the transportation of people and goods. But steam engines themselves had been invented at the beginning of the 18th century. For 125 years they had been used to pump water out of mines and then to power the mills of the Industrial Revolution.
Lately we’ve become accustomed to seeing rocket boosters return to Earth and then land vertically, on their tails, ready to be serviced and flown again. (Much the same majestic imagery thrilled sci-fi moviegoers in the 1950s.) Today, both SpaceX and Blue Origin are using these techniques, and a third startup, Relativity Space, is on the verge of joining them. Such reusable rocketry is already cutting the cost of access to space and, with other advances yet to come, will help make it possible for humanity to return to the moon and eventually to travel to Mars.
Vertical landings, too, have a long history, with the same ground being plowed many times by multiple research organizations. From 1993 to 1996 a booster named DCX, for Delta Clipper Experimental, took off and landed vertically eight times at White Sands Missile Range. It flew to a height of only 2,500 meters, but it successfully negotiated the very tricky dynamics of landing a vertical cylinder on its end.
The key innovations that made all this possible happened 50 or more years ago. And those in turn built upon the invention a century ago of liquid-fueled rockets that can be throttled up or down by pumping more or less fuel into a combustion chamber.
In August 1954 the Rolls-Royce Thrust Measuring Rig, also known as the “flying bedstead,” took off and landed vertically while carrying a pilot. The ungainly contraption had two downward-pointing Rolls-Royce jet engines with nozzles that allowed the pilot to vector the thrust and control the flight. By 1957 another company, Hawker Siddeley, started work on turning this idea into a vertical take-off and landing (VTOL) fighter jet. It first flew in 1967 and entered service in 1969 as the Harrier Jump Jet, with new Rolls-Royce engines specifically designed for thrust vectoring. Thrust vectoring is a critical component of control for all of today’s reusable rocket boosters.
During the 1960s another rig, also nicknamed the flying bedstead, was developed in the United States for training astronauts to land on the moon. There was a gimbaled rocket engine that always pointed directly downward, providing thrust equal to five-sixths of the vehicle and the pilot’s weight, simulating lunar gravity. The pilot then controlled the thrust and direction of another rocket engine to land the vehicle safely.
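The five-sixths figure follows directly from the arithmetic: cancel five-sixths of Earth’s pull, and the remaining one-sixth g is lunar gravity. A quick check (the 1,500 kg trainer mass is hypothetical):

```python
G_EARTH = 9.81          # m/s^2
G_MOON = G_EARTH / 6    # lunar gravity is roughly one-sixth of Earth's

def lift_engine_thrust_n(mass_kg):
    """Thrust that supports five-sixths of the vehicle's weight."""
    return (5 / 6) * mass_kg * G_EARTH

mass = 1500  # kg, hypothetical trainer mass
# Net downward acceleration with the lift engine firing: g - (5/6)g = g/6
net_accel = G_EARTH - lift_engine_thrust_n(mass) / mass
```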
It was not all smooth flying. Neil Armstrong first flew the trainer in March 1967, but he was nearly killed in May 1968 when things went awry and he had to use the ejection seat to rocket to safety. The parachute deployed and he hit the ground just 4 seconds later. Rocket-powered vertical descent was harder than it looked.
Nevertheless, between 1969 and 1972, Armstrong and then five other astronauts piloted lunar modules to vertical landings on the moon. There were no ejection seats, and these have been the only crewed rocket-powered landings on a spaceflight. All other humans lofted into space have used Earth’s atmosphere to slow down, combining heat shields with either wings or parachutes.
In the early days of Blue Origin, the company returned to the flying-bedstead approach, and its vehicle took off and landed successfully in March 2005. It was powered by four jet engines, once again from Rolls-Royce, bought secondhand from the South African Air Force. Ten years later, in November 2015, Blue Origin’s New Shepard booster reached an altitude of 100 kilometers and then landed vertically. A month later SpaceX had its first successful vertical landing of a Falcon-9 booster.
Today’s reusable, or flyback, boosters also use something called grid fins, those honeycombed panels sticking out perpendicularly from the top of a booster that guide the massive cylinder as it falls through the atmosphere unpowered. The fins have an even longer history, as they have been part of every crewed Soyuz launch since the 1960s. They guide the capsule back to Earth if there’s an abort during the climb to orbit. They were last used in October 2018 when a Soyuz failed at 50 km up. The cosmonaut and astronaut who were aboard landed safely and had a successful launch in another Soyuz five months later.
The next big accomplishment will be crewed vertical landings, 50 years after mankind's last one, on the moon. It will almost certainly happen before this decade is out.
I’m less confident that we’ll see general-purpose quantum computers and abundant electricity from nuclear fusion in that time frame. But I’m pretty sure we’ll eventually get there with both. The arc of technology development is often long. And sometimes, the longer it is, the more revolutionary it is in the end.
This article appears in the April 2022 print issue as “The Long Road to Overnight Success .”
The first commercial all-electric passenger plane is just weeks away from its maiden flight, according to its maker, Israeli startup Eviation. If successful, the nine-seater Alice aircraft would be the most compelling demonstration yet of the potential for battery-powered flight. But experts say there’s still a long way to go before electric aircraft make a significant dent in the aviation industry.
The Alice is currently undergoing high-speed taxi tests at Arlington Municipal Airport close to Seattle, says Eviation CEO Omer Bar-Yohay. This involves subjecting all of the plane’s key systems and fail-safe mechanisms to a variety of different scenarios to ensure they are operating as expected before its first flight. The company is five or six good weather days away from completing those tests, says Bar-Yohay, after which the plane should be cleared for takeoff. Initial flights won’t push the aircraft to its limits, but the Alice should ultimately be capable of cruising speeds of 250 knots (463 kilometers per hour) and a maximum range of 440 nautical miles (815 kilometers).
Electric aviation has received considerable attention in recent years as the industry looks to reduce its carbon emissions. And while the Alice won’t be the first all-electric aircraft to take to the skies, Bar-Yohay says it will be the first designed with practical commercial applications in mind. Eviation plans to offer three configurations—a nine-seater commuter model, a six-seater executive model for private jet customers, and a cargo version with a capacity of 12.74 cubic meters. The company has already received advance orders from logistics giant DHL and Massachusetts-based regional airline Cape Air.
“It’s not some sort of proof-of-concept or demonstrator,” says Bar-Yohay. “It’s the first all-electric with a real-life mission, and I think that’s the big differentiator.”
Getting there has required a major engineering effort, says Bar-Yohay, because the requirements for an all-electric plane are very different from those of conventional aircraft. The biggest challenge is weight, thanks to the fact that batteries provide considerably less mileage to the pound compared to energy-dense jet fuels.
That makes slashing the weight of other components a priority, and the plane features lightweight composite materials “where no composite has gone before,” says Bar-Yohay. The company has also done away with the bulky mechanical systems used to adjust control surfaces on the wings, and replaced them with a much lighter fly-by-wire system that uses electronic actuators controlled via electrical wires.
The company’s engineers have had to deal with a host of other complications too, from having to optimize the aerodynamics to the unique volume and weight requirements dictated by the batteries to integrating brakes designed for much heavier planes. “There is just so much optimization, so many specific things that had to be solved,” says Bar-Yohay. “In some cases, there are just no components out there that do what you need done, which weren’t built for a train, or something like that.”
Despite the huge amount of work that’s gone into it, Bar-Yohay says the Alice will be comparable in price to similar sized turboprop aircraft like the Beechcraft King Air and cheaper than small business jets like the Embraer Phenom 300. And crucially, he adds, the relative simplicity of electrical motors and actuators compared with mechanical control systems and turboprops or jets means maintenance costs will be markedly lower.
This is a conceptual rendering of Eviation's Alice, the first commercial all-electric passenger plane, in flight. Eviation
Combined with the lower cost of electricity compared to jet fuel, and even accounting for the need to replace batteries every 3,000 flight hours, Eviation expects Alice’s operating costs to be about half those of similar sized aircraft.
But there are question marks over whether the plane has an obvious market, says aviation analyst Richard Aboulafia, managing director at AeroDynamic Advisory. It’s been decades since anyone has built a regional commuter with less than 70 seats, he says, and most business jets typically require more than the 440 nautical mile range the Alice offers. Scaling up to bigger aircraft or larger ranges is also largely out of the company’s hands as it will require substantial breakthroughs in battery technology. “You need to move on to a different battery chemistry,” he says. “There isn’t even a 10-year road map to get there.”
An aircraft like the Alice isn’t meant to be a straight swap for today’s short-haul aircraft though, says Lynette Dray, a research fellow at University College London who studies the decarbonization of aviation. More likely it would be used for short intercity hops or for creating entirely new route networks better suited to its capabilities.
This is exactly what Bar-Yohay envisages, with the Alice’s reduced operating costs opening up new short-haul routes that were previously impractical or uneconomical. It could even make it feasible to replace larger jets with several smaller ones, he says, allowing you to provide more granular regional travel by making use of the thousands of runways around the country currently used only for recreational aviation.
The economics are far from certain though, says Dray, and if the ultimate goal is to decarbonize the aviation sector, it’s important to remember that aircraft are long-lived assets. In that respect, sustainable aviation fuels that can be used by existing aircraft are probably a more promising avenue.
Even if the Alice’s maiden flight goes well, it still faces a long path to commercialization, says Kiruba Haran, a professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign. Aviation’s stringent safety requirements mean the company must show it can fly the aircraft for a long period, over and over again without incident, which has yet to be done with an all-electric plane at this scale.
Nonetheless, if the maiden flight goes according to plan it will be a major milestone for electric aviation, says Haran. “It’s exciting, right?” he says. “Anytime we do something more than, or further than, or better than, that’s always good for the industry.”
And while battery-powered electric aircraft may have little chance of disrupting the bulk of commercial aviation in the near-term, Haran says hybrid schemes that use a combination of batteries and conventional fuels (or even hydrogen) to power electric engines could have more immediate impact. The successful deployment of the Alice could go a long way to proving the capabilities of electric propulsion and building momentum behind the technology, says Haran.
“There are still a lot of skeptics out there,” he says. “This kind of flight demo will hopefully help bring those people along.”
Since November 2000, there have always been a select few people living apart from the rest of us—the astronauts and cosmonauts on the International Space Station. On any list of humanity’s greatest engineering achievements, the ISS almost always ranks near the top. It is as long as a football field, as spacious as a jumbo jet. It has made more than 123,000 orbits in 21 years.
The International Space Station—originally expected to be completed by 1994 for $8 billion—was completed in 2011 for more than $100 billion. Its mission may be extended till 2030.
The station is slowly leaking air, presumably from stress cracks in its hull that crew members have struggled to locate, and it’s getting increasingly expensive to maintain as parts wear out. NASA is officially supposed to retire the ISS by 2024, though it says that can probably be pushed back to 2030. Meanwhile, it’s trying to get the next station into orbit as quickly as possible.
So what might a new station look like? And who will lead the charge?
NASA has now awarded US $415.6 million to three aerospace consortiums to develop a new “commercial low Earth orbit destination.” One team is led by Jeff Bezos’ Blue Origin, another by the satellite company Nanoracks, and the third by the aerospace giant Northrop Grumman. NASA’s plan, for some years, has been for private industry to take over the routine work of space operations, with NASA as “one of many paying customers.” If it works—as it has in the case of SpaceX ferrying astronauts to and from the ISS—NASA says it will save billions of dollars and free the agency to go explore the cosmos.
Northrop Grumman’s Free Flyer
The Dulles, Va.-based Northrop Grumman has proposed a space station concept that features elements in development for other projects. Note the SpaceX Dragon spacecraft docked at bottom center.
Starlab, from Nanoracks, Voyager Space, and Lockheed Martin, is a free-flying commercial space station concept. The largest module is inflatable.
Nanoracks/Lockheed Martin/Voyager Space
Orbital Reef, First Stages
Illustration of the first components of Blue Origin and Sierra Nevada’s proposed Orbital Reef space station in orbit.
Blue Origin/Sierra Nevada
Artist’s conception of the SpaceX Starship on the surface of the moon. SpaceX has said the same ship can be used for many purposes in Earth orbit and deep space.
“We don’t see, coming out of this, one winner,” says Jeffrey Manber, Nanoracks’ chair and co-founder. “We see, by the end of this decade, multiple, privately owned space stations.”
There is a fourth contender, by the way: Axiom Space, a startup company, signed a contract in 2020 to build at least one new module for the ISS by mid-decade. When the time comes, Axiom says, it would detach its components and reconfigure them with other parts for a new station.
“If the ISS is extended to 2030, this will ensure that we don’t have a gap in our access to low Earth orbit,” said Philip McAlister, NASA’s director of commercial spaceflight.
But the race to avoid that gap may have already been lost—and if that warning seems ominous, it also comes from the space agency’s own in-house watchdog, the Office of Inspector General, or OIG.
“In our judgment, even if early design maturation is achieved in 2025—a challenging prospect in itself—a commercial platform is not likely to be ready until well after 2030,” the OIG said in an audit report. It said that without a working station, “the nascent low Earth orbit commercial space economy would likely collapse, causing cascading impacts to commercial space transportation capabilities, in-space manufacturing, and microgravity research.”
Remember, says the OIG, that the U.S. went nine years between the retirement of the space shuttles and the first SpaceX crew launch. And President Ronald Reagan originally proposed a space station by 1994 for $8 billion, but the ISS wasn’t finished until 2011 for more than $100 billion.
What’s more, new stations “might be obsolete before they’re even launched,” says Chad Anderson, a venture capitalist whose firm, Space Capital, has funded companies including Nanoracks. He wonders if SpaceX, which has been silent about its plans, might swoop in with its giant, reusable multipurpose Starship—launching laboratories and space factories, carrying tourists, doing almost everything a permanent space station could at a cost other contenders cannot match. Starship has already been picked as NASA’s next moon lander.
“They’re going to pull this off,” says Anderson. “The unit cost is just the cost of fuel. What is that? $50 million—that’s the same price people are paying per seat to go to the International Space Station right now.”
The competitors for NASA’s support say they can meet the challenge. They’d go in the opposite direction from SpaceX; their approach is not to get too ambitious.
Conceptually, their designs borrow heavily from the ISS, with cylindrical modules and photovoltaic panels docked together. Most of these components already exist or are well along in development. Some sections are visibly larger and more bulbous because they’d be folded up for launch and inflated in orbit. That’s an excellent way to fit more cabin space in the payload fairing of a rocket—but even that idea is decades old. An inflatable storage compartment, called BEAM, has been docked to the ISS since 2016.
“There’s nothing that’s going to prevent the technology from being there for the hardware that flies,” says Doug Cooke, a former associate administrator at NASA who was heavily involved in planning the ISS. “We have the history and the experience.”
The competitors say they can head off delays and deliver on price. “It’s much better than an order of magnitude less than what it cost to develop the International Space Station,” says Brent Sherwood of Blue Origin. “Most if not all of the challenges that need to be worked…have already been solved by the International Space Station program.”
The race goes to the swift, and to the inexpensive. What will win out? The proven approach? Or the revolutionary?
Delivering things by drone began as a stunt in 2012, when a model airplane dropped a burrito by parachute to a hungry customer waiting below. The concept then graduated, first to a proof-of-principle venture in Iceland using multicopters, then to a well-funded Amazon project in the United Kingdom. But these and similar attempts to solve the last-mile problem—the mile leading to the customer—have largely been disappointing. Amazon recently scaled back its drone-based delivery project in the U.K.
In 2022, Dronamics, a company based in London and Sofia, Bulgaria, will test-fly a drone in Europe that will carry far more than a mere burrito and over far longer distances. It addresses the less sexy but equally important middle-distance problem—the route that connects factories to warehouses. The point is to take a slice of business that’s now handled by regular air freight and by trucks—above all, the quick delivery of critical parts. If this service had been available a year or two ago, it might not have prevented the logistics logjam that now plagues the world, but it would have cleared away some of the more problematic bottlenecks.
Dronamics will run trials with its partners, including DHL and Hellmann Worldwide Logistics, in the hope of eventually fielding thousands of drones, each carrying as much as 350 kilograms of cargo up to 2,500 kilometers. The European Union has facilitated this sort of experimentation by instituting a single certification policy for drone aircraft. Once its aircraft are certified, Dronamics must get a route approved through one of the E.U.’s member countries; that done, it should be fairly easy to get other member countries to agree as well.
In October, Dronamics announced that it would use Malta as its base, with a view to connecting first to Italy and later to other Mediterranean countries.
One thing Dronamics doesn’t do is full-scale autonomy: Its planes do not detect and avoid obstacles. Instead, each flight is programmed in advance, in a purely deterministic way. Flights often take place in controlled airspace and always between drone ports that the company controls. Someone on the ground monitors the flight from afar, and if something unexpected arises, that person can redirect the plane.
“We operate like a proper airline, but we can intervene,” says Svilen Rangelov, the cofounder and CEO of Dronamics. “We’re looking for underserved airports, using time slots where there is no passenger traffic. In the United States there are 17,000 airports, but only about 400 are commercially used. The rest don’t have regular service at all.”
Unlike the multicopter burrito drones of years past, or even Amazon’s prototypes, these machines fly on fixed wings and are powered by internal combustion engines, the better to carry big loads long distances and to operate at off-the-grid airfields. “Anything less than 200 miles [about 320 kilometers] is not appropriate because, given the time to get to the airport, fly, and then pick up, you may as well truck it,” Rangelov says.
The company’s drone is called Black Swan, a phrase often used to describe important but unpredictable events. “That was precisely the reasoning” behind the name, Rangelov says, explaining what makes this drone so unique and rare. “We knew [the drone] had to be cheaper to produce and to operate than any existing models.”
Because this vehicle is intended to transport cargo with no people on board, Dronamics could design the interior to fit cargo pallets. “It’s exactly the right cargo size for this business,” Rangelov says. “It likely will not be carrying one pallet of the same things but multiple packages for many customers.” And Dronamics claims it can carry cargo for half of what today’s air freighters charge.
Hellmann Worldwide Logistics sees a lot of potential for using Dronamics in Africa and other places with limited infrastructure. For now, though, the company is focused on the dense population, manageable distances, and supportive governmental institutions of Europe.
“Especially between north and south Europe—from Germany and Hungary, where there’s a lot of automotive business,” says Jan Kleine-Lasthues, Hellmann’s chief operating officer for air freight. There are also supply lines going into Italy that service the cruise ships on the Mediterranean Sea, he says, and fresh fish would be ideal cargo. Indeed, Dronamics is working on a temperature-controlled container.
What effect would massive fleets of such drones have had on today’s supply-chain problems? “It could help,” he says. “If the container isn’t arriving with production material, we could use drones to keep production alive. But it’s not replacing the big flow—it’s just a more flexible, more agile mode of transport.”
Before cargo drones darken the skies, though, Hellmann wants to see how the rollout goes.
“First of all, we want to try it,” Kleine-Lasthues says. “One use case is replacing commercial air freight—for example, Frankfurt to Barcelona by drone; also, there’s a use case replacing vans. If it is working, I think it can be quickly ramped up. The question is how fast can Dronamics add capacity to the market.”
This article appears in the January 2022 print issue as “Flying Pallets Without Pilots.”
In the puzzle of climate change, Earth’s oceans are an immense and crucial piece. The oceans act as an enormous reservoir of both heat and carbon dioxide, the most abundant greenhouse gas. But gathering accurate and sufficient data about the oceans to feed climate and weather models has been a huge technical challenge.
Over the years, though, a basic picture of ocean heating patterns has emerged. The sun’s infrared, visible-light, and ultraviolet radiation warms the oceans, with the heat absorbed particularly in Earth’s lower latitudes and in the eastern areas of the vast ocean basins. Thanks to wind-driven currents and large-scale patterns of circulation, the heat is generally driven westward and toward the poles, being lost as it escapes to the atmosphere and space.
This heat loss comes mainly from a combination of evaporation and reradiation into space. This oceanic heat movement helps make Earth habitable by smoothing out local and seasonal temperature extremes. But the transport of heat in the oceans and its eventual loss upward are affected by many factors, such as the ability of the currents and wind to mix and churn, driving heat down into the ocean. The upshot is that no model of climate change can be accurate unless it accounts for these complicating processes in a detailed way. And that’s a fiendish challenge, not least because Earth’s five great oceans occupy 140 million square miles, or 71 percent of the planet’s surface.
Providing such detail is the purpose of the Argo program, run by an international consortium involving 30 nations. The group operates a global fleet of some 4,000 undersea robotic craft scattered throughout the world’s oceans. The vessels are called “floats,” though they spend nearly all of their time underwater, diving thousands of meters while making measurements of temperature and salinity. Drifting with ocean currents, the floats surface every 10 days or so to transmit their information to data centers in Brest, France, and Monterey, Calif. The data is then made available to researchers and weather forecasters all over the world.
The Argo system, which produces more than 100,000 salinity and temperature profiles per year, is a huge improvement over traditional methods, which depended on measurements made from ships or with buoys. The remarkable technology of these floats and the systems technology that was created to operate them as a network was recognized this past May with the IEEE Corporate Innovation Award, at the 2022 Vision, Innovation, and Challenges Summit. Now, as Argo unveils an ambitious proposal to increase the number of floats to 4,700 and increase their capabilities, IEEE Spectrum spoke with Susan Wijffels, senior scientist at the Woods Hole Oceanographic Institution on Cape Cod, Mass., and cochair of the Argo steering committee.
Why do we need a vast network like Argo to help us understand how Earth’s climate is changing?
Susan Wijffels: Well, the reason is that the ocean is a key player in Earth’s climate system. So, we know that, for instance, our average climate is really, really dependent on the ocean. But actually, how the climate varies and changes, beyond about a two-to-three-week time scale, is highly controlled by the ocean. And so, in a way, you can think that the future of climate—the future of Earth—is going to be determined partly by what we do, but also by how the ocean responds.
Aren’t satellites already making these kinds of measurements?
Wijffels: The satellite observing system, a wonderful constellation of satellites run by many nations, is very important. But they only measure the very, very top of the ocean. They penetrate a couple of meters at the most. Most are only really seeing what’s happening in the upper few millimeters of the ocean. And yet, the ocean itself is very deep, 5, 6 kilometers deep, around the world. And it’s what’s happening in the deep ocean that is critical, because things are changing in the ocean. It’s getting warmer, but not uniformly warm. There’s a rich structure to that warming, and that all matters for what’s going to happen in the future.
How was this sort of oceanographic data collected historically, before Argo?
Wijffels: Before Argo, the main way we had of getting subsurface information, particularly things like salinity, was to measure it from ships, which you can imagine is quite expensive. These are research vessels that are very expensive to operate, and you need to have teams of scientists aboard. They’re running very sensitive instrumentation. And they would simply prepare a package and lower it down the side into the ocean. And to do a 2,000-meter profile, it would maybe take a couple of hours. To go to the seafloor, it can take 6 hours or so.
The ships really are wonderful. We need them to measure all kinds of things. But to get the global coverage we’re talking about, it’s just prohibitive. In fact, there are not enough research vessels in the world to do this. And so, that’s why we needed to try and exploit robotics to solve this problem.
Pick a typical Argo float and tell us about it—a day in the life, or a week in the life. How deep does it typically go, and how often does it transmit data?
Wijffels: They spend 90 percent of their time at 1,000 meters below the surface of the ocean—an environment where it’s dark and it’s cold. A float will drift there for about nine and a half days. Then it will make itself a little bit smaller in volume, which increases its density relative to the seawater around it. That allows it to sink down to 2,000 meters. Once there, it will halt its downward trajectory and switch on its sensor package. Once it has collected the intended complement of data, it expands, lowering its density. Now lighter than the surrounding water, the float rises back toward the surface, taking a series of measurements in a single column. And then, once it reaches the sea surface, it transmits that profile back to us via a satellite system. And we also get a location for that profile through the global positioning system satellite network. Most Argo floats at sea right now are measuring temperature and salinity at a pretty high accuracy level.
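The duty cycle Wijffels describes can be sketched as a small simulation. This is a toy illustration only: the depths and cycle timing come from the interview, but the temperature function is an invented stand-in, not real float firmware or ocean physics.

```python
# Toy sketch of one Argo duty cycle: park ~9.5 days at 1,000 m, shrink in
# volume to sink to 2,000 m, sample while rising, and transmit at the surface.
# fake_temperature is a made-up stand-in profile, not real ocean data.

def fake_temperature(depth_m):
    """Crude profile: ~20 C at the surface, ~2 C at and below 1,000 m."""
    return 2.0 + 18.0 * max(0.0, 1.0 - depth_m / 1000.0)

def run_cycle(park_depth=1000, profile_depth=2000, step_m=100):
    log = [
        ("park", park_depth, "drift ~9.5 days"),
        ("descend", profile_depth, "shrink volume -> denser than seawater"),
    ]
    # Expand, become buoyant, and sample temperature on the way up.
    profile = [(d, round(fake_temperature(d), 2))
               for d in range(profile_depth, -1, -step_m)]
    log.append(("surface", 0, "transmit ~20-30 kB profile via satellite"))
    return log, profile

log, profile = run_cycle()
print(profile[0], profile[-1])  # deepest sample, then the surface sample
```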
How big is a typical data transmission, and where does it go?
Wijffels: The data is not very big at all. It’s highly compressed. It’s only about 20 or 30 kilobytes, and it goes through the Iridium network now for most of the float array. That data then comes ashore from the satellite system to your national data centers. It gets encoded and checked, and then it gets sent out immediately. It gets logged onto the Internet at a global data assembly center, but it also gets sent immediately to all the operational forecasting centers in the world. So the data is shared freely, within 24 hours, with everyone that wants to get hold of it.
This visualization shows some 3,800 of Argo’s floats scattered across the globe.
Argo Program
You have 4,000 of these floats now spread throughout the world. Is that enough to do what your scientists need to do?
Wijffels: Currently, the 4,000 we have is a legacy of our first design of Argo, which was conceived in 1998. And at that time, our floats couldn’t operate in the sea-ice zones and couldn’t operate very well in enclosed seas. And so, originally, we designed the global array to be 3,000 floats; that was to kind of track what I think of as the slow background changes. These are changes happening across 1,000 kilometers in around three months—sort of the slow manifold of what’s happening to subsurface ocean temperature and salinity.
So, that’s what that design is for. But now, we have successfully piloted floats in the polar oceans and the seasonal sea-ice zones. So we know we can operate them there. And we also know now that there are some special areas like the equatorial oceans where we might need higher densities [of floats]. And so, we have a new design. And for that new design, we need to get about 4,700 operating floats into the water.
But we’re just starting now to really go to governments and ask them to provide the funds to expand the fleet. And part of the new design calls for floats to go deeper. Most of our floats in operation right now go only as deep as about 2,000 meters. But we now can build floats that can withstand the oceans’ rigors down to depths of 6,000 meters. And so, we want to build and sustain an array of about 1,200 deep-profiling floats, with an additional 1,000 of the newly built units capable of tracking the ocean’s biogeochemistry. But this is new. These are big, new missions for the Argo infrastructure that we’re just starting to try and build up. We’ve done a lot of the piloting work; we’ve done a lot of the preparation. But now, we need to find sustained funding to implement that.
A new generation of deep-diving Argo floats can reach a depth of 6,000 meters. A spherical glass housing protects the electronics inside from the enormous pressure at that depth.
MRV Systems/Argo Program
What is the cost of a typical float?
Wijffels: A typical core float, which just measures temperature and salinity and operates to 2,000 meters, costs between US $20,000 and $30,000, depending on the country. But they each last five to seven years. And so, the cost per profile that we get, which is what really matters for us, is very low—particularly compared with other methods [of acquiring the same data].
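Those figures imply a rough cost per profile. Here is the back-of-the-envelope arithmetic, assuming one profile every 10 days as described earlier in the interview; the resulting dollar range is our own calculation, not a figure from the Argo program.

```python
# Cost per profile for a float: purchase price spread over the number of
# 10-day cycles in its lifetime (price and lifetime from the interview).
def cost_per_profile(float_cost_usd, lifetime_years, cycle_days=10):
    profiles = lifetime_years * 365 / cycle_days
    return float_cost_usd / profiles

best = cost_per_profile(20_000, 7)   # cheap float, 7-year life
worst = cost_per_profile(30_000, 5)  # pricey float, 5-year life
print(f"roughly ${best:.0f} to ${worst:.0f} per profile")
```

Even the pessimistic end is far below the cost of a ship-based profile, which requires a crewed research vessel on station for hours.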
What kind of insights can we get from tracking heat and salinity and how they’re changing across Earth’s oceans?
Wijffels: There are so many things I could talk about, so many amazing discoveries that have come from the Argo data stream. There’s more than a paper a day that comes out using Argo. And that’s probably a conservative view. But I mean, one of the most important things we need to measure is how the ocean is warming. So, as the Earth system warms, most of that extra heat is actually being trapped in the ocean. Now, it’s a good thing that that heat is taken up and sequestered by the ocean, because it makes the rate of surface temperature change slower. But as it takes up that heat, the ocean expands. So, that’s actually driving sea-level rise. The ocean is pumping heat into the polar regions, which is causing both sea-ice and ice-sheet melt. And we know it’s starting to change regional weather patterns as well. With all that in mind, tracking where that heat is, and how the ocean circulation is moving it around, is really, really important for understanding both what’s happening now to our climate system and what’s going to happen to it in the future.
What has Argo’s data told us about how ocean temperatures have changed over the past 20 years? Are there certain oceans getting warmer? Are there certain parts of oceans getting warmer and others getting colder?
Wijffels: The signal in the deep ocean is very small. It’s a fraction, a hundredth of a degree, really. But we have very high precision instruments on Argo. The warming signal came out very quickly in the Argo data sets when averaged across the global ocean. If you measure in a specific place, say a time series at a site, there’s a lot of noise there because the ocean circulation is turbulent, and it can move heat around from place to place. So, any given year, the ocean can be warm, and then it can be cool…that’s just a kind of a lateral shifting of the signal.
But when you measure globally and monitor the global average over time, the warming signal becomes very, very apparent. And so, as we’ve seen from past data—and Argo reinforces this—the oceans are warming faster at the surface than at their depths. And that’s because the ocean takes a while to draw the heat down. We see the Southern Hemisphere warming faster than the Northern Hemisphere. And there’s a lot of work that’s going on around that. The discrepancy is partly due to things like aerosol pollution in the Northern Hemisphere’s atmosphere, which actually has a cooling effect on our climate.
But some of it has to do with how the winds are changing. Which brings me to another really amazing thing about Argo: We’ve had a lot of discussion in our community about hiatuses or slowdowns of global warming. And that’s because of the surface temperature, which is the metric that a lot of people use. The oceans have a big effect on the global average surface temperature estimates because the oceans comprise the majority of Earth’s surface area. And we see that the surface temperature can peak when there’s a big El Niño–Southern Oscillation event. That’s because, in the Pacific, a whole bunch of heat from the subsurface [about 200 or 300 meters below the surface] suddenly becomes exposed to the surface. [Editor’s note: The El Niño–Southern Oscillation is a recurring, large-scale variation in sea-surface temperatures and wind patterns over the tropical eastern Pacific Ocean.]
What we see is this kind of chaotic natural phenomena, such as the El Niño–Southern Oscillation. It just transfers heat vertically in the ocean. And if you measure vertically through the El Niño or the tropical Pacific, that all cancels out. And so, the actual change in the amount of heat in the ocean doesn’t see those hiatuses that appear in surface measurements. It’s just a staircase. And we can see the clear impact of the greenhouse-gas effect in the ocean. When we measure from the surface all the way down, and we measure globally, it’s very clear.
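The cancellation Wijffels describes is easy to see in miniature: an El Niño-style event shuffles heat vertically, so a surface-only record jumps while the depth-integrated heat content does not. The numbers below are invented purely for illustration.

```python
# Three depth levels of a toy water column (temperature as a heat proxy).
before = [20.0, 10.0, 5.0]  # surface, mid-depth, deep
after = [25.0, 5.0, 5.0]    # "El Nino": subsurface heat exposed at the top

surface_change = after[0] - before[0]      # what a surface-only metric sees
column_change = sum(after) - sum(before)   # what a full Argo profile sees

print(surface_change, column_change)  # 5.0 at the surface, 0.0 for the column
```

The surface metric registers a spike even though the column’s total heat is unchanged, which is why the vertically integrated Argo record shows a steady staircase rather than hiatuses.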
Argo was obviously designed and established for research into climate change, but so many large scientific instruments turn out to be useful for scientific questions other than the ones they were designed for. Is that the case with Argo?
Wijffels: Absolutely. Climate change is just one of the questions Argo was designed to address. It’s really being used now to study nearly all aspects of the ocean, from ocean mixing to just mapping out what the deep circulation, the currents in the deep ocean, look like. We now have very detailed maps of the surface of the ocean from the satellites we talked about, but understanding what the currents are in the deep ocean is actually very, very difficult. This is particularly true of the slow currents, not the turbulence, which is everywhere in the ocean like it is in the atmosphere. But now, we can do that using Argo because Argo gives us a map of the sort of pressure field. And from the pressure field, we can infer the currents. We have discovered through Argo new current systems that we knew nothing about. People are using this knowledge to study the ocean eddy field and how it moves heat around the ocean.
People have also made lots of discoveries about salinity; how salinity affects ocean currents and how it is reflecting what’s happening in our atmosphere. There’s just been a revolution in our ability to make discoveries and understand how the ocean works.
During a typical 10-day cycle, an Argo float spends most of its time drifting at a depth of 1,000 meters, then dives to 2,000 meters and takes readings while ascending to the surface, where it transmits its data via a satellite network.
Argo Program
As you pointed out earlier, the signal from the deep ocean is very subtle, and it’s a very small signal. So, naturally, that would prompt an engineer to ask, “How accurate are these measurements, and how do you know that they’re that accurate?”
Wijffels: So, at the inception of the program, we put a lot of resources into a really good data-management and quality-assurance system. That’s the Argo Data Management system, which broke new ground for oceanography. And so, part of that innovation is that we have, in every nation that deploys floats, expert teams that look at the data. When the data is about a year old, they look at that data, and they assess it in the context of nearby ship data, which is usually the gold standard in terms of accuracy. And so, when a float is deployed, we know the sensors are routinely calibrated. And so, if we compare a freshly calibrated float’s profile with an old one that might be six or seven years old, we can make important comparisons. What’s more, some of the satellites that Argo is designed to work with also give us the ability to check whether the float sensors are working properly.
And through the history of Argo, we have had issues. But we’ve tackled them head on. We have had issues that originated in the factories producing the sensors. Sometimes, we’ve halted deployments for years while we waited for a particular problem to be fixed. Furthermore, we try and be as vigilant as we can and use whatever information we have around every float record to ensure that it makes sense. We want to make sure that there’s not a big bias, and that our measurements are accurate.
You mentioned earlier there’s a new generation of floats capable of diving to an astounding 6,000 meters. I imagine that as new technology becomes available, your scientists and engineers are looking at this and incorporating it. Tell us how advances in technology are improving your program.
Wijffels: [There are] three big, new things that we want to do with Argo and that we’ve proven we can do now through regional pilots. The first one, as you mentioned, is to go deep. And so that meant reengineering the float itself so that it could withstand and operate under really high pressure. And there are two strategies to that. One is to stay with an aluminum hull but make it thicker. Floats with that design can go to about 4,000 meters. The other strategy was to move to a glass housing. So the float goes from a metal cylinder to a glass sphere. And glass spheres have been used in ocean science for a long time because they’re extremely pressure resistant. So, glass floats can go to those really deep depths, right to the seafloor of most of the global ocean.
The game changer is a set of sensors that are sensitive and accurate enough to measure the tiny climate-change signals that we’re looking for in the deep ocean. And so that requires an extra level of care in building those sensors and a higher level of calibration. And so we’re working with sensor manufacturers to develop and prove calibration methods with tighter tolerances and ways of building these sensors with greater reliability. And as we prove that out, we go to sea on research vessels, we take the same sensors that were in our shipboard systems, and compare them with the ones that we’re deploying on the profiling floats. So, we have to go through a whole development cycle to prove that these work before we certify them for global implementation.
Let’s talk about batteries. Are they ultimately the limit on lifetime? I mean, I imagine you can’t recharge a battery that’s 2,000 meters down.
Wijffels: You’re absolutely right. Batteries are one of the key limitations for floats right now, both for their lifetime and for what they’re capable of. If there were a leap in battery technology, we could do a lot more with the floats. We could maybe collect data profiles faster. We could add many more sensors.
So, battery power and energy management is a big, important aspect of what we do. And in fact, the way that we task the floats has been a problem, particularly with lithium batteries, because the floats spend about 90 percent of their time sitting in the cold and not doing very much. During their drift phase, we sometimes turn them on to take some measurements. But still, they don’t do very much. They don’t use their buoyancy engines, the mechanisms that change the volume of the float.
And what we’ve learned is that these batteries can passivate. And so, we might think we’ve loaded a certain amount of energy onto the float, but we never achieve the rated power level because of this passivation problem. But we’ve found different kinds of batteries that sidestep that passivation problem. So, yes, batteries have been one thing that we’ve had to figure out so that energy is not a limiting factor in float operation.
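Passivation matters because it cuts the energy a pack actually delivers. The following back-of-the-envelope sketch uses assumed numbers for pack size, per-profile energy, and passivation derating; none of these are Argo specifications.

```python
# Back-of-the-envelope sketch of why battery chemistry matters for float
# lifetime. All figures are illustrative assumptions, not Argo specifications.

PACK_WH = 2000.0          # assumed rated battery energy, watt-hours
DERATE_PASSIVATED = 0.6   # assumed fraction delivered after lithium passivation
DERATE_HEALTHY = 0.95     # assumed fraction for a chemistry that avoids it

def profiles_supported(energy_wh, wh_per_cycle=5.0):
    """Number of dive/profile cycles a pack could power (assumed 5 Wh per cycle)."""
    return int(energy_wh / wh_per_cycle)

print(profiles_supported(PACK_WH * DERATE_PASSIVATED))  # passivated pack
print(profiles_supported(PACK_WH * DERATE_HEALTHY))     # passivation-resistant pack
```

Even with these rough numbers, the gap in usable profiles shows why the program switched chemistries rather than simply loading bigger packs.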
Match ID: 203 Score: 1.43 source: spectrum.ieee.org age: 8 days qualifiers: 1.43 development
Abstract: The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While such devices enable us to train large-scale neural networks in datacenters and deploy them on edge devices, their designers’ focus so far is on average-case performance. In this work, we introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical. We show how adversaries can exploit carefully-crafted sponge examples, which are inputs designed to maximise energy consumption and latency, to drive machine learning (ML) systems towards their worst-case performance. Sponge examples are, to our knowledge, the first denial-of-service attack against the ML components of such systems. We mount two variants of our sponge attack on a wide range of state-of-the-art neural network models, and find that language models are surprisingly vulnerable. Sponge examples frequently increase both latency and energy consumption of these models by a factor of 30×. Extensive experiments show that our new attack is effective across different hardware platforms (CPU, GPU and an ASIC simulator) on a wide range of different language tasks. On vision tasks, we show that sponge examples can be produced and a latency degradation observed, but the effect is less pronounced. To demonstrate the effectiveness of sponge examples in the real world, we mount an attack against Microsoft Azure’s translator and show an increase of response time from 1ms to 6s (6000×). We conclude by proposing a defense strategy: shifting the analysis of energy consumption in hardware from an average-case to a worst-case perspective...
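Stripped of real models, the core idea of a sponge example is a search over inputs that maximizes a cost proxy. The toy sketch below uses an entirely made-up "tokenizer" as a stand-in for a neural network’s latency or energy; the paper’s actual attack uses genetic search driven by measurements on real hardware.

```python
import random

# Toy sketch of the sponge-example idea: random search for an input that
# maximizes a model's processing cost. The "model" here is a stand-in whose
# cost grows with how many sub-tokens its invented tokenizer emits.

def toy_tokenize(text):
    # Pretend tokenizer: non-ASCII characters split into several byte-level tokens.
    return sum(3 if ord(c) > 127 else 1 for c in text)

def toy_model_cost(text):
    return toy_tokenize(text)  # proxy for latency/energy

random.seed(0)
alphabet = [chr(i) for i in range(32, 500)]
best = "".join(random.choice(alphabet) for _ in range(20))
for _ in range(200):  # hill-climbing mutation search
    cand = list(best)
    cand[random.randrange(len(cand))] = random.choice(alphabet)
    cand = "".join(cand)
    if toy_model_cost(cand) > toy_model_cost(best):
        best = cand

print(toy_model_cost(best))  # sponge-like input costs more than plain ASCII text
```

The real attack is the same loop in spirit, except the fitness function is measured energy or wall-clock latency of the target model, which is what makes language models with expansive tokenizations so vulnerable.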
Match ID: 205 Score: 1.43 source: www.schneier.com age: 9 days qualifiers: 1.43 microsoft
This is a new vulnerability against Apple’s M1 chip. Researchers say that it is unpatchable.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory, however, have created a novel hardware attack, which combines memory corruption and speculative execution attacks to sidestep the security feature. The attack shows that pointer authentication can be defeated without leaving a trace, and as it utilizes a hardware mechanism, no software patch can fix it.
The attack, appropriately called “Pacman,” works by “guessing” a pointer authentication code (PAC), a cryptographic signature that confirms that an app hasn’t been maliciously altered. This is done using speculative execution—a technique used by modern computer processors to speed up performance by speculatively guessing various lines of computation—to leak PAC verification results, while a hardware side-channel reveals whether or not the guess was correct...
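Rough arithmetic shows why guessing becomes feasible once speculative execution hides failed attempts. The PAC width and per-probe cost below are illustrative assumptions, since the actual PAC width depends on the kernel’s virtual-address configuration.

```python
# Assumption-laden arithmetic sketch: once failed guesses no longer crash the
# process, an attacker can simply enumerate the PAC space.

pac_bits = 16                # assumed PAC width in bits (configuration-dependent)
guess_space = 2 ** pac_bits  # possible authentication codes
expected_guesses = guess_space / 2

per_guess_us = 100           # assumed cost of one speculative probe, microseconds
print(f"{guess_space} codes, ~{expected_guesses:.0f} expected guesses, "
      f"~{expected_guesses * per_guess_us / 1e6:.1f} s at {per_guess_us} µs/guess")
```

With a code space this small, even a slow side channel makes the search practical, which is why the researchers argue no software patch can restore the guarantee.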
Match ID: 206 Score: 1.43 source: www.schneier.com age: 10 days qualifiers: 1.43 apple
IEEE members across the globe came together to celebrate the first-ever IEEE Education Week from 4 to 8 April. The weeklong celebration highlighted educational opportunities provided by IEEE and its many organizational units. More than 60 IEEE operating units, regions, sections, and technical societies offered live events, virtual resources, special offers, and a daily online quiz that awarded a digital badge for participants who answered correctly.
“Education Week was a chance to show the collective impact IEEE has on lifelong learning and education at every level,” says Jamie Moesch, managing director, Educational Activities. “From preuniversity STEM programs and university offerings to continuing professional education courses and tutorials, there are so many ways to engage with education from IEEE. This week was about bringing all those resources together in one place and making sure our members know about all of the amazing educational opportunities available to them.”
The celebration highlighted resources for:
Engineers and professionals working in technical fields.
University students and faculty members.
Anyone looking for preuniversity STEM education resources and experiences to encourage the next generation of engineers and technologists.
A webinar featuring IEEE President K.J. Ray Liu that illustrated how lifelong learning opportunities are among the reasons IEEE is the professional home for hundreds of thousands of members across the globe.
A series of live virtual events offered by the IEEE Education Society that covered topics of importance to academic faculty, such as online learning solutions, insight into the requirements of publishing educational research work in journals, and a step-by-step road map for instructors who want to move their traditional curricula to the online learning management environment required by their institution.
The IEEE Romania Section offered a webinar that focused on the sustainability of engineering education and the role IEEE plays in supporting student training. It included a discussion on online and offline education, how to train future engineers, and ways to attract young people to study engineering.
The IEEE Foundation hosted an online panel that addressed where and how the philanthropic organization supports educational programs across the globe.
“For both young technical professionals and those who are more established in their fields, taking the time to learn new skills in this age of hybrid and remote working can help their careers flourish,” says Stephen Phillips, vice president, IEEE Educational Activities.
The IEEE Antennas, Propagation, Microwave Theory, and Techniques student branch chapter at the Indian Institute of Technology, in Kharagpur, celebrated IEEE Education Week at Hijli College, in West Bengal, India. On 9 April, they led a hands-on session on how to use basic electronic components like resistors, switches, buzzers, wires, breadboards, and DC battery sources. Pallab Kumar Gogoi
“IEEE Education Week highlighted all of the preuniversity STEM, university, and continuing professional education resources for students, engineers, and technical professionals,” says Babak Beheshti, chair of the IEEE Educational Activities continuing education committee. “As the private sector ramps up hiring, many are looking for candidates who have skills in emerging technologies. IEEE’s educational offerings directly address this increasing need.”
Save the date for next year’s IEEE Education Week, to be held from 2 to 8 April. Follow updates on social media via #EducationAtIEEE and sign up for email updates at educationweek.ieee.org.
The inaugural event also boasts some impressive stats:
• 225 events.
• 102 resources provided.
• 90 volunteer ambassadors from 23 countries.
• Participation by 65 operating units, regions, sections, and technical society partners.
• 434 quiz submissions.
• 80 digital badges issued.
• Visitors from 99 countries.
• US $5,975 donated to the IEEE Foundation to support educational programs.
Match ID: 207 Score: 1.43 source: spectrum.ieee.org age: 11 days qualifiers: 1.43 development
Hackers Can Steal Your Tesla by Creating Their Own Personal Keys Thu, 09 Jun 2022 20:20:00 +0000 A researcher found that a recent update lets anyone enroll their own key during the 130-second interval after the car is unlocked with an NFC card. Match ID: 208 Score: 1.43 source: www.wired.com age: 16 days qualifiers: 1.43 tesla
The fourth European Service Module structure to power astronauts on NASA's Orion spacecraft to the Moon is now complete. The structure is seen here at a Thales Alenia Space site in Turin, Italy.
The module is now on its way to Airbus’ clean rooms in Bremen, Germany where engineers will complete the integration and carry out final tests.
As the powerhouse for the Orion spacecraft, the European Service Module provides propulsion and the consumables astronauts need to stay alive.
Much like the load-bearing frame of a car, this structure forms the basis for all further assembly of the spacecraft, including 11 km of wiring, 33 engines, four tanks to hold over 8000 litres of fuel, water and air for astronauts and the seven-metre ‘x-wing’ solar arrays that provide enough electricity to power two households.
The fourth European Service Module is part of the Artemis IV mission that will begin delivering elements of the Gateway, the next human outpost located in lunar orbit.
This includes the International Habitat, or I-Hab, module, built by Thales Alenia on behalf of ESA. It is a pressurised module that will provide living quarters for astronauts visiting the Gateway and include multiple docking ports for berthing vehicles as well as other modules.
What’s up with the first three European Service Modules?
The first European Service Module is connected with the Orion spacecraft and awaiting launch for Artemis I later this year. The second European Service Module has been formally transferred to NASA and is completing integration at the Operations and Checkout building at Kennedy Space Center. Meanwhile, the third European Service Module continues to be built up in Bremen.
With four European Service Modules already delivered and in production, ESA is ensuring NASA’s Artemis programme continues to develop a sustainable presence on and around the Moon in international partnership.
The countdown to the Moon starts in Europe with 16 companies in ten countries supplying the components that make up humankind’s next generation spacecraft for exploration. Follow the latest on Orion developments on the blog.
Match ID: 209 Score: 1.43 source: www.esa.int age: 23 days qualifiers: 1.43 development
Win the race to design and deploy satellite technologies and systems. Learn how new digital engineering techniques can accelerate development and reduce your risk and costs. Download this free whitepaper now!
Our white paper covers:
Software-based digital twin models to reduce costly satellite system re-design
Ways to improve models throughout the product lifecycle, increase confidence, and reduce risks
Match ID: 210 Score: 1.43 source: connectlp.keysight.com age: 53 days qualifiers: 1.43 development
Take a look inside the box and join ESA astronaut Matthias Maurer from a very special perspective as he supports the @DLR Mason/Concrete Hardening experiment.
The Concrete Hardening experiment investigates the behaviour of various concrete mixtures containing cement and sand or simulated ‘Moon dust’ combined with water and various admixtures. On Earth, higher density components tend to move downward but in weightlessness they are likely to be more evenly distributed.
Researchers will analyse the concrete mixed by Matthias in space for strength, bubble and pore distribution as well as crystal structures, comparing this to ground samples. Their findings will facilitate the development of new, improved concrete mixes that could be used to construct habitats on the Moon or Mars and build more sustainable housing on Earth.
Match ID: 211 Score: 1.43 source: www.esa.int age: 53 days qualifiers: 1.43 development
NASA Awards Contracts for Aerospace Testing and Facilities Operations Mon, 11 Apr 2022 17:44 EDT NASA has awarded a contract to Jacobs Technology Inc. of Tullahoma, Tennessee, to provide the agency’s Ames Research Center in Silicon Valley, California with support services for ground-based aerospace test facilities at the center. Match ID: 212 Score: 1.43 source: www.nasa.gov age: 75 days qualifiers: 1.43 california
This presents some unique challenges for Gateway. On the ISS, astronauts spend a substantial amount of time on station upkeep, but Gateway will have to keep itself functional for extended periods without any direct human assistance.
“The things that the crew does on the International Space Station will need to be handled by Gateway on its own,” explains Julia Badger, Gateway autonomy system manager at NASA’s Johnson Space Center. “There’s also a big difference in the operational paradigm. Right now, ISS has a mission control that’s full time. With Gateway, we’re eventually expecting to have just 8 hours a week of ground operations.” The hundreds of commands that the ISS receives every day to keep it running will still be necessary on Gateway—they’ll just have to come from Gateway itself, rather than from humans back on Earth.
“It’s a new way of thinking compared to ISS. If something breaks on Gateway, we either have to be able to live with it for a certain amount of time, or we’ve got to have the ability to remotely or autonomously fix it.” —Julia Badger, NASA JSC
To make this happen, NASA is developing a vehicle system manager, or VSM, that will act like the omnipresent computer system found on virtually every science-fiction starship. The VSM will autonomously manage all of Gateway’s functionality, taking care of any problems that come up, to the extent that they can be managed with clever software and occasional input from a distant human. “It’s a new way of thinking compared to ISS,” explains Badger. “If something breaks on Gateway, we either have to be able to live with it for a certain amount of time, or we’ve got to have the ability to remotely or autonomously fix it.”
While Gateway itself can be thought of as a robot of sorts, there’s a limited amount that can be reasonably and efficiently done through dedicated automated systems, and NASA had to find a compromise between redundancy and both complexity and mass. For example, there was some discussion about whether Gateway’s hatches should open and close on their own, and NASA ultimately decided to leave the hatches manually operated. But that doesn’t necessarily mean that Gateway won’t be able to open its hatches without human assistance; it just means that there will be a need for robotic hands rather than human ones.
“I hope eventually we have robots up there that can open the hatches,” Badger tells us. She explains that Gateway is being designed with potential intravehicular robots (IVRs) in mind, including things like adding visual markers to important locations, placing convenient charging ports around the station interior, and designing the hatches such that the force required to open them is compatible with the capabilities of robotic limbs. Parts of Gateway’s systems may be modular as well, able to be removed and replaced by robots if necessary. “What we’re trying to do,” Badger says, “is make smart choices about Gateway’s design that don’t add a lot of mass but that will make it easier for a robot to work within the station.”
Robonaut at its test station in front of a manipulation task board on the ISS.JSC/NASA
NASA already has a substantial amount of experience with IVR. Robonaut 2, a full-size humanoid robot, spent several years on the International Space Station starting in 2011, learning how to perform tasks that would otherwise have to be done by human astronauts. More recently, a trio of cubical, toaster-size, free-flying robots called Astrobees have taken up residence on the ISS, where they’ve been experimenting with autonomous sensing and navigation. A NASA project called ISAAC (Integrated System for Autonomous and Adaptive Caretaking) is currently exploring how robots like Astrobee could be used for a variety of tasks on Gateway, from monitoring station health to autonomously transferring cargo, although at least in the near term, in Badger’s opinion, “maintenance of Gateway, like using robots that can switch out broken components, is going to be more important than logistics types of tasks.”
Badger believes that a combination of a generalized mobile manipulator like Robonaut 2 and a free flyer like Astrobee make for a good team, and this combination is currently the general concept for Gateway IVR. This is not to say that the intravehicular robots that end up on Gateway will look like the robots that have been working on the ISS, but they’ll be inspired by them, and will leverage all of the experience that NASA has gained with its robots on ISS so far. It might also be useful to have a limited number of specialized robots, Badger says. “For example, if there was a reason to get behind a rack, you may want a snake-type of robot for that.”
An Astrobee robot (this one is named Bumble) on the ISS.JSC/NASA
While NASA is actively preparing for intravehicular robots on Gateway, such robots do not yet exist, and the agency may not be building these robots itself, instead relying on industry partners to deliver designs that meet NASA’s requirements. At launch, and likely for the first several years at least, Gateway will have to take care of itself without internal robotic assistants. However, one of the goals of Gateway is to operate itself completely autonomously for up to three weeks without any contact with Earth at all, mimicking the three-week solar conjunction between Earth and Mars where the sun blocks any communications between the two planets. “I think that we will get IVR on board,” Badger says. “If we really want Gateway to be able to take care of itself for 21 days, IVR is going to be a very important part of that. And having a robot is absolutely something that I think is going to be necessary as we move on to Mars.”
“Having a robot is absolutely something that I think is going to be necessary as we move on to Mars.” —Julia Badger, NASA JSC
Intravehicular robots are just half of the robotic team that will be necessary to keep Gateway running autonomously long-term. Space stations rely on complex external infrastructure for power, propulsion, thermal control, and much more. Since 2001, the ISS has been home to Canadarm2, a 17.6-meter robotic arm, which is able to move around the station to grasp and manipulate objects while under human control from either inside the station or from the ground.
The Canadian Space Agency, in partnership with space technology company MDA, is developing a new robotic-arm system for Gateway, called Canadarm3, scheduled to launch in 2027. Canadarm3 will include an 8.5-meter-long arm for grappling spacecraft and moving large objects, as well as a smaller, more dexterous robotic arm that can be used for delicate tasks. The smaller arm can even repair the larger arm if necessary. But what really sets Canadarm3 apart from its predecessors is how it’s controlled, according to Daniel Rey, Gateway chief engineer and systems manager at CSA. “One of the very novel things about Canadarm3 is its ability to operate autonomously, without any crew required,” Rey says. This capability relies on a new generation of software and hardware that gives the arm a sense of touch as well as the ability to react to its environment without direct human supervision.
“With Canadarm3, we realize that if we want to get ready for Mars, more autonomy will be required.” —Daniel Rey, CSA
Even though Gateway will be a thousand times farther away from Earth than the ISS, Rey explains that the added distance (about 400,000 kilometers) isn’t what really necessitates Canadarm3’s added autonomy. “Surprisingly, the location of Gateway in its orbit around the moon has a time delay to Earth that is not all that different from the time delay in low Earth orbit when you factor in various ground stations that signals have to pass through,” says Rey. “With Canadarm3, we realize that if we want to get ready for Mars, where that will no longer be the case, more autonomy will be required.”
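Rey’s point about the delay checks out with simple arithmetic. Here is a quick sketch using the distance figure from the text; it considers only line-of-sight light travel time and excludes the ground-station hops he mentions.

```python
# One-way light travel time: lunar-orbit distance vs. low Earth orbit.
# Distances are round figures; ground-network relay hops are ignored.

C_KM_S = 299_792.458   # speed of light, km/s
moon_km = 400_000      # approximate Earth-Gateway distance (from the text)
leo_km = 400           # typical ISS altitude

print(f"Gateway one-way delay: {moon_km / C_KM_S:.2f} s")      # ~1.3 s
print(f"LEO line-of-sight delay: {leo_km / C_KM_S * 1000:.1f} ms")
```

A second or so each way is easily masked by relay infrastructure, which is why the real driver for Canadarm3’s autonomy is Mars, where one-way delays run to many minutes.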
Canadarm3’s autonomous tasks on Gateway will include external inspection, unloading logistics vehicles, deploying science payloads, and repairing Gateway by swapping damaged components with spares. Rey tells us that there will also be a science logistics airlock, with a moving table that can be used to pass equipment in and out of Gateway. “It’ll be possible to deploy external science, or to bring external systems inside for repair, and for future internal robotic systems to cooperate with Canadarm3. I think that’ll be a really exciting thing to see.”
Even though it’s going to take a couple of extra years for Gateway’s robotic residents to arrive, the station will be operating mostly autonomously (by necessity) as soon as the Power and Propulsion Element and the Habitation and Logistics Outpost begin their journey to lunar orbit in November 2024. Several science payloads will be along for the ride, including heliophysics and space weather experiments.
Gateway itself, though, is arguably the most important experiment of all. Its autonomous systems, whether embodied in internal and external robots or not, will be undergoing continual testing, and Gateway will need to prove itself before we’re ready to trust its technology to take us into deep space. In addition to being able to operate for 21 days without communications, one of Gateway’s eventual requirements is to be able to function for up to three years without any crew visits. This is the level of autonomy and reliability that we’ll need to be prepared for our exploration of Mars, and beyond.
Match ID: 213 Score: 1.43 source: spectrum.ieee.org age: 79 days qualifiers: 1.43 development
If you’re planning a road trip through California, be sure to stop off at LA’s beachfront city Santa Monica to soak up the laid-back vibes that make it a haven for wellness-seekers
Santa Monica is an American road trip classic, not least because it marks the end of the legendary Route 66. So if you’re planning a dream Californian road trip, be sure to stop off here.
LA’s beach city is well known for its pier and the city has also served as a filming location for iconic movies such as Fast Times at Ridgemont High, Ocean’s Eleven and Forrest Gump. There is plenty of fun to be found on the beachfront, but be sure to look beyond the ferris wheel and you’ll find even more joy in the city’s booming wellness culture, buzzy restaurant scene and less obvious ways to spend some time in the sand.
Continue reading... Match ID: 214 Score: 1.43 source: www.theguardian.com age: 115 days qualifiers: 1.43 california
With everything from a Major League Baseball team to a live music scene, you’ll find the city of Anaheim is a destination in itself – and that’s before you check out the new Avengers Campus
Many California visitors know Anaheim as the home of Disneyland Resort, which is a great reason to make the journey south from Los Angeles. However, as the locals will tell you, there’s plenty to see, eat and do outside of the park gates.
Once home to musicians such as Gwen Stefani, her No Doubt bandmates and Jeff Buckley, Anaheim still has quite the music scene, with local and touring artists bringing live shows back to the city as the country emerges from the pandemic. And, if you’re looking for eclectic eats to rival LA’s lauded dining scene – with less competition for a reservation – you’ve come to the right place.
Continue reading... Match ID: 215 Score: 1.43 source: www.theguardian.com age: 115 days qualifiers: 1.43 california
With its proximity to Mexico and the Pacific ocean, you can find great dining, outdoor sports and culture in equal measure in this Californian city – but don’t miss these highlights …
San Diego’s year-round sunshine and surf makes it the perfect road trip stop for water babies, though there’s plenty to discover beyond the shore. With a diverse range of neighbourhoods, from the historic Old Town and lively Little Italy, to the picturesque seaside area of La Jolla, a visit to San Diego can easily feel like several holidays in one.
Bordering Mexico, the city’s cuisine is heavily influenced by its neighbour, offering some of the most authentic Mexican food in all of the United States – a cross border blend referred to locally as Cali-Baja cuisine. However, with an ever-growing dining scene, there is something to be found for every kind of foodie.
Continue reading... Match ID: 216 Score: 1.43 source: www.theguardian.com age: 115 days qualifiers: 1.43 california
To immerse yourself in the culture, cuisine and political radicalism that California is famous for, there’s no better place to start than the distinct neighbourhoods of San Francisco
San Francisco has long been a creative incubator and cultural melting pot – a catalyst for social change, technological innovation and good times. It’s also a must-visit for anyone considering a Californian road trip, or indeed any great American road trip – after all, it was repeatedly visited by Sal Paradise in Jack Kerouac’s seminal On the Road.
This ravishingly beautiful bay city was home to the beat generation of writers that Kerouac spearheaded in the 1950s, the birthplace of the counterculture and hippie movement of the late 1960s and 1970s, and fertile ground for the jazz, rock and experimental music scenes, in the bars of Haight-Ashbury. And despite the city’s countless historic charms, including the Golden Gate Bridge, crooked and colourful Victorian-era homes, and cable cars tracing the slopes of some of the city’s 48 hills, this fiercely progressive and fun-loving city continues to innovate and inspire, with a groundbreaking farm-to-table restaurant scene, vibrant street art and palpable progressive vibe.
Top: City Lights bookstore. Below: manager Elaine Katzenberger
Continue reading... Match ID: 217 Score: 1.43 source: www.theguardian.com age: 115 days qualifiers: 1.43 california
This isn’t hard to do when the project has been in the works for a long time and is progressing on schedule—the coming first flight of NASA’s Space Launch System, for example. For other stories, we must go farther out on a limb. A case in point: the description of a hardware wallet for Bitcoin that the company formerly known as Square (which recently changed its name to Block) is developing but won’t officially comment on. One thing we can predict with confidence, though, is that Spectrum readers, familiar with the vicissitudes of technical development work, will understand if some of these projects don’t, in fact, pan out. That’s still okay.
Engineering, like life, is as much about the journey as the destination.
When our solar system was very young, there were no planets—only a diffuse disk of gas and dust circling the sun. But within a few million years, that churning cloud of primordial material collapsed under its own gravity to form hundreds, or maybe thousands, of infant planets. Some of those planetesimals, as astronomers call them, grew to be hundreds of kilometers across as they swept up more dust and gas within the swirling solar nebula.
Once they had attained such a size, heat from the decay of the radioactive elements within them became trapped, raising temperatures enough to melt their insides. The denser components of that melt—iron and other metals—settled to the center, leaving lighter silicates to float up toward the surface. These lighter materials eventually cooled to form mantles of silicate rock around heavy metallic cores. In this way, vast amounts of iron and nickel alloys were trapped deep inside these planetesimals, forever hidden from direct scrutiny.
Or were they?
At this time, the solar system was still relatively crowded despite its vast size. And over the next 20 million or so years, many planetesimals crossed paths and collided. Some merged and grew into even larger protoplanets, eventually forming what became the familiar planets we know today.
In each of those protoplanet collisions, the metallic cores were battered and remixed with silicate mantle material, later separating again after being melted by the heat of accretion. Some collisions had enough energy to completely obliterate a protoplanet, leaving behind debris that contributed to the asteroid belt that now exists between the orbits of Mars and Jupiter.
But a few protoplanets may have escaped either of these fates. Astronomers hypothesize that a series of “hit and run” impacts caused these bodies to lose most of their mantles, leaving behind only a small quantity of silicate rock and a large amount of metal. These materials combined to form a rare kind of world. If this theory is correct, the largest example would be an asteroid called 16 Psyche—named after the Greek goddess of the soul, Psyche, and because it was the 16th member of the asteroid belt to be discovered (in 1852).
This artist’s rendering suggests the kind of surface the asteroid 16 Psyche might have.Peter Rubin/JPL-Caltech/Arizona State University/NASA
16 Psyche is about as wide as Massachusetts and has metal-like density. This makes it large and dense enough to account for a full 1 percent of the total mass of the asteroid belt. Metal miners of the future may one day stake claims on it.
Psyche is also the name of a NASA mission to visit that asteroid. Led by Lindy Elkins-Tanton of Arizona State University and managed by NASA’s Jet Propulsion Laboratory, the Psyche mission will test astronomers’ theories about planetary-core formation and composition while it explores a world with a landscape unlike any that space probes have visited so far.
Lindy Elkins-Tanton of Arizona State University leads the Psyche mission’s scientific team.Bill Ingalls/NASA
The Psyche mission is scheduled to launch in August 2022, with the spacecraft reaching its destination more than three years later. What will it find there? Astronomers think we might see enormous surface faults from the contraction of freezing metal, glittering cliffs of green crystalline mantle minerals, frozen flows of sulfur lava, and vast fields of metal shards scattered over the surface from millennia of high-speed impacts. There will no doubt be plenty of surprises, too.
The long journey this space probe must make to reach its destination will be especially demanding. 16 Psyche resides in the outer part of the main asteroid belt, well beyond the orbit of Mars. The probe will begin circling the asteroid in January of 2026 and will study it for nearly two years.
Counterintuitively, arranging for a probe to orbit a small body like an asteroid is harder than orbiting a planet. Big planets have deep gravity wells, which allow spacecraft to enter orbit with a single low-altitude rocket burn. Small bodies have little gravity and provide essentially no gravitational leverage, so the spacecraft’s propulsion system must do all the work.
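The weak-gravity problem can be made concrete by comparing circular orbital speeds. This sketch uses rough published estimates for 16 Psyche’s mass and an assumed orbital radius, purely for scale.

```python
import math

# Circular orbital speed v = sqrt(G*M/r) around 16 Psyche vs. Earth.
# Psyche's mass and the orbital radii are rough figures used only for scale.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def circular_orbit_speed(mass_kg, radius_m):
    return math.sqrt(G * mass_kg / radius_m)

v_psyche = circular_orbit_speed(2.3e19, 150e3)    # assumed ~150 km orbital radius
v_earth = circular_orbit_speed(5.97e24, 6.771e6)  # ISS-altitude orbit

print(f"around Psyche: {v_psyche:.0f} m/s, around Earth: {v_earth:.0f} m/s")
```

At roughly walking-to-cycling speeds, the asteroid offers almost no gravitational leverage to capture an arriving spacecraft, so the probe’s own thrusters must shed nearly all of the approach velocity.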
Astronomers think we might see enormous surface faults, glittering cliffs of green crystalline mantle minerals, frozen flows of sulfur lava, and vast fields of metal shards.
Not long ago, NASA managed this maneuver successfully with its Dawn mission, which sent a probe to orbit the asteroids Vesta and Ceres. The Dawn spacecraft used solar-electric propulsion. Its three highly efficient engines converted electricity from solar arrays into thrust by ionizing a propellant gas and accelerating it though a high-voltage electric field.
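Solar-electric propulsion trades thrust for efficiency: thrust is simply propellant mass flow times exhaust velocity. The sketch below uses generic, assumed numbers, not the Psyche or Dawn spacecraft’s actual engine specs.

```python
# Thrust F = mdot * v_e. Electric thrusters move a trickle of propellant at
# very high exhaust velocity; chemical engines move a torrent at low velocity.
# All numbers are generic illustrations, not mission specifications.

def thrust_newtons(mdot_kg_s, exhaust_velocity_m_s):
    return mdot_kg_s * exhaust_velocity_m_s

electric = thrust_newtons(1.5e-5, 18_000)  # assumed ~18 km/s exhaust velocity
chemical = thrust_newtons(100.0, 3_000)    # assumed ~3 km/s exhaust velocity

print(f"electric: {electric:.2f} N, chemical: {chemical:.0f} N")
```

The electric engine’s thrust is minuscule, but because delta-v scales with exhaust velocity in the rocket equation, the same propellant mass buys roughly six times the delta-v here, which is what makes a years-long spiral out to the asteroid belt affordable.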
When our team at the Jet Propulsion Laboratory was designing the Psyche probe, we planned to do something similar. The main problem was figuring out how to do it without exceeding the mission’s budget. JPL engineers solved this problem by using what was for the most part existing technology, manufactured by Maxar, a company based in Westminster, Colo. It is one of the world’s largest providers of commercial geosynchronous communication satellites, produced at a division located in Palo Alto, Calif.
The Psyche spacecraft is built on the “chassis” used for those satellites, which includes high-power solar arrays, electric-propulsion thrusters, and associated power and thermal control elements. In many ways, the Psyche spacecraft resembles a standard Maxar communications satellite. But it also hosts JPL’s avionics, flight software, and the many fault-protection systems required for autonomous deep-space operation.
Technicians at NASA’s Jet Propulsion Laboratory work on the Psyche spacecraft.Maxar
Making this concept work was difficult from the get-go. First, NASA management was rightfully wary of such cost-cutting measures, because the “
faster, better, cheaper” model of missions mounted in the 1990s produced some spectacular failures. Second, using Earth-orbiting systems on the Dawn mission resulted in large cost overruns during the development phase. Finally, many people involved believed (erroneously) that the environment of deep space is very special and that the Psyche spacecraft would thus have to be very different from a communications satellite intended only to orbit Earth.
We and our many NASA colleagues addressed each of these issues by teaming with engineers at Maxar. We kept costs under control by using hardware from the company’s standard product line and by minimizing changes to it. We could do that because the thermal environment in geosynchronous orbit isn’t in fact so different from what the Psyche probe will encounter.
Soon after launch, the Psyche spacecraft will experience the same relatively high solar flux that communications satellites are built for. It will also have to handle the cold of deep space, of course, but Maxar’s satellites must endure similar conditions when they fly through Earth’s shadow, which they do once a day during certain times of the year.
Because they serve as high-power telecommunications relays, Maxar’s satellites must dissipate the many kilowatts of waste heat generated by their microwave power amplifiers. They do this by radiating that heat into space. Radiating lots of heat away would be a major problem for our space probe, though, because in the vicinity of 16 Psyche the flux of light and heat from the sun is one-tenth of that at Earth. So if nothing were done to prevent it, a spacecraft designed for orbiting Earth would soon become too cold to function this far out in the asteroid belt.
Maxar addressed this challenge by installing multilayer thermal blanketing all over the spacecraft, which will help to retain heat. The company also added custom louvers on top of the thermal radiators. These resemble Venetian blinds, closing automatically to trap heat inside when the spacecraft gets too cold. But plenty of other engineering challenges remained, especially with respect to propulsion.
To reduce the mass of propellant needed to reach the asteroid, the Psyche spacecraft will use solar-electric thrusters that accelerate ions to very high velocities—more than six times as high as what can be attained with chemical rockets. In particular, it will use a type of ion thruster known as a Hall thruster.
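The propellant savings from high exhaust velocity follow from the Tsiolkovsky rocket equation. As a sketch (the delta-v and both exhaust velocities below are assumed round numbers, not Psyche's actual figures), the required propellant fraction compares like this:

```python
import math

def propellant_fraction(delta_v: float, exhaust_velocity: float) -> float:
    """Fraction of initial mass that must be propellant (Tsiolkovsky):
    m_prop / m_initial = 1 - exp(-delta_v / v_e)."""
    return 1.0 - math.exp(-delta_v / exhaust_velocity)

delta_v = 5000.0   # m/s, an assumed mission delta-v for illustration
v_chem = 3000.0    # m/s, typical chemical-rocket exhaust velocity
v_hall = 18000.0   # m/s, roughly six times higher, as for a Hall thruster

print(f"Chemical: {propellant_fraction(delta_v, v_chem):.0%} of initial mass is propellant")
print(f"Hall:     {propellant_fraction(delta_v, v_hall):.0%} of initial mass is propellant")
```

Because the exhaust velocity sits in the exponent, a sixfold increase cuts the propellant fraction from roughly 80 percent to under 25 percent for the same assumed delta-v.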
A Hall thruster, four of which will propel the Psyche spacecraft, produces an eerie blue glow during testing [left]. The unit consists of a ring-shaped anode, which has a diameter similar to that of a dinner plate, and a narrow, cylindrical cathode mounted to one side [right].JPL-Caltech/NASA
Soviet engineers pioneered the use of Hall thrusters in space during the 1970s. The Psyche spacecraft carries four Russian-made Hall thrusters for the simple reason that Maxar uses that number to maintain the orbits of its communications satellites.
Hall thrusters employ a clever strategy to accelerate positively charged ions [see sidebar, “How a Hall Thruster Works”]. This is different from what is done in the ion thrusters on the Dawn spacecraft, which used high-voltage grids. Hall thrusters, in contrast, use a combination of electric and magnetic fields to accelerate the ions. While Hall thrusters have a long history of use on satellites, this is the first time they will go on an interplanetary mission.
How a Hall Thruster Works
A Hall thruster uses an electron discharge to create a plasma—a quasi-neutral collection of positive ions and electrons—not unlike what goes on in a fluorescent lamp.
The thruster includes a hollow cathode (negative electrode), placed outside the thruster body, and an anode (positive electrode) positioned inside a ring-shaped discharge chamber. If these electrodes were all there was, the power applied to the thruster would just go into making a current of electrons flowing from cathode to anode, emitting some blue glow along the way. Instead, a Hall thruster applies a radially directed magnetic field across its discharge channel.
The electrons emitted by the cathode are very light and fast. So this magnetic field impedes the flow of electrons to the anode, forcing them instead to go in circular orbits around the center line of the thruster. The positive xenon ions that are generated inside the discharge chamber accelerate toward the cloud of circling electrons, but these ions are too massive to be affected by the weak magnetic field. So they shoot straight out in a beam, sweeping up electrons along the way. The ejection of that material at high speed creates thrust. It’s not much thrust—equal to about the weight of a few quarters—but applied steadily for months on end, it’s enough to get the spacecraft zooming.
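The "weight of a few quarters" figure can be sanity-checked from the ideal thrust relation F = mass flow rate times exhaust velocity. The flow rate and exhaust velocity below are assumptions roughly in line with published figures for a 4.5-kilowatt-class Hall thruster, not measured Psyche values:

```python
QUARTER_WEIGHT_N = 0.00567 * 9.81  # a U.S. quarter has a mass of about 5.67 g

def hall_thrust(mdot_kg_s: float, exhaust_velocity_m_s: float) -> float:
    """Ideal thrust of an ion beam: F = mass flow rate x exhaust velocity."""
    return mdot_kg_s * exhaust_velocity_m_s

# Assumed figures for a 4.5 kW-class Hall thruster:
# roughly 16 mg/s of xenon ejected at ~17.6 km/s.
thrust = hall_thrust(16e-6, 17600.0)
quarters = thrust / QUARTER_WEIGHT_N

print(f"Thrust: {thrust * 1000:.0f} mN, about the weight of {quarters:.0f} quarters")
```

A few hundred millinewtons is indeed about the weight of a handful of quarters, which is why months of continuous thrusting are needed to build up useful speed.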
We kept costs under control by using hardware from Maxar's standard product line and by minimizing changes to it.
You might think that thrusting around Earth isn’t any different from doing so in deep space. There are, in fact, some big differences. Remember, the power to run the thrusters comes from solar panels, and that power must be used as it is generated—there is no great big battery to store it. So the power available to run the thrusters will diminish markedly as the spacecraft moves away from the sun.
That’s an issue because electric thrusters are usually designed to run best at their maximum power level. It turns out to be pretty easy to throttle them a little, maybe to about half their maximum output. For example, the Hall thrusters Maxar uses on its communications satellites can run at as much as 4.5 kilowatts when the satellite’s orbit needs to be raised. For more routine station keeping, these thrusters run at 3 kW. We needed these thrusters to run at less than 1 kW when the spacecraft neared its destination.
The problem is that efficiency decreases when you do this kind of throttling. In that sense, a Hall thruster is like the engine in your car. But the situation is worse than in a car: The electrical discharge inside a thruster can become unstable if the power is decreased too much. The throttled thruster can even quit firing altogether—like a flameout in a jet engine.
But with some clever engineering, we modified how we run Maxar’s thruster so that it can operate stably at power levels as low as 900 W. We then tested our reengineered thruster in facilities at NASA’s Glenn Research Center and at JPL to prove to ourselves that it would indeed operate reliably for the full six-year Psyche mission.
The Psyche mission will test equipment for sending and receiving data optically. This Deep Space Optical Communications (DSOC) system must be pointed with great precision and kept isolated from vibration.JPL-Caltech/Arizona State University/NASA
The Psyche probe will venture more than three times as far from the sun as Earth ever does. Generating the 2 kW of power needed to operate the spacecraft and fire its thrusters when it reaches its destination requires an array of solar cells large enough to generate more than 20 kW near Earth. That’s a lot of power as these things go.
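The inverse-square falloff behind these numbers is easy to sketch. Taking the article's roughly 20 kW near Earth, and assuming 16 Psyche sits near 3.3 astronomical units from the sun (an assumption for illustration):

```python
def solar_array_power(power_at_1au_kw: float, distance_au: float) -> float:
    """Solar flux, and thus array output, falls off as the inverse
    square of the distance from the sun."""
    return power_at_1au_kw / distance_au ** 2

# ~20 kW near Earth; 16 Psyche assumed at ~3.3 AU for this sketch.
p = solar_array_power(20.0, 3.3)
print(f"Power available near 16 Psyche: {p:.1f} kW")
```

The result lands near the 2 kW the spacecraft needs at its destination, which is why the array must be sized so generously for the near-Earth part of the mission.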
Fortunately for NASA, the cost of solar power has dropped dramatically over the past decade. Today, the commercial satellites that beam television and Internet signals across the globe generate these power levels routinely. Their solar-power systems are effective, reliable, and relatively inexpensive. But they are designed to work while circling Earth, not at the outer edges of the asteroid belt.
When the Psyche mission was conceived in 2013, Maxar had successfully flown more than 20 spacecraft with power levels greater than 20 kW. But the company had never built an interplanetary probe. JPL, on the other hand, had years of experience operating equipment in deep space, but it had never built a power system of the size required for the Psyche mission. So JPL and Maxar combined forces.
The challenge here was more complicated than just dealing with the fact that sunlight at 16 Psyche is so dim. The solar cells on the Psyche spacecraft would also have to operate at temperatures much lower than normal. That’s a serious issue because the voltage from such cells rises as they get colder.
When orbiting Earth, Maxar’s solar arrays generate 100 volts. If these same arrays were used near 16 Psyche, they would produce problematically high voltages. While we could have added electronics to reduce the voltage coming out of the array, the new circuitry would be costly to design, build, and test for space. Worse, it would have reduced the efficiency of power generation when the spacecraft is far from the sun, where producing adequate amounts of power will be tough in any case.
Fortunately, Maxar already had a solution. When one of their communications satellites passes into Earth’s shadow, it’s powered by a bank of lithium-ion batteries about the size of what’s found in electric cars. That’s big enough to keep the satellite running while it is in darkness behind Earth, which is never for much longer than an hour. But the voltage from such batteries varies over time—perhaps from as low as 40 V on some satellites when the battery is deeply discharged all the way up to 100 V. To handle that variability, Maxar’s satellites include “discharge converters,” which boost voltage to provide power at a constant 100 V. These converters were flight proven and highly efficient—ideal to repurpose for Psyche.
The key was to rewire the solar array, lowering the voltage it produced in the vicinity of Earth to about 60 V. As the spacecraft moves away from the sun, the voltage will gradually rise as the arrays get colder until it reaches about 100 V at 16 Psyche. Maxar’s discharge converters, normally attached to batteries, are connected to the solar array instead and used to provide the spacecraft with power at a constant 100 V over the entire mission.
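A toy linear model shows how rewiring the array for about 60 V near Earth lets the cold-induced voltage rise do the rest. The temperature range and coefficient here are illustrative assumptions, not the actual cell characteristics:

```python
def array_voltage(v_near_earth: float, temp_c: float,
                  temp_near_earth_c: float = 60.0,
                  coeff_v_per_c: float = 0.25) -> float:
    """Toy linear model: array voltage rises as the cells get colder.
    All parameters are illustrative assumptions, not Psyche's real values."""
    return v_near_earth + coeff_v_per_c * (temp_near_earth_c - temp_c)

v_hot = array_voltage(60.0, 60.0)     # warm cells near Earth
v_cold = array_voltage(60.0, -100.0)  # cold cells near 16 Psyche

print(f"{v_hot:.0f} V near Earth -> {v_cold:.0f} V at the asteroid")
```

Under these assumed numbers the array drifts from 60 V up to about 100 V over the cruise, and the repurposed discharge converters smooth that varying input into a constant 100 V bus.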
This approach incurs some energy losses, but those are greatest when the spacecraft is close to Earth and power is abundantly available. The system will operate at its highest efficiency when the spacecraft nears 16 Psyche, where generating power will be a lot harder. It uses flight-proven hardware and is far more economical than sophisticated systems designed to eke out peak power from a solar array throughout a deep-space mission.
One day the technology being tested may enable you to watch astronauts tromping around the Red Planet in high-definition video.
In addition to the set of scientific instruments that will be used to study the asteroid, the Psyche spacecraft will also be carrying what NASA calls a “technology demonstration” payload. Like so many things at NASA, it goes by an acronym: DSOC, which stands for Deep Space Optical Communications.
DSOC is a laser-based communications system intended to outdo current radio technology by as much as a hundredfold. DSOC will demonstrate its capability by transmitting data at up to 2 megabits per second from beyond the orbit of Mars. One day similar technology may enable you to watch astronauts tromping around the Red Planet in high-definition video.
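What a 2-megabit-per-second optical link buys is easiest to see against a slower radio link. The image size and the 20 kb/s radio rate below are assumptions chosen for illustration, not mission specifications:

```python
def downlink_time_s(data_bits: float, rate_bps: float) -> float:
    """Time to transmit a payload at a given link rate."""
    return data_bits / rate_bps

image_bits = 8 * 5e6  # a 5-megabyte compressed image (assumed size)

t_optical = downlink_time_s(image_bits, 2e6)  # DSOC demo rate, 2 Mb/s
t_radio = downlink_time_s(image_bits, 20e3)   # assumed deep-space radio rate, 20 kb/s

print(f"Optical: {t_optical:.0f} s; radio: {t_radio / 60:.0f} min")
```

Under these assumptions, a single image drops from over half an hour on the radio link to about 20 seconds on the optical one, a factor of 100 that matches the hoped-for improvement.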
The DSOC instrument has a “ground segment” and a “flight segment,” each of which includes both a laser transmitter and a receiver. The transmitter for the ground segment, a 7-kW laser, will be installed at JPL’s
Optical Communications Telescope Laboratory, located about 60 kilometers northeast of Los Angeles. A sensitive receiver, one capable of counting individual photons, will be attached to the 5.1-meter-wide Hale Telescope at Caltech’s Palomar Observatory, located a similar distance northeast of San Diego.
The Psyche spacecraft’s high-gain radio antenna, shown here being tested at Maxar’s facilities in Palo Alto, Calif., will provide the data communications throughout the mission.Maxar
DSOC’s flight segment, the part on the spacecraft, contains the same type of equipment, but much scaled down: a laser with an average power of 4 watts and a 22-centimeter telescope. The flight segment sounds simple, like something you could cobble together yourself at home. In fact, it’s anything but.
For one, it needs some rather elaborate gear to point it in the right direction. The Psyche spacecraft itself is able to keep DSOC pointed toward Earth to within a couple of milliradians—about a tenth of a degree. Using built-in actuators, DSOC then searches for the laser beacon sent from the ground. After detecting it, the actuators stabilize the pointing of DSOC’s own laser back at Earth with an accuracy measured in microradians.
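Small-angle geometry shows why microradian-level stabilization matters: the lateral miss at Earth is simply the pointing angle multiplied by the distance. The 2-astronomical-unit Earth distance below is an assumed figure for the demonstration phase:

```python
AU_M = 1.496e11  # one astronomical unit, in meters

def beam_offset_km(angle_rad: float, distance_m: float) -> float:
    """Small-angle approximation: lateral offset = angle x distance."""
    return angle_rad * distance_m / 1000.0

d = 2 * AU_M  # assumed Earth distance during the DSOC demonstration

print(f"2 mrad body pointing: {beam_offset_km(2e-3, d):,.0f} km off target")
print(f"1 urad laser pointing: {beam_offset_km(1e-6, d):,.0f} km off target")
```

Under these assumptions, the spacecraft's milliradian body pointing alone would miss Earth by hundreds of thousands of kilometers, while microradian stabilization brings the beam within a few hundred kilometers, close enough for the ground telescope to sit inside it.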
The flight segment is able to point so steadily in the same direction because it’s housed in a special enclosure that provides thermal and mechanical isolation from the rest of the spacecraft. DSOC also uses a long sun shield to eliminate stray light on its laser receiver, with a deployable aperture cover to ensure that the unit remains clean.
During DSOC operations in space, the spacecraft cannot use its thrusters or gimbal its solar arrays, which would introduce problematic movements. Instead, it will keep its attitude fixed solidly in one direction and will use its star-tracking system to determine what that direction is. The constraints on what the spacecraft can do at these times are not an impediment, though, because DSOC will be used only for tests during the first year of the mission, while traveling to just past the orbit of Mars. When the spacecraft reaches 16 Psyche, it will transmit data back to Earth over a microwave radio link.
Having emerged from nearly a decade of planning, and having traveled for more than three years, the Psyche spacecraft will finally reach its target in early 2026. There will no doubt be plenty of tension in the air when controllers at JPL maneuver the spacecraft into orbit, waiting the many minutes it will take signals to be returned to find out whether all went well in this distant corner of the asteroid belt.
If all goes according to plan, for the following two years this communications-satellite-turned-space-probe will provide scientists with a close-up look at this odd metallic world, having already demonstrated an advanced optical system for high-data-rate communications. These achievements will have been a long time coming for us—but we expect that what is learned will be well worth the many years we’ve put into trying to ensure that this mission is a success.
NASA’s 2021 Included Mars Landing, First Flight, Artemis, More (www.nasa.gov, Tue, 21 Dec 2021 10:00 EST): In 2021, NASA completed its busiest year of development yet in low-Earth orbit, made history on Mars, continued to make progress on its Artemis plans for the Moon, tested new technologies for a supersonic aircraft, finalized launch preparations for the next-generation space telescope, and much more, all while safely operating during a pandemic.
NASA Awards SETI Institute Contract for Planetary Protection Support (www.nasa.gov, Fri, 10 Jul 2020 12:04 EDT): NASA has awarded the SETI Institute in Mountain View, California, a contract to support all phases of current and future planetary protection missions to ensure compliance with planetary protection standards.
In 1997, Harvard Business School professor Clayton Christensen created a sensation among venture capitalists and entrepreneurs with his book The Innovator's Dilemma. The lesson that most people remember from it is that a well-run business can’t afford to switch to a new approach—one that ultimately will replace its current business model—until it is too late.
One of the most famous examples of this conundrum involved photography. The large, very profitable companies that made film for cameras knew in the mid-1990s that digital photography would be the future, but there was never really a good time for them to make the switch. At almost any point they would have lost money. So what happened, of course, was that they were displaced by new companies making digital cameras. (Yes, Fujifilm did survive, but the transition was not pretty, and it involved an improbable series of events, machinations, and radical changes.)
A second lesson from Christensen’s book is less well remembered but is an integral part of the story. The new companies springing up might get by for years with a disastrously less capable technology. Some of them, nevertheless, survive by finding a new niche they can fill that the incumbents cannot. That is where they quietly grow their capabilities.
For example, the early digital cameras had much lower resolution than film cameras, but they were also much smaller. I used to carry one on my key chain in my pocket and take photos of the participants in every meeting I had. The resolution was way too low to record stunning vacation vistas, but it was good enough to augment my poor memory for faces.
This lesson also applies to research. A great example of an underperforming new approach was the second wave of neural networks during the 1980s and 1990s that would eventually revolutionize artificial intelligence starting around 2010.
Neural networks of various sorts had been studied as mechanisms for machine learning since the early 1950s, but they weren’t very good at learning interesting things.
In 1979, Kunihiko Fukushima first published his research on something he called shift-invariant neural networks, which enabled his self-organizing networks to learn to classify handwritten digits wherever they were in an image. Then, in the 1980s, a technique called backpropagation was rediscovered; it allowed for a form of supervised learning in which the network was told what the right answer should be. In 1989, Yann LeCun combined backpropagation with Fukushima’s ideas into something that has come to be known as convolutional neural networks (CNNs). LeCun, too, concentrated on images of handwritten digits.
In 2012, the poor cousin of computer vision triumphed, and it completely changed the field of AI.
Over the next 10 years, the U.S. National Institute of Standards and Technology (NIST) came up with a database, which was modified by LeCun, consisting of 60,000 training digits and 10,000 test digits. This standard test database, called MNIST, allowed researchers to precisely measure and compare the effectiveness of different improvements to CNNs. There was a lot of progress, but CNNs were no match for the entrenched AI methods in computer vision when applied to arbitrary images generated by early self-driving cars or industrial robots.
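Shift invariance, the property that made these networks so well suited to handwritten digits, can be shown with a one-dimensional convolution: the same kernel responds to a pattern wherever it appears, and the response is simply shifted along with the input. A minimal pure-Python sketch, not any particular network from the history above:

```python
def convolve(signal, kernel):
    """Valid-mode 1D sliding-window correlation, as used in CNN layers."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A simple edge-detecting kernel responds to a step wherever it occurs:
kernel = [-1, 1]
a = [0, 0, 1, 1, 0, 0, 0]  # rising edge at position 2
b = [0, 0, 0, 0, 1, 1, 0]  # the same edge, shifted right by 2

ra = convolve(a, kernel)
rb = convolve(b, kernel)
print(ra)  # peak response at index 1
print(rb)  # identical peak, shifted right by 2
```

Because the kernel's weights are shared across every position, the network never has to relearn a digit for each place it might appear in the image.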
But during the 2000s, more and more learning techniques and algorithmic improvements were added to CNNs, leading to what is now known as deep learning. In 2012, suddenly, and seemingly out of nowhere, deep learning outperformed the standard computer vision algorithms in a set of test images of objects, known as ImageNet. The poor cousin of computer vision triumphed, and it completely changed the field of AI.
A small number of people had labored for decades and surprised everyone. Congratulations to all of them, both well known and not so well known.
But beware. The message of Christensen’s book is that such disruptions never stop. Those standing tall today will be surprised by new methods that they have not begun to consider. There are small groups of renegades trying all sorts of new things, and some of them, too, are willing to labor quietly and against all odds for decades. One of those groups will someday surprise us all.
I love this aspect of technological and scientific disruption. It is what makes us humans great. And dangerous.
This article appears in the July 2022 print issue as “The Other Side of The Innovator’s Dilemma.”
A top military official says there’s a simmering shadow conflict playing out in space, with U.S. satellites coming under regular attack. But what does an attack on a spacecraft look like, who is committing them, and how can operators protect themselves?
General David Thompson, the vice chief of space operations at the U.S. Space Force, recently told the Washington Post that China and Russia are targeting U.S. government satellites on a daily basis. While that might conjure up images of satellites being blown out of orbit left, right, and center, the reality is more low-key.
Thompson said the bulk of the incidents they’re seeing are “reversible attacks”, which temporarily disrupt a satellite’s operations rather than causing permanent damage. This can be achieved in a variety of ways, from jamming satellite signals to carrying out cyber-attacks.
“I would call those activities concerning, because there isn't a shared understanding of where the thresholds are for retaliation.” —Laura Grego, MIT
These kinds of attacks exist in a legal and political gray area, says Laura Grego, a Stanton Nuclear Security Fellow at the MIT Laboratory for Nuclear Security and Policy. But while most states don’t currently treat them as acts of war, the lack of clarity and their growing frequency are a worry.
“They’re testing the boundaries, trying to explore how far you can go before you get a reaction,” says Grego. “I would call those activities concerning, because there isn't a shared understanding of where the thresholds are for retaliation.”
The most common kind of attack involves interfering with the radio signals coming to and from satellites, particularly those used by GPS satellites, Brian Weeden, director of program planning at the Secure World Foundation, writes in an email. This can involve beaming a signal from a ground-based transmitter at a satellite to interfere with its ability to pick up communications from its control station. Alternatively, it’s possible to direct a rogue transmission towards ground receivers to block the signal, or replace it with a fake one, something known as spoofing.
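Jamming works so well because satellite navigation signals arrive extremely weak. A toy link budget makes the point: the roughly -158.5 dBW minimum received power for the GPS L1 civil signal is a published figure, but the jammer power, distance, and ideal isotropic antennas below are illustrative assumptions:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in decibels."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Published spec: GPS L1 C/A reaches the ground at roughly -158.5 dBW.
gps_rx_dbw = -158.5

# Hypothetical 1 W (0 dBW) jammer 10 km away on the L1 frequency:
jammer_rx_dbw = 0.0 - fspl_db(10e3, 1.57542e9)
ratio_db = jammer_rx_dbw - gps_rx_dbw

print(f"Jammer-to-signal ratio: {ratio_db:.0f} dB")
```

Even under these modest assumptions the jammer arrives tens of decibels stronger than the satellite signal, which is why inexpensive transmitters can deny GPS over wide areas.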
A 2019 report from security research non-profit C4ADS found evidence that Russia regularly uses jamming and spoofing attacks against GPS satellites to protect sensitive locations from drone attacks or surveillance. But the U.S. also has similar capabilities: last year the U.S. Navy tested GPS jamming technology that interfered with signals across six states.
The U.S., Russia, and China have also all developed ground-based laser systems designed to dazzle the optical sensors of spy satellites as they pass over sensitive sites. “That’s akin to shining a really bright flashlight in someone’s eyes,” writes Weeden. “It can temporarily prevent the satellite from taking a picture, or in some cases might actually physically damage the [image sensor] if it’s strong enough.”
The advent of software-defined radios has made it far easier and cheaper to carry out jamming and spoofing attacks on satellites.
The leading space powers still have the capability to commit more obvious acts of war in space. Russia recently drew widespread condemnation for testing an anti-satellite missile on one of its own defunct surveillance satellites, and the US, China, and India have all carried out similar tests. One disincentive to carrying out this kind of attack, however, is that it litters Earth’s orbit with debris that can unintentionally damage other spacecraft, says Grego.
There are ways to physically attack a satellite without causing so much collateral damage, though. So-called “co-orbital anti-satellite weapons” are essentially spacecraft that can maneuver close to an adversary's satellite before attacking them with a projectile or clawing at them with a robotic arm to cause damage. They can also be used to snoop on enemy satellites, says Grego, either to intercept signals or try to work out what their mission is.
Attacks on spacecraft are no longer just the preserve of nation states, though, says Frank Schubert, from Airbus Cybersecurity. The advent of software-defined radios, which use a digital processor rather than specialized electronics to modulate radio signals, has made it far easier and cheaper to carry out jamming and spoofing attacks. And in 2019, researchers showed that they could intercept signals from a satellite broadband service and identify users and their browsing activity using just €285 ($322) worth of equipment.
Satellite operators are also subject to constant attacks from hackers, says Schubert. Typically, these are targeted at the ground stations that control and communicate with satellites, but if successful could be used to do everything from steal data to interfere with the operation of spacecraft. There’s also a growing threat from “hybrid attacks”, says Schubert. For instance, attackers might jam a signal from a satellite and quickly follow this up with a well-crafted phishing email to the operator claiming to be able to resolve the problem.
Countering these threats involves following the same kinds of cybersecurity best practices as any other industry. But given the complex supply chains involved in building spacecraft it’s also critical to ensure the provenance of every part of the system. “Security by design is key,” says Schubert. “I can have the best defense for cyber-attacks, but if the bug is already inside my system that went in through the supply chain, I'm in big trouble.”
Another important way to counteract many of these threats is to build resiliency into space systems, says Grego. This could involve investing in the ability to rapidly launch replacement satellites or swapping a single large, expensive satellite with a network of smaller ones that can still operate if one or two are knocked out.