********** BUSINESS **********
Understanding the Silicon Valley Bank Run
Fri, 17 Mar 2023 10:00:18 +0000
Damon Silvers, deputy chair of the Congressional Oversight Panel for the 2008 bank bailout, explains how deregulation paved the way for SVB’s collapse.
The post Understanding the Silicon Valley Bank Run appeared first on The Intercept.
Ozan Kose/Agence France-Presse/Getty Images

Recent stress in the banking sector isn’t the only sign that something is breaking or about to crack in the wake of the Federal Reserve’s yearlong rate-hike campaign.

On Tuesday, Goldman Sachs GS warned that it’s seeing signs of a broad-based deterioration in market functioning and a “material uptick” in most of the indicators it follows to gauge market impairment across assets. “A closer look reveals much of the stress to be stemming from the Treasury and funding markets,” said Ravi Raj, a rates strategist and quantitative economist at the firm.

Out of almost three dozen indicators, seven were in a red zone as of Monday and in the highest percentile of the firm’s “Temperature Check” chart, meaning they show the most signs of impairment or stress relative to the past five years, Raj said in an email to MarketWatch. Those red-hot readings include the UST Bloomberg Liquidity Index and the 10-year security’s market depth, which refers to the market’s ability to absorb large orders without significantly impacting the underlying price.

Source: Goldman Sachs Global Investment Research

Tuesday’s warning from Goldman Sachs came on a day in which investors seemed willing to look past the banking sector’s problems, for now. As financial markets turned to Wednesday’s policy update from the Fed, Treasury yields shot higher across the board, sending the 2-year rate BX:TMUBMUSD02Y to its biggest one-day jump since June 5, 2009. Meanwhile, U.S. stocks (DJIA, SPX, COMP) finished higher. Last year, traders, academics and other analysts raised concerns that the Treasury market was fragile, potentially even vulnerable to becoming the source of the next financial crisis, should there be large-scale forced selling or some other surprise.
Gold futures fell by more than 2% on Tuesday, marking their largest single session loss in about six weeks, according to FactSet data. Prices for the metal touched an intraday high above $2,000 on Monday as “concerns over the banking system boosted the appetite for safe-haven assets,” said Lukman Otunuga, manager, market analysis at FXTM. Fears of a full-blown crisis later eased following the historic takeover of Credit Suisse and “this blunted appetite for gold.” Even so, the precious metal is “set to glow amid the fragile sentiment with expectations around a less aggressive Federal Reserve limiting downside losses,” he said. The Federal Open Market Committee will announce its monetary policy decision on Wednesday afternoon. Gold for April delivery GCJ23 fell $41.70, or 2.1%, to settle at $1,941.10 an ounce on Comex.
Dead, and in a jacket and tie. That’s how he was on 1 December 1948, when two men found him slumped against a retaining wall on the beach at Somerton, a suburb of Adelaide, Australia.
The Somerton Man’s body was found on a beach in 1948. Nobody came forward to identify him.
JAMES DURHAM
Police distributed a photograph, but no one came forward to claim the body. Eyewitnesses reported having seen the man, whom the newspapers dubbed the Somerton Man and who appeared to be in his early 40s, lying on the beach earlier, perhaps at one point moving his arm, and they had concluded that he was drunk. The place of death led the police to treat the case as a suicide, despite the apparent lack of a suicide note. The presence of blood in the stomach, a common consequence of poisoning, was noted at the autopsy. Several chemical assays failed to identify any poison; granted, the methods of the day were not up to the task.
There was speculation of foul play. Perhaps the man was a spy who had come in from the cold; 1948 was the year after the Cold War got its name. This line of thought was strengthened, a few months later, by codelike writings in a book that came to be associated with the case.
These speculations aside, the idea that a person could simply die in plain view and without friends or family was shocking. This was a man with an athletic build, wearing a nice suit, and showing no signs of having suffered violence. The problem nagged many people over the years, and eventually it took hold of me. In the late 2000s, I began working on the Somerton Man mystery, devoting perhaps 10 hours a week to the research over the course of about 15 years.
Throughout my career, I have always been interested in cracking mysteries. My students and I used computational linguistics to identify which of the three authors of The Federalist Papers—Alexander Hamilton, James Madison, and John Jay—was responsible for any given essay. We tried using the same method to confirm authorship of Biblical passages. More recently, we’ve been throwing some natural-language processing techniques into an effort to decode the Voynich Manuscript, an early 15th-century document written in an unknown language and an unknown script. These other projects yield to one or another key method of inquiry. The Somerton Man problem posed a broader challenge.
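The article doesn’t spell out the method, but the classic approach to the Federalist Papers problem, due to Mosteller and Wallace, compares how often each candidate author uses common function words. Here is a minimal Python sketch of that idea; the word list, the simple nearest-profile classifier, and the variable names are illustrative assumptions, not the team’s actual code.

```python
from collections import Counter
import math

# A few function words of the kind used in classic authorship studies.
FUNCTION_WORDS = ["upon", "while", "whilst", "by", "of", "to", "also", "although"]

def profile(text):
    """Frequency of each function word per 1,000 tokens."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    n = max(len(tokens), 1)
    return [1000.0 * counts[w] / n for w in FUNCTION_WORDS]

def distance(p, q):
    """Euclidean distance between two stylistic profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def attribute(disputed_text, corpora):
    """Assign the disputed text to the author whose known writings
    have the closest function-word profile."""
    d = profile(disputed_text)
    return min(corpora, key=lambda author: distance(d, profile(corpora[author])))

# Hypothetical usage; each value would be a long string of known essays:
# attribute(disputed_essay, {"Hamilton": ham, "Madison": mad, "Jay": jay})
```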
My one great advantage has been my access to students and to scientific instruments at the University of Adelaide, where I am a professor of electrical and electronic engineering. In 2009, I established a working group at the university’s Center for Biomedical Engineering.
One question surrounding the Somerton Man had already been solved by sleuths of a more literary bent. In 1949, a pathologist had found a bit of paper concealed in one of the dead man’s pockets, and on it were printed the words Tamám Shud, the Persian for “finished.” The phrase appears at the end of Edward FitzGerald’s translation of the Rubáiyát of Omar Khayyám, a poem that remains popular to this day.
The police asked the public for copies of the book in which the final page had been torn out. A man found such a book in his car, where apparently it had been thrown in through an open window. The book proved a match.
The back cover of the book also included scribbled letters, which were at first thought to constitute an encrypted message. But statistical tests carried out by my team showed that it was more likely a string of the initial letters of words. Through computational techniques, we eliminated all of the cryptographic codes known in the 1940s, leaving as a remaining possibility a one-time pad, in which each letter is based on a secret source text. We ransacked the poem itself and other texts, including the Bible and the Talmud, but we never identified a plausible source text. It could have been a pedestrian aide-mémoire—to list the names of horses in an upcoming race, for example. Moreover, our research indicates that it doesn’t have the structural sophistication of a code. The Persian phrase could have been the man’s farewell to the world: his suicide note.
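To see how such a statistical test might work, consider a sketch that asks whether the inscription’s letters look more like the first letters of English words than like ordinary English text. The transcription below is one commonly published reading of the disputed inscription, and the tiny built-in corpus stands in for the large reference corpora a real study would use; both are illustrative assumptions.

```python
from collections import Counter

# One published transcription of the inscription; several characters are
# disputed, so treat this purely as illustrative input.
INSCRIPTION = "WRGOABABDWTBIMPANETPMLIABOAIAQCITTMTSAMSTGAB"

# A tiny stand-in corpus; the real tests would draw on large English corpora.
CORPUS = """it is a truth universally acknowledged that a single man in
possession of a good fortune must be in want of a wife""".split()

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def freq_model(letters):
    """Smoothed relative frequency of A-Z in a sequence of letters."""
    counts = Counter(letters)
    total = sum(counts.values()) + len(ALPHABET)  # add-one smoothing
    return {c: (counts[c] + 1) / total for c in ALPHABET}

def chi_squared(text, model):
    """Goodness of fit of the text's letters to a model (lower = closer)."""
    counts, n = Counter(text), len(text)
    return sum((counts[c] - model[c] * n) ** 2 / (model[c] * n) for c in model)

initials = freq_model([w[0].upper() for w in CORPUS])          # first letters
ordinary = freq_model([c for w in CORPUS for c in w.upper()])  # all letters

print("fit to word-initial letters:", chi_squared(INSCRIPTION, initials))
print("fit to ordinary English text:", chi_squared(INSCRIPTION, ordinary))
# A markedly better fit to the initials model supports the hypothesis that
# the string is a list of first letters rather than plain text or ciphertext.
```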
Also scribbled on the back cover was a telephone number that led to one Jo Thomson, a woman who lived merely a five-minute walk from where the Somerton Man had been found. Interviewers then and decades later reported that she had seemed evasive; after her death, some of her relatives and friends said they speculated that she must have known the dead man. I discovered a possible clue: Thomson’s son was missing his lateral incisors, the two teeth that normally flank the central incisors. This condition, found in a very small percentage of the population, is often congenital; oddly, the Somerton Man had it, too. Were they related?
And yet the attempt to link Thomson to the body petered out. Early in the investigation, she told the police that she had given a copy of the Rubáiyát to a lieutenant in the Australian Army whom she had known during the war, and indeed, that man turned out to own a copy. But Thomson hadn’t seen him since 1945, he was very much alive, and the last page of his copy was still intact. A trail to nowhere, one of many that were to follow.
We engineers in the 21st century had several other items to examine. First was a plaster death mask that had been made six months after the man died, during which time the face had flattened. We tried several methods to reconstruct its original appearance: In 2013 we commissioned a picture by Greg O’Leary, a professional portrait artist. Then, in 2020, we approached Daniel Voshart, who designs graphics for Star Trek movies. He used a suite of professional AI tools to create a lifelike reconstruction of the Somerton Man. Later, we obtained another reconstruction by Michael Streed, a U.S. police sketch artist. We published these images, together with many isolated facts about the body, the teeth, and the clothing, in the hope of garnering insights from the public. No luck.
As the death mask had been molded directly off the Somerton Man’s head, neck, and upper body, some of the man’s hair was embedded in the plaster of Paris—a potential DNA gold mine. At the University of Adelaide, I had the assistance of a hair forensics expert, Janette Edson. In 2012, with the permission of the police, Janette used a magnifying glass to find where several hairs came together in a cluster. She was then able to pull out single strands without breaking them or damaging the plaster matrix. She thus secured the soft, spongy hair roots as well as several lengths of hair shaft. The received wisdom of forensic science at the time held that the hair shaft would be useless for DNA analysis without the hair root.
Janette performed our first DNA analysis in 2015 and, from the hair root, was able to place the sample within a maternal genetic lineage, or haplotype, known as “H,” which is widely spread around Europe. (Such maternally inherited DNA comes not from the nucleus of a cell but from the mitochondria.) The test therefore told us little we hadn’t already known. The concentration of DNA was far too low for the technology of the time to piece together the sequencing we needed.
Fortunately, sequencing tools continued to improve. In 2018, Guanchen Li and Jeremy Austin, also at the University of Adelaide, obtained the entire mitochondrial genome from hair-root material and narrowed down the maternal haplotype to H4a1a1a.
However, to identify the Somerton Man using DNA databases, we needed to go to autosomal DNA—the kind that is inherited from both parents. There are more than 20 such databases, 23andMe and Ancestry being the largest. These databases require sequences of from 500,000 to 2,000,000 single nucleotide polymorphisms, or SNPs (pronounced “snips”). A cell carries only two copies of each autosomal sequence but hundreds to thousands of copies of the mitochondrial genome, and so Li and Austin were able to obtain only 50,000 SNPs, of which 16,000 were usable. This was a breakthrough, but it still wasn’t good enough to work on a database.
In 2022, at the suggestion of Colleen Fitzpatrick, a former NASA employee who had trained as a nuclear physicist but then became a forensic genetics expert, I sent a hair sample to Astrea Forensics, a DNA lab in the United States. This was our best hair-root sample, one that I had nervously guarded for 10 years. The result from Astrea came back—and it was a big flop.
Seemingly out of options, we tried a desperate move. We asked Astrea to analyze a 5-centimeter-long shaft of hair that had no root at all. Bang! The company retrieved 2 million SNPs. The identity of the Somerton Man was now within our reach.
So why did the rootless shaft work in our case?
The DNA analysis that police use for standard crime-solving relies on only 20 to 25 short tandem repeats (STRs) of DNA. That’s fine for police, who mostly do one-to-one matches to determine whether the DNA recovered at a crime scene matches a suspect’s DNA.
But finding distant cousins of the Somerton Man on genealogical databases constitutes a one-to-many search, and for that you typically need around 500,000 markers. For these genealogical searches, SNPs are used because they carry information on ethnicity and ancestry generally. Note that a SNP is a single-base variant, so it can be read from short DNA fragments of around 50 to 150 base pairs, whereas typical STRs are longer, spanning 80 to 450 base pairs. The DNA in a hair shaft is mostly fragmented, so it’s of little use when you’re seeking longer STR segments, but it’s a great source of SNPs. This is why crime forensics traditionally focused on the root and ignored the shaft, although this practice is now changing, if slowly.
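The contrast between the two kinds of search can be made concrete with a minimal sketch. The data structures and records here are made up, and real genealogy matching looks for long shared DNA segments rather than raw genotype agreement, so treat this only as an illustration of one-to-one versus one-to-many comparison:

```python
def str_match(profile_a, profile_b):
    """One-to-one forensic comparison: a standard STR profile either
    matches at every locus or it doesn't."""
    return all(profile_a[locus] == profile_b[locus] for locus in profile_a)

def snp_similarity(sample, candidate):
    """One-to-many screening: score agreement over whichever of the
    hundreds of thousands of markers both records actually contain."""
    shared = [m for m in sample if m in candidate]
    if not shared:
        return 0.0
    return sum(sample[m] == candidate[m] for m in shared) / len(shared)

def rank_candidates(sample, database):
    """Order database records by similarity; top hits are candidate cousins."""
    return sorted(database, key=lambda rec: snp_similarity(sample, rec["snps"]),
                  reverse=True)

# Hypothetical records: marker id -> genotype coded 0, 1, or 2.
sample = {"rs1": 0, "rs2": 2, "rs3": 1}
database = [{"name": "A", "snps": {"rs1": 0, "rs2": 2}},
            {"name": "B", "snps": {"rs1": 1, "rs3": 0}}]
print(rank_candidates(sample, database)[0]["name"])  # -> "A"
```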
Another reason the shaft was such a trove of DNA is that keratin, its principal component, is a very tough protein, and it had protected the DNA fragments lodged within it. The 74-year-old soft, spongy hair root, on the other hand, had not protected its DNA to the same extent. We set a world record: the oldest piece of hair shaft from which a person has been identified through forensic genealogy. Several police departments in the United States now use hair shafts to retrieve DNA, and I am sure many in other countries will follow our example.
Libraries of SNPs can be used to untangle the branching lines of descent in a family tree. We uploaded our 2 million SNPs to GEDmatch Pro, an online genealogical database located in Lake Worth, Fla. (and recently acquired by Qiagen, a biotech company based in the Netherlands). The closest match was a rather distant relative based in Victoria, Australia. Together with Colleen Fitzpatrick, I built out a family tree containing more than 4,000 people. On that tree we found a Charles Webb, son of a baker, born in 1905 in Melbourne, with no date of death recorded.
Charles never had children of his own, but he had five siblings, and I was able to locate some of their living descendants. Their DNA was a dead match. I also found a descendant of one of his maternal aunts, who agreed to undergo a test. When a positive result came through on 22 July 2022, we had all the evidence we needed. This was our champagne moment.
In late 2021, police in South Australia ordered an exhumation of the Somerton Man’s body for a thorough analysis of its DNA. At the time we prepared this article, they had not yet confirmed our result, but they did announce that they were “cautiously optimistic” about it.
All at once, we were able to fill in a lot of blank spaces. Webb was born on 16 November 1905, in Footscray, a suburb of Melbourne, and educated at a technical college, now Swinburne University of Technology. He later worked as an electrical technician at a factory that made electric hand drills. Our DNA tests confirmed he was not related to Thomson’s son, despite the coincidence of their missing lateral incisors.
We discovered that Webb had married a woman named Dorothy Robinson in 1941 and had separated from her in 1947. She filed for divorce on grounds of desertion, and the divorce lawyers visited his former place of work, confirming that he had quit around 1947 or 1948. But they could not determine what happened to him after that. The divorce finally came through in 1952; in those days, divorces in Australia were granted only five years after separation.
At the time of Webb’s death his family had become quite fragmented. His parents were dead, a brother and a nephew had died in the war, and his eldest brother was ill. One of his sisters died in 1955 and left him money in her will, mistakenly thinking he was still alive and living in another state. The lawyers administering the will were unable to locate Charles.
We got more than DNA from the hair: We also vaporized a strand of hair by scanning a laser along its length, a technique known as laser ablation. By performing mass spectrometry on the vapor, we were able to track Webb’s varying exposure to lead. A month before Webb’s death, his lead level was high, perhaps because he had been working with the metal, maybe soldering with it. Over the next month’s worth of hair growth, the lead concentration declined; it reached its lowest level at his death. This might be a sign that he had moved.
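The time axis in such an analysis comes from hair growth itself. Assuming the textbook average growth rate of roughly 1 centimeter per month (an assumption; individual rates vary), each point along the shaft maps back to a calendar date, as in this sketch:

```python
from datetime import date, timedelta

GROWTH_CM_PER_DAY = 1.0 / 30.0  # assumed average: about 1 cm of growth per month

def position_to_date(cm_from_root, date_of_death):
    """Map a point along the hair shaft to the approximate date it grew
    (the root end is the most recent growth)."""
    days_before_death = cm_from_root / GROWTH_CM_PER_DAY
    return date_of_death - timedelta(days=days_before_death)

# A 5 cm shaft spans roughly the last five months of life:
for cm in (0.0, 1.0, 3.0, 5.0):
    print(f"{cm} cm from root ->", position_to_date(cm, date(1948, 12, 1)))
```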
With a trove of photographs from family albums and other sources, we were able to compare the face of the young Webb with the artists’ reconstructions we had commissioned in 2013 and 2021 and the AI reconstruction we had commissioned in 2020. Interestingly, the AI reconstruction had best captured his likeness.
A group photograph, taken in 1921, of the Swinburne College football team, included a young Webb. Clues found in newspapers show that he continued to participate in various sports, which would explain the athletic condition of his body.
What’s interesting about solving such a case is how it relies on concepts that may seem counterintuitive to forensic biologists but are quite straightforward to an electronics engineer. For example, when dealing with a standard crime scene that uses only two dozen STR markers, one observes very strict protocols to ensure the integrity of the full set of STRs. When dealing with a case with 2 million SNPs, by contrast, things are more relaxed. Many of the old-school STR protocols don’t apply when you have access to a lot of information. Many SNPs can drop out, some can even be “noise,” the signal may not be clean—and yet you can still crack the case!
Engineers understand this concept well. It’s what we call graceful degradation—when, say, a few flipped bits on a digital video signal are hardly noticed. The same is true for a large SNP file.
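A quick simulation makes the point. In this hedged sketch, genotypes are coded 0/1/2 and drawn uniformly at random (real allele frequencies are not uniform), a “relative” shares 75 percent of the true genotypes, and the recovered sample suffers 30 percent marker dropout plus 2 percent read errors; the relative still stands out clearly from an unrelated profile:

```python
import random

random.seed(0)
N = 200_000  # marker count of the same order as a genealogy upload

truth = [random.randint(0, 2) for _ in range(N)]
relative = [g if random.random() < 0.75 else random.randint(0, 2) for g in truth]
stranger = [random.randint(0, 2) for _ in range(N)]

def degraded(genotypes, dropout=0.30, error=0.02):
    """Simulate a poor sample: drop markers entirely, corrupt a few others."""
    out = {}
    for i, g in enumerate(genotypes):
        if random.random() < dropout:
            continue  # marker lost, like a dropped bit
        out[i] = random.randint(0, 2) if random.random() < error else g
    return out

sample = degraded(truth)

def similarity(sample, reference):
    """Fraction of surviving markers that agree with a reference profile."""
    return sum(sample[i] == reference[i] for i in sample) / len(sample)

print("vs relative:", similarity(sample, relative))   # stays clearly higher...
print("vs stranger:", similarity(sample, stranger))   # ...than the unrelated baseline
```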
And so, when Astrea retrieved the 2 million SNPs, the company didn’t rely on the traditional framework for DNA-sequencing reads. It used a completely different mathematical framework, called imputation. The concept of imputation is not yet fully appreciated by forensics experts who have a biological background. However, for an electronics engineer, the concept is similar to error correction: We infer and “impute” bits of information that have dropped out of a received digital signal. Such an approach is not possible with a few STRs, but when handling over a million SNPs, it’s a different ball game.
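Real imputation tools use statistical models over large reference panels of haplotypes, but the core inference can be sketched in a few lines: find the reference haplotype that best agrees with the markers you did manage to read, and borrow its alleles for the gaps. The toy panel and 0/1 allele coding here are illustrative assumptions:

```python
# Toy reference panel: each row is a known haplotype over ten markers,
# alleles coded 0/1. Real panels hold thousands of haplotypes.
PANEL = [
    [0, 1, 1, 0, 0, 1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1, 1, 0, 1],
]

def impute(sample):
    """Fill missing genotypes (None) by copying from the panel haplotype
    that best agrees with the markers that were actually read."""
    observed = [i for i, g in enumerate(sample) if g is not None]
    best = max(PANEL, key=lambda hap: sum(hap[i] == sample[i] for i in observed))
    return [g if g is not None else best[i] for i, g in enumerate(sample)]

fragmented = [0, None, 1, None, 0, 1, None, 1, None, 0]
print(impute(fragmented))  # gaps filled in from the closest reference haplotype
```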
Much of the work on identifying Charles Webb from his genealogy had to be done manually because there are simply no automated tools for the task. As an electronics engineer, I now see possible ways to make tools that would speed up the process. One such tool my team has been working on, together with Colleen Fitzpatrick, is software that can input an entire family tree and represent all of the birth locations as colored dots on Google Earth. This helps to visualize geolocation when dealing with a large and complex family.
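As a sketch of what such a tool might emit, the snippet below writes a KML file that Google Earth can open, with one colored placemark per birth record. The names, coordinates, and branch labels are hypothetical stand-ins, not real case data:

```python
# Hypothetical birth records: (name, latitude, longitude, family branch).
PEOPLE = [
    ("Person A", -37.80, 144.90, "paternal"),
    ("Person B", -33.87, 151.21, "maternal"),
]
BRANCH_COLORS = {"paternal": "ff0000ff", "maternal": "ff00ff00"}  # KML aabbggrr

def family_tree_kml(people):
    """Emit a KML document with one colored placemark per birth location."""
    placemarks = []
    for name, lat, lon, branch in people:
        placemarks.append(
            f"<Placemark><name>{name}</name>"
            f"<Style><IconStyle><color>{BRANCH_COLORS[branch]}</color></IconStyle></Style>"
            f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
        )
    return ("<?xml version='1.0' encoding='UTF-8'?>"
            "<kml xmlns='http://www.opengis.net/kml/2.2'><Document>"
            + "".join(placemarks) + "</Document></kml>")

with open("family_tree.kml", "w") as f:  # open the result in Google Earth
    f.write(family_tree_kml(PEOPLE))
```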
The Somerton Man case still has its mysteries. We cannot yet determine where Webb lived in his final weeks or what he was doing. Although the literary clue he left in his pocket was probably an elliptical suicide note, we cannot confirm the exact cause of death. There is still room for research; there is much we do not know.
This article appears in the April 2023 print issue as “Finding Somerton Man.”
Success of AI-powered rivals ChatGPT and Bing Chat has forced its hand, but release brings risks for tech giant
Can Google save its golden goose or will it simply kill it trying? That’s the question that lurks behind the launch of the company’s Bard chatbot, hurriedly announced after the overnight success of ChatGPT in early 2023.
With Bard, Google has to walk a tightrope: offer users an experience that can compete with the AI-powered Bing Chat and ChatGPT without cannibalising its enormously profitable search business in the process.
Moody’s Investors Service on Tuesday revised its outlook on Swiss bank UBS AG (UBS; CH:UBSG) to negative from stable, after a similar move by S&P Global Ratings on Monday. UBS agreed to take over Credit Suisse in a $3.25 billion deal announced on Sunday and brokered by the Swiss government and central bank. The tie-up “poses significant financial, cultural and franchise related integration challenges for UBSG, including: the need to retain key CSG personnel while the transaction is underway; the need to minimize the loss of overlapping clients in its Swiss banking and wealth management businesses; and the need to unify the cultures of two somewhat different organizations while ensuring that overall risk appetite and controls are both enhanced and or maintained at levels defined by UBSG,” said a team at Moody’s led by Alessandro Roccati, senior vice president, Financial Institutions Group. He added that “creditor protection at UBSG is expected to remain strong, supported by UBSG’s robust capitalization, strong global wealth management franchise and leading position in Swiss banking.” Shares of UBS rose 3.5% in Europe and U.S.-listed shares were 3% higher.
More than 100 people affected by the wave of layoffs at tech companies describe how the enchantment turned into disappointment.
The post Layoffs at Meta, Twitter, Google, XP and Other Tech Companies Involve “Subtle Threats,” Cuts During Leave, and Smaller Bonuses for Brazilians appeared first on The Intercept.
The S&P 500 on Tuesday posted its highest close since the collapse of Silicon Valley Bank earlier this month, which sent shockwaves through financial markets and raised concerns about the stability of the U.S. banking system. The S&P 500 index SPX closed up about 51 points, or 1.3%, ending near 4,003, according to preliminary data from FactSet. That was its highest close since March 6, four days before the failure of Silicon Valley Bank, the biggest bank collapse since the 2008 global financial crisis. The Dow Jones Industrial Average DJIA rose 1% Tuesday, while the Nasdaq Composite Index COMP swept to a 1.6% gain. Banks and companies with heavy exposure to rate-sensitive assets, including property loans, have been under pressure since Silicon Valley Bank’s implosion. It drew attention to some $600 billion in paper losses at banks from their holdings of “safe” but low-coupon securities that have fallen in value in the year since the Federal Reserve began rapidly increasing interest rates to combat high inflation. Those older bonds end up worth less when investors have access to new securities with higher yields and a similarly low-risk credit profile. The failure of several regional banks in March, plus the sale of Credit Suisse CS to rival bank UBS UBS over the weekend, has reawakened fears of potentially broader problems in the banking system as central banks have raised rates and ended an era of easy money. Even so, stocks rallied ahead of the conclusion of the Federal Reserve’s two-day policy meeting on Wednesday, when it is expected to raise its policy rate by another 25 basis points. See: The Fed will either pause or hike interest rates by 25 basis points. What are the pros and cons of each approach?
They’re all doing great, thanks for asking.
The post The Architects of the Iraq War: Where Are They Now? appeared first on The Intercept.
Grand jury investigating ex-president over hush money payment to adult film star appears poised to complete its work soon
Law enforcement officials in New York on Tuesday continued preparing for possible unrest on the streets of Manhattan as a grand jury investigating Donald Trump over a hush money payment to the adult film-maker and star Stormy Daniels appeared poised to complete its work by criminally indicting the former president.
Barriers were brought to the area around the Manhattan criminal courthouse in the lower part of the island. Uniformed police were out in force. So were reporters and protesters.
The focus must be on retaining existing staff and reducing private sector work, writes consultant haematologist John Hanley. Plus letters from Robin Davies, Jo Pike and Orest Mulka
A senior NHS official once told me that “workforce planning in the NHS is like astrology, but nowhere near as reliable”. Now that there is an emerging political consensus that the long-term solution to the NHS workforce crisis will require a better approach to planning, I am more optimistic about the future.
In the short term, the focus has to be on the retention of existing staff. The budget changes on pension rules may help, but the drain of NHS staff to other countries and the private sector is likely to continue unless action is taken. Your article (NHS doctors offered up to £5,000 to recruit colleagues for private hospitals, 17 March) highlights the threat from the private sector, which has long been subsidised by access to staff trained by the NHS at taxpayers’ expense.
Intel Corp. INTC announced Tuesday that Stuart Pann will take over as general manager of Intel Foundry Services, a key part of Chief Executive Patrick Gelsinger’s plans to reinvigorate the semiconductor giant. Gelsinger hopes to build out Intel’s chip-manufacturing capabilities over several years and make semiconductors for other companies. In charge of that effort will be Pann, who started his career with Intel before spending six years at HP Inc. HPQ and returning after Gelsinger came on board in 2021. “With deep expertise in capital and capacity strategies, supply chain management, and sales and operations planning across internal and external manufacturing, Stuart is an ideal leader to accelerate this momentum and drive long-term growth for IFS,” Gelsinger said in a statement. The former president of the foundry business, Randhir Thakur, will depart the company at the end of March, and Chief Architect Raja Koduri is also leaving, Gelsinger announced on Twitter. Pann will report directly to Gelsinger, Intel said. Intel shares were down more than 3% in Tuesday afternoon trading, the largest decline of the day for a Dow Jones Industrial Average DJIA component.
Head of Resolution Foundation says other measures to retain NHS staff could have offered better value for money
Jeremy Hunt’s pensions tax break for the highest 1% of savers in Britain stands to benefit almost as many bankers as doctors, an economist has said, as the government insisted the budget giveaway was designed to cut NHS waiting lists.
On a day of renewed pressure over the £1bn giveaway, Rishi Sunak argued that scrapping the tax-free lifetime allowance on pensions would encourage more doctors to stay in employment rather than taking retirement.
Nvidia Corp. NVDA said Tuesday it was launching DGX Cloud, a service where businesses can get instant access to artificial intelligence models. At Nvidia’s annual GTC developer conference, CEO Jensen Huang said the service allows a customer “to access its own AI supercomputer” over a web browser at $37,000 per instance per month. “We are at the iPhone moment of AI,” Huang said during the presentation. “Startups are racing to build disruptive products and business models, and incumbents are looking to respond.” Nvidia also launched its Foundations model-making service that can handle language, images, video and 3D, with its NeMo and Picasso services.
Shares of FleetCor Technologies Inc. FLT rallied 5.4% in morning trading Tuesday, adding to the 6.4% leap in the previous session, after Raymond James upgraded the business-payments company. Analyst John Davis raised his rating to outperform from market perform, saying he believes activist investor D.E. Shaw’s involvement and the strategic review of the business “skews the risk/reward favorably,” as his analysis suggests separating the company’s corporate payments business would create “significant shareholder value.” Davis also believes the eventual resolution of the Federal Trade Commission suit and an expected acceleration in earnings growth in 2024 could provide further boosts to the stock. FleetCor shares have run up 15.4% over the past three months, while the S&P 500 SPX has gained 2.9%.
U.S.-based cannabis company Acreage Holdings Inc. (ACRDF, ACRHF) said Tuesday it has won final approval from the Supreme Court of British Columbia for its previously announced floating share arrangement, part of an acquisition of the company by Canopy Growth Corp. (CGC; CA:WEED). Last week, Acreage shareholders approved the deal at a special meeting. Canopy, Canopy USA and Acreage amended their floating share arrangement to extend the exercise outside date to May 31 from March 31, or such later date as may be agreed to in writing. Canopy Growth initially announced the transaction in October to exercise options it had to acquire Acreage, Wana Brands and Jetty Extracts to fast-track its U.S. business. Canada-based Canopy Growth is backed by spirits giant Constellation Brands Inc. STZ. Acreage stock is up 1% in 2023, compared with a 20% drop for the Global X Cannabis ETF POTX and a 12.9% loss for the AdvisorShares Pure U.S. Cannabis ETF MSOS.
New York Community Bancorp Inc. NYCB was upgraded to buy by DA Davidson on Tuesday, with analysts saying the acquisition of certain assets and liabilities of Signature Bridge Bank would accelerate its transformation into a more commercial bank. The deal will lower the loan-to-deposit ratio to under 100% and increase commercial and industrial loans, analyst Peter J. Winter wrote in a note to clients. “The deal also provides significant funding advantages, by lowering deposit costs with an increase in noninterest bearing deposits and significant cash ($25B), which will be used to pay down some high-cost borrowings,” Winter wrote. It’s also expected to boost per-share earnings by 20% in 2024 and to immediately lift tangible book value. The bank said Monday it would acquire $12.9 billion of C&I loans, bumping its overall portion of such loans to about 19% from 7% previously. New York Community Bank is also acquiring about $34 billion in deposits, about 54% of which are uninsured. “Although there is a risk of outflow of deposits from Signature, NYCB made a valid point that Signature has already realized a significant outflow of deposits; which were $89B at 12/31 and NYCB is not taking any crypto or venture capital related deposits; which have been very volatile,” the analyst wrote. The bank is hanging on to all of Signature’s branches, private banking and deposit-gathering teams, “so a high level of the remaining deposits are operational business accounts and no disruption to the client relationship,” he said. The stock was up 3.6% premarket.
This morning at the ProMat conference in Chicago, Agility Robotics is introducing the latest iteration of Digit, its bipedal multipurpose robot designed for near-term commercial success in warehouse and logistics operations. This version of Digit adds a head (for human-robot interaction) along with manipulators intended for the very first task that Digit will be performing, one that Agility hopes will be its entry point to a sustainable and profitable business bringing bipedal robots into the workplace.
So that’s a bit of background, and if you want more, you should absolutely read the article that Agility CTO and cofounder Jonathan Hurst wrote for us in 2019 talking about the origins of this bipedal (not humanoid, mind you) robot. And now that you’ve finished reading that, here’s a better look at the newest, fanciest version of Digit:
The most visually apparent change here is of course Digit’s head, which either makes the robot look much more normal or a little strange depending on how much success you’ve had imagining the neck-mounted lidar on the previous version as a head. The design of Digit’s head is carefully done—Digit is (again) a biped rather than a humanoid, in the sense that the head is not really intended to evoke a humanlike head, which is why it’s decidedly sideways in a way that human heads generally aren’t. But at the same time, the purpose of the head is to provide a human-robot interaction (HRI) focal point so that humans can naturally understand what Digit is doing. There’s still work to be done here; we’re told that this isn’t the final version, but it’s at the point where Agility can start working with customers to figure out what Digit needs to be using its head for in practice.
Digit’s hands are designed primarily for moving totes.
AGILITY
Digit’s new hands are designed to do one thing: move totes, which are the plastic bins that control the flow of goods in a warehouse. They’re not especially humanlike, and they’re not fancy, but they’re exactly what Digit needs to do the job that it needs to do. This is that job:
Yup, that’s it: moving totes from some shelves to a conveyor belt (and eventually, putting totes back on those shelves). It’s not fancy or complicated and for a human, it’s mind-numbingly simple. It’s basically an automated process, except in a lot of warehouses, humans are doing the work that robots like Digit could be doing instead. Or, in many cases, humans aren’t doing this work, because nobody actually wants these jobs and companies are having a lot of trouble filling these positions anyway.
For a robot, a task like this is not easy at all, especially when you throw legs into the mix. But you can see why the legs are necessary: they give Digit the same workspace as a human within approximately the same footprint as a human, which is a requirement if the goal is to take over from humans without requiring time-consuming and costly infrastructure changes. This gives Digit a lot of potential, as Agility points out in today’s press release:
Digit is multipurpose, so it can execute a variety of tasks and adapt to many different workflows; a fleet of Digits will be able to switch between applications depending on current warehouse needs and seasonal shifts. Because Digit is also human-centric, meaning it is the size and shape of a human and is built to work in spaces designed for people, it is easy to deploy into existing warehouse operations and as-built infrastructure without costly retrofitting.
We should point out that while Digit is multipurpose in the sense that it can execute a variety of tasks, at the moment, it’s just doing this one thing. And while this one thing certainly has value, the application is not yet ready for deployment, since there’s a big gap between being able to do a task most of the time (which is where Digit is now) and being able to do a task robustly enough that someone will pay you for it (which is where Digit needs to get to). Agility has some real work to do, but the company is already launching a partner program for Digit’s first commercial customers. And that’s the other thing that has to happen here: At some point Agility has to make a whole bunch of robots, which is a huge challenge by itself. Rather than building a couple of robots at a time for friendly academics, Agility will need to build and deliver and support tens and eventually hundreds or thousands or billions of Digit units. No problem!
Turning a robot from a research project into a platform that can make money by doing useful work has never been easy. And doing this with a robot that’s bipedal and is trying to do the same tasks as human workers has never been done before. It’s increasingly obvious that someone will make it happen at some point, but it’s hard to tell exactly when—if it’s anything like autonomous cars, it’s going to take way, way longer than it seems like it should. But with its partner program and a commitment to start manufacturing robots at scale soon, Agility is imposing an aggressive timeline on itself, with a plan to ship robots to its partners in early 2024, followed by general availability the following year.

The media mogul’s planned nuptials prove yet again that, in age-gap relationships, it’s only ever women who get called cougars or gold diggers
“We’re both looking forward to spending the second half of our lives together,” said 92-year-old Rupert Murdoch of his planned summer nuptials. Ann Lesley Smith, his 66-year-old fiancee, remarked meanwhile, “I speak Rupert’s language. We share the same beliefs. For us both it’s a gift from God.” What manner of god would want Murdoch to live to be 184 is anyone’s guess, but obviously we wish the happy couple all the best.
Just as a thought experiment, though, imagine how Murdoch’s own media empire would take it if a 92-year-old woman announced her fifth engagement and asked the world to join her in looking forward to her next chapter. Obviously, this incorrigible romantic wouldn’t exist, because no woman of 92 would be allowed to occupy the public eye. Her relevance would have started to wane maybe 40 years before; she would have had a brief flash of the spotlight in her 50s, for the purposes of wondering how she had kept her six-pack – pray God she still has one, and isn’t in a “what she looks like now will amaze you” sidebar, for the bad reasons. After that, she would either make a dignified exit from public life, or she would be the Queen.
Zoe Williams is a Guardian columnist
Takeaway delivery firm shifts back to gig-economy model and ditches sick pay and holiday pay
Just Eat is planning to make 1,700 couriers redundant in the UK as the takeaway delivery firm shifts back towards a gig-economy model and ditches guaranteed minimum pay, sick pay and holiday pay.
A further 170 head office staff will also lose their jobs as the firm attempts to cut costs in a highly competitive market.
Steven Knight’s Digbeth Loc. Studios in Birmingham will house ska drama This Town, UB40 and Peaky Blinders film
By order of the Peaky Blinders’ creator Steven Knight, construction has begun on his new multimillion-pound TV and film studio in Birmingham – which is designed to put the city on the media map, create over 700 jobs, add £30m to the local economy and house Knight’s new ska music BBC drama This Town as well the Peaky Blinders film.
Eight years after Knight began working on his idea, building has started on the new studios, which will also provide a home for the band UB40, offer training to help local residents get into TV, and include restaurants, a hotel and bars. Advanced talks are also under way for an outpost of the media industry’s most famous chain of clubs, Soho House.
Report on 3 million people living below the breadline shows welfare payments are ‘totally inadequate’ and action is needed in May budget, Acoss says
The majority of people on the jobseeker and parenting payments are living in poverty while about a third of single parents are also below the breadline, according to a new study.
A report from the University of New South Wales and the Australian Council of Social Service, to be released on Wednesday, provides further insight into the demographics of 3 million people, including 761,000 children, previously identified as living in poverty in the 2019-20 financial year.
Players, pundits and fans complain bitterly that referees are getting worse each season – but is that fair?
Six minutes into referee Darren England’s fourth Premier League match of the season, he found himself with a decision to make. A Fulham midfielder, Nathaniel Chalobah, had made a late challenge and caught a Newcastle player, who fell to the ground with a yelp so loud it cut through the noise of the Geordie away fans. “That’s fucking red,” an old-timer seated in front of me yelled.
It was a moment that could determine the course of the match, and Darren England’s season. Competition among elite domestic referees is fierce. Their performances are meticulously dissected, reviewed and ranked by their bosses at Professional Game Match Officials Ltd (PGMOL), the body that runs officiating in English professional football. Among the 19 referees who work predominantly in the Premier League, the best performers are appointed most often, and they are the ones who get the most sought-after matches, those between the top-six clubs, which officials call “golden games”. If, as senior PGMOL figures like to say, the Premier League officials are the 21st team in the division, then its star players are Anthony Taylor and Michael Oliver, who are appointed to most of the big matches. “Just like Liverpool will always play [Virgil] van Dijk in a big game, we’ll appoint our big hitters,” Martin Atkinson, a former referee who now works as a coach for PGMOL, told me.
The officially sanctioned conspiracy theory that Saddam Hussein was behind 9/11 set a dangerous precedent.
The post Bush’s Iraq War Lies Created a Blueprint for Donald Trump appeared first on The Intercept.
“I think that we need to see what has actually transpired.”
The post Senators Aren’t Ready to Blame Themselves for Silicon Valley Bank Implosion appeared first on The Intercept.
Michael Aron praised facility part-owned by British American Tobacco at ribbon-cutting event in 2019
A UK ambassador took part in the opening ceremony of a Jordanian cigarette factory part-owned by British American Tobacco (BAT) and praised the new facility in a televised interview, in the latest example of British diplomats breaching strict guidelines against mixing with the tobacco industry overseas.
The envoy stood at the ribbon as it was cut and later appeared in promotional material on the tobacco company’s website, but no record of his presence at the event was kept by the British embassy in Amman because the event was not considered a “formal meeting”.
A trove of secret intelligence cables obtained by The Intercept reveals Tehran’s political gains in Iraq since the 2003 invasion.
The post How Iran Won the U.S. War in Iraq appeared first on The Intercept.
With Amoêdo’s departure to salvage his own reputation and the hiring of Leandro Narloch, the party shows it openly wants to embrace the far right.
The post Novo No Longer Needs to Pretend: It Can Now Line Up Its Dress Sneakers With the Combat Boots appeared first on The Intercept.
Most threats are directed at law enforcement and government officials, report says, after ex-president urged supporters to protest
Lindsey Graham is one of Donald Trump’s allies in the Senate, so it was little surprise that he predicted dire consequences if the former president is indicted, CNN reports:
He also criticized Florida governor Ron DeSantis, Trump’s chief rival for the Republican presidential nomination next year, for his comments yesterday about the potential charges. “I don’t know what goes into paying hush money to a porn star to secure silence over some type of alleged affair. I just, I can’t speak to that,” DeSantis said.
Shares of Ingersoll Rand Inc. IR rallied 3.6% in afternoon trading Tuesday, after the diversified industrial company became what many on Wall Street refer to as a “rising star,” as the company’s credit rating was upgraded out of “junk” territory at S&P Global Ratings. The credit rating was raised by one notch to BBB-, which is S&P’s lowest investment-grade rating, from BB+, while the outlook remained positive. “The contribution of recent acquisitions and our expectation for Ingersoll Rand to remain acquisitive should boost revenue growth to the 10% area this year, in our opinion,” S&P said. “Despite recent acquisitions and the initiation of shareholder returns, we expect leverage to remain very low relative to similarly rated peers.” The stock has advanced 5.9% over the past three months, while the S&P 500 SPX has tacked on 3.1%.
Oil prices climbed on Tuesday to post a second straight session of gains. Prices have bounced back sharply off their recent lows as “risk sentiment improved following the coordinated actions of major central banks at the weekend and UBS’s takeover of Credit Suisse,” said Fawad Razaqzada, market analyst at City Index and FOREX.com. Still, oil prices had been trending lower for months, and their “breakdown last week from a multi-week consolidation pattern suggests there may be more downside potential in oil prices,” he said. For now, “news that Russia has decided to keep its oil production at a reduced level through June and calmer market conditions has helped to keep prices in the positive territory.” On the contract’s expiration day, April West Texas Intermediate crude CLJ23 rose $1.69, or 2.5%, to settle at $69.33 a barrel on the New York Mercantile Exchange. May WTI CLK23 settled at $69.67, up $1.85, or 2.7%.
Shares of GameStop Corp. GME ran up 6.0% in afternoon trading Tuesday, and have climbed 12.0% in the four days since they closed at a two-year low, ahead of the consumer electronics retailer’s earnings report, which is due out after the close. The average estimates of analysts surveyed by FactSet call for fiscal fourth-quarter per-share losses to narrow to 13 cents from 47 cents and for sales to fall 3.3% to $2.18 billion. GameStop has reported a wider-than-expected loss in three of the past four quarters, but the stock has gained the day after each of the past four reports, by an average of 8.2%, according to FactSet data. The former “meme” stock, which closed at $15.95 on March 15, its lowest price since February 2020, has slumped 13.1% over the past three months, while the S&P 500 SPX has gained 2.6%.
The one-month T-bill rate dropped almost 30 basis points to below 3.95% after Tuesday’s $12 billion 20-year bond auction, which was described as “fair” by BMO Capital Markets’ Ben Jeffery. The 2-month T-bill rate also fell 10 basis points to 4.38%, though rates further out the Treasury curve remained higher on the day. Tuesday’s sale was the first term auction to take place since the “meltdowns” in the banking sector this month, and the auction went well, according to Jim Vogel of FHN Financial.
Nvidia Corp. NVDA said Tuesday it was launching four new platforms that allow developers to build specialized artificial intelligence models. At Nvidia’s annual GTC developer conference, Chief Executive Jensen Huang introduced the L4 for AI video, the L40 for image generation, the H100 NVL for large language model deployment, and Grace Hopper for recommendation models. “The rise of generative AI is requiring more powerful inference computing platforms,” Huang said. “The number of applications for generative AI is infinite, limited only by human imagination.” Inference is the process by which a neural network makes predictions based on its training. Nvidia said that Alphabet Inc.’s (GOOG, GOOGL) Google Cloud Platform is an early adopter of the L4 and is integrating it into its Vertex AI machine-learning platform, making it a “premium Nvidia AI cloud,” Huang said. Grace Hopper and the H100 NVL will be available in the second half of the year, while the L40 is available now, and the L4 is in a private preview from Google.
Shares of Chubb Ltd. CB climbed 2.6% in midday trading Tuesday, after the insurer disclosed that it had no exposure to any of Credit Suisse AG’s (CS; CH:CSGN) contingent convertible bonds, known as CoCos. As part of the deal in which troubled Credit Suisse gets sold to UBS AG (UBS; CH:UBSG), regulators wrote down the value of Credit Suisse’s U.S. dollar-denominated CoCo debt to zero. Chubb said its disclosure was in response to recently published reports that were incorrect. Prior to Tuesday’s bounce, Chubb’s stock had dropped 8.9% over the previous two weeks, while the SPDR S&P Insurance exchange-traded fund KIE had shed 10.4% and the S&P 500 SPX had lost 2.4%.
The Food and Drug Administration is getting close to a decision on whether to authorize a second round of the updated booster that targets omicron — the dominant variant — for the elderly and others at high risk of severe disease, the Wall Street Journal reported Tuesday, citing people familiar with the matter. A decision could come within weeks, the people told the paper. For now, officials are looking at people aged 65 and older or those with weakened immune systems. Some people in those groups have been asking their doctors for a second booster, although scientists have no data to prove one is needed. As it stands, the first round of the booster has had low uptake. Just 54.3 million people living in the U.S., or 16.4% of the overall population, have had a booster, according to data from the Centers for Disease Control and Prevention.
Shares of First Republic Bank FRC rocketed 43.7% on heavy volume, putting them on track for a record one-day gain, as Treasury Secretary Janet Yellen said the U.S. government was committed to keeping the banking system safe, and amid reports JPMorgan Chase & Co. JPM was working to help the bank. The previous record rally was 27.0%, on March 14, 2023. Trading volume ballooned to 87.8 million shares, already nearly triple the full-day average and enough to make the stock the most actively traded on major U.S. exchanges. Meanwhile, the stock’s price gain of $5.33 means it has recovered only about 49% of Monday’s $10.85, or 47.1%, selloff, which took the stock to a record-low close of $12.18. The stock has plummeted 85.6% year to date, while the SPDR S&P Regional Banking exchange-traded fund KRE has tumbled 22.2% and the S&P 500 SPX has gained 3.7%.
The home-building sector enjoyed a broad rally in morning trading Tuesday, after data showing existing-home sales in February rose a lot more than expected. The iShares U.S. Home Construction exchange-traded fund ITB climbed 1.3% toward a five-week high, with all 48 equity components gaining ground. Among the ETF’s more active components, shares of Home Depot Inc. HD advanced 0.9%, D.R. Horton Inc. DHI rose 0.5%, KB Home KBH tacked on 2.4%, Lennar Corp. LEN rallied 1.3% and PulteGroup Inc. PHM was up 1.1%. The National Association of Realtors said Tuesday that existing-home sales for February leapt 14.5% to an annual rate of 4.58 million, the largest increase since July 2020, enough to reverse 12 months of losses and well above expectations of 4.2 million. The home construction ETF has hiked up 12.0% over the past three months, while the S&P 500 SPX has gained 2.7%.
Moody’s Investors Service has lifted its rating on Tesla Inc.’s TSLA debt to Baa3, the first rung of investment grade. The outlook is stable, the ratings agency said. The new rating “reflects Moody’s expectation that Tesla will remain one of the foremost manufacturers of battery electric vehicles with an expanding global presence and very high profitability,” it said. It also took into account Tesla’s “prudent” financial policy and management’s operational track record, Moody’s said. Liquidity “will remain very good, underpinned by a very sizeable and growing balance of cash and investments, prospects for free cash flow of more than $7 billion, and limited debt maturities in the next two years,” it said. As of Tuesday, Tesla shares have lost 38% in the last 12 months, compared with losses of about 11% for the S&P 500 index SPX.
Shares of Liberty Global PLC LBTYA tacked on 0.9% in morning trading Tuesday, putting them on track for a sixth straight gain, after the broadband, video and mobile communications company announced plans to buy the rest of the shares of Belgium-based cable television services company Telenet Group Holdings N.V. (BE:TNET; TLGHY) that it doesn’t already own. Liberty currently owns 59.2% of Telenet’s outstanding shares, while Telenet’s market capitalization was recently EUR1.64 billion ($1.77 billion). Liberty intends to pay EUR22.00 for each share it doesn’t already own, which Liberty said represents a 59% premium to the March 15 closing price. Telenet’s board of directors voted unanimously to support Liberty’s buyout plans. “We believe an offer of EUR 22.00 per share provides a good opportunity for Telenet shareholders to monetize their investment at an attractive premium,” Liberty Chief Executive Mike Fries said. Liberty’s stock, which is headed for its longest win streak since a six-day stretch that ended July 22, 2022, has gained 2.5% year to date, while the S&P 500 SPX has advanced 4.0%.
U.S. stocks opened higher on Tuesday with the Dow climbing 300 points immediately after the opening bell as fears about banking-sector instability eased while investors awaited the Federal Reserve’s rate-hike decision, due out Wednesday afternoon in New York. The S&P 500 SPX gained 35 points, or 0.9%, to 3,986. The Dow Jones Industrial Average DJIA rose by 305 points, or 1%, to 32,550. The Nasdaq Composite COMP increased by 99 points, or 0.9%, to 11,775.
The 2-year Treasury yield jumped 23 basis points to 4.15% in New York morning trading amid a broad-based selloff of government debt ahead of the Federal Reserve’s policy decision on Wednesday. Tuesday’s selloff also produced a 21- and 20-basis-point jump respectively in the 1- and 3-year rates, as traders factored in fewer rate cuts by year-end. Meanwhile, the 10-year rate was 10 basis points higher, at 3.576%.
U.S. Xpress Enterprises Inc. USX announced Tuesday an agreement to be acquired by Knight-Swift Transportation Holdings Inc. KNX, in a deal that values U.S. Xpress shares at more than four times the latest closing price. Under terms of the deal, which the companies said is valued at $808 million, shareholders of the provider of truckload carrier services will receive $6.15 in cash for each U.S. Xpress share they own, which is 310% above Monday’s close of $1.50. The stock is currently halted for news. The acquisition, which is expected to close in the second quarter or early third quarter of 2023, is expected to add to Knight-Swift’s adjusted earnings starting in 2024 and to boost Knight-Swift’s revenue run-rate to nearly $10 billion. U.S. Xpress will continue to operate as a separate brand after the deal closes. “The increased scale, operating expertise and resources of the combined entity will allow U.S. Xpress to pursue new levels of service and efficiency,” U.S. Xpress Chief Executive Eric Fuller said. U.S. Xpress shares have tumbled 17.1% over the past three months through Monday, while Knight-Swift’s stock has gained 1.8% and the S&P 500 SPX has tacked on 1.9%.
Meten Holding Group Ltd. METX stock rose 12% in premarket trades Tuesday after it said it will include the widely watched ChatGPT application in the Web 3 education platform it is building. The company plans to launch the platform by the end of the year for the English-language training market, and to weave blockchain education and other areas of study into the platform. The platform will offer personalized training programs and ChatGPT, a writing and communication app that uses artificial intelligence, to speed learning. Students will interact with ChatGPT through text, voice and video for a “richer learning experience.”
Benchmark reiterated its sell rating on Netflix Inc. stock NFLX on Tuesday, saying it remains cautious even after a Sunday report from Bloomberg that the streaming service has gained about 1 million active users of its ad-supported offering after two months on the market. “Advertising initiatives and the nettlesome password sharing crackdown should be ARM (average revenue per member) accretive but in Benchmark’s view largely position the stock to offset SVOD (streaming video on demand) competitive pressure,” analyst Matthew Harrigan wrote in a note to clients. While Benchmark expects the ad-supported tier to become a significant share of members, Netflix is operating in the same difficult streaming market conditions as its peers, although its operating margins benefit from its maturity compared with newer entrants. Gen Z’s preference for short-form content like TikTok over traditional streaming content is another competitive headwind, Harrigan said. He is sticking with his $250 price target, based on an S&P 500-linked discounted cash flow valuation through 2027, although running the forecast through 2033 raises fair value to $284. The stock closed Monday at $305.13. Netflix said Monday it plans to launch about 40 more videogames over the rest of 2023, in addition to 70 in development with partners and 16 being developed by its in-house studio. The stock has fallen 19% in the last 12 months, while the S&P 500 SPX has fallen 11%.
Shares of Citi Trends Inc. CTRN sank 5.9% toward a four-month low in premarket trading Tuesday, after the discount apparel and accessories retailer for African American and multicultural families provided a downbeat sales outlook, even as fourth-quarter results beat expectations. The company expects first-quarter sales to decline in the low double-digit percentage range, while the FactSet sales consensus of $212.2 million implies a 1.9% increase, as the macroeconomic environment keeps pressure on low-income families, the bulk of its customer base. “Our customers are expected to remain under pressure through the first half of 2023, impacted by ongoing inflationary factors, in addition to the reduction in SNAP benefits and lower tax refunds,” said Chief Executive David Makuen. “As a result, our first quarter is off to a slow start.” For the fourth quarter, net income fell to $6.6 million, or 81 cents a share, from $9.8 million, or $1.16 a share, in the year-ago period. Excluding nonrecurring items, earnings per share of 83 cents topped the FactSet consensus of 81 cents. Sales fell 13.1% to $209.5 million but were above the FactSet consensus of $208.8 million. The stock has tumbled 15.0% over the past three months through Monday, while the S&P 500 SPX has gained 1.9%.
BlackBerry Ltd. BBCA:BB stock is up 6.1% in premarket trades Tuesday after the mobile device maker said it agreed to sell about 32,000 non-core patents and patent applications to Malikie Innovations Ltd., a newly formed subsidiary of Key Patent Innovations Ltd., for up to $900 million. Funding has been secured from a U.S.-based investment firm, which the company did not name. BlackBerry will receive $170 million in cash on closing and an additional $30 million in cash no later than the third anniversary of closing, as well as annual cash royalties from the profits generated by the patents. The deal does not include 120 “monetizable non-core patent families relating to mobile devices” or any existing revenue-generating agreements. “BlackBerry will receive a license back to the patents being sold and the transaction will not impact customers’ use of any of BlackBerry’s products, solutions or services,” the company said. BlackBerry terminated a previously announced Dec. 20 deal with Catapult because that buyer was unable to secure financing.
Shares of Terran Orbital Corp. LLAP shot up 22.5% in premarket trading Tuesday, after the satellite products maker reported fourth-quarter revenue that roughly tripled and beat expectations, while losses narrowed but were wider than expected. Net losses were $33.1 million, or 23 cents a share, after a loss of $40.2 million, or 51 cents a share, in the year-ago period. The FactSet consensus for per-share losses was 21 cents. Revenue soared 197% to $31.9 million, above the FactSet consensus of $31.2 million, as the company completed delivery of 10 satellites to Lockheed Martin Corp. LMT for the Space Development Agency’s Transport Layer Tranche 0 program. Terran expects to begin delivery of 42 Transport Layer Tranche 1 satellites in 2023. The stock has rallied 10.9% over the past three months through Monday, while the S&P 500 SPX has gained 1.9%.
Shares of First Republic Bank FRC climbed 20% in premarket trading on Tuesday, amid reports that JPMorgan Chase & Co. JPM was working to help bolster the bank. First Republic’s shares fell 47% to an all-time low of $12.18 on Monday as investors questioned its balance sheet and financial health. The slide came despite a report in The Wall Street Journal on Monday that JPMorgan CEO Jamie Dimon was working to raise more support for the bank, after he helped orchestrate a $30 billion deposit infusion last week. After Monday’s close, CNBC reported that JPMorgan was advising First Republic on strategic alternatives, including a capital raise or a sale. Investors were also mulling reports that the U.S. Treasury is considering unlimited deposit guarantees if the current banking-sector crisis continues.
Ferrari NV RACE said late Monday that its Italian subsidiary was the victim of a “ransom demand” related to client contact details. The company contacted authorities and engaged a “leading” cybersecurity firm to start an investigation, it said. Clients were informed of the potential data exposure, the legendary carmaker said. “We have worked with third-party experts to further reinforce our systems and are confident in their resilience,” it said. “We can also confirm the breach has had no impact on the operational functions of our company.”
With migrant deaths at record highs, researchers say intensified border militarization is making a deadly problem much worse.
The post Mapping Project Reveals Locations of U.S. Border Surveillance Towers appeared first on The Intercept.
Nearly 90% of the multibillion-dollar federal lobbying apparatus in the United States serves corporate interests. In some cases, the objective of that money is obvious. Google pours millions into lobbying on bills related to antitrust regulation. Big energy companies expect action whenever there is a move to end drilling leases for federal lands, in exchange for the tens of millions they contribute to congressional reelection campaigns.
But lobbying strategies are not always so blunt, and the interests involved are not always so obvious. Consider, for example, a 2013 ...
Growing signs that Coalition leadership could swing weight behind bill to alter constitution. Follow the day’s news live
It’s world water day!
Which I didn’t know. And I didn’t get water anything. Awkward.
We’re not supporters of changes that enable companies to buy offsets, because this is just an easy means to cover obligations.
I’ve had the major fossil fuel companies of the world try and argue with me that they can go zero net carbon per barrel of oil just by buying offsets. Which is code for ‘we’re not going to change a thing, we’re just going to buy these half-real carbon credits’.
Continue reading...
Protesters cut up credit cards and march to Washington branches of JPMorgan Chase, Citibank, Bank of America and Wells Fargo
Hundreds of older Americans gathered in Washington on Tuesday to protest against four of the country’s largest financial institutions, cutting up their credit cards in an act of defiance meant to condemn the banks’ funding of oil and gas projects.
The protesters marched to the downtown DC branches of the four targeted “dirty banks” – JPMorgan Chase, CitiBank, Bank of America and Wells Fargo – before staging a “die-in” to symbolize the global threat posed by fossil fuels. In a nod to the age of the protest’s participants, demonstrators sat in painted rocking chairs as they chanted “Cut it up!” to those slashing their credit cards outside the banks’ branches.
Continue reading...
Animal Justice party’s Georgie Purcell wants scanning technology used on every dog, in real time, at every stage of life
The Victorian parliament will be asked to consider the introduction of a compulsory digital system to track illegally exported greyhounds.
The plight of missing and abused greyhounds will be raised in parliament on Wednesday by the Animal Justice party.
Sign up for Guardian Australia’s free morning and afternoon email newsletters for your daily news roundup
Continue reading...
Exclusive: Mother and partner of First Nations woman argue for the right to bail and request reforms be named Poccum’s Law in her honour
The family of First Nations woman Veronica Nelson, who died while on remand in a Victorian maximum security prison, have outlined what they want the state’s new bail laws to look like and asked that they are named “Poccum’s Law” in her honour.
The Victorian government has committed to reforming the Bail Act after a damning coroner’s report into the proud Gunditjmara, Dja Dja Wurrung, Wiradjuri and Yorta Yorta woman’s death found it was “incompatible” with the state’s charter of human rights and discriminatory towards First Nations people.
Sign up for Guardian Australia’s free morning and afternoon email newsletters for your daily news roundup
Continue reading...
A pact to phase out fossil fuels in November’s UN climate talks is the only credible response to the warnings of scientists
Yesterday the Intergovernmental Panel on Climate Change released a new synthesis report. The document is important because 195 governments commissioned it and the summary was agreed line by line. It is accepted fact by nations worldwide, and a shared basis for future action.
The report’s conclusions are terrifying and wearily familiar. Every region is experiencing “widespread adverse impacts”. Almost half the world’s population is “highly vulnerable” to climate change impacts. Expected repercussions will escalate rapidly. It concludes that there is a “rapidly closing window of opportunity” to secure a livable future.
Simon Lewis is professor of global change science at University College London and University of Leeds
Continue reading...
We’re keen to hear how consumers and staff feel about the John Lewis brand, what their stores mean to them and whether this has changed
We’re interested to hear about people’s relationships with John Lewis and the brand’s stores.
The company that runs John Lewis department stores and supermarket Waitrose has said it would have to cut staff numbers and scrap bonuses this year, flagging an uncertain outlook as customers struggle with inflation.
Continue reading...
At least three of the California governor's wine companies are held by SVB, and a bank president sits on the board of his wife’s charity.
The post Cheering Silicon Valley Bank Bailout, Gavin Newsom Doesn’t Mention He’s a Client appeared first on The Intercept.
New nuclear looks different, which requires new types of financing. New investments and partnerships are seemingly occurring every day across the industry, including SK Group’s $250 million investment in TerraPower, and X-energy’s partnership with Dow Chemical.
What can be done to encourage financial investment and improve the economic viability and the ROI of SMRs? How does new nuclear differ, and how do we finance that?
Reuters Events’ latest report – Capital Funding, Financing & Economic Viability of SMRs – dives into the vehicles that will assist with advancing financing to support deployment and commercialization of SMRs and advanced reactors. What to expect from the report:
Electric cars barely existed in 2010, when the Tesla Model S was still a glint in Elon Musk’s eye. Now more than 20 million EVs girdle the globe, according to BloombergNEF—and that count is expected to nearly quadruple to 77 million by 2025. A battery will be the high-voltage heart of each of those 77 million electric vehicles, and by far their most expensive component, setting off a worldwide race to ethically source their materials and crank up production to meet exploding demand.
EVs may have seized a record 5.8 percent of the United States market in 2022, according to J.D. Power, and could approach 11 percent of the global market this year. But experts still believe that better batteries, and many more of them, are a key to EVs reaching a market tipping point, even as Reuters projects automakers spending a whopping $1.2 trillion to develop and produce EVs through 2030.
IEEE Spectrum asked five industry experts to gaze deeply into their own crystal balls and outline what needs to happen in the EV battery space to wean the world off fossil-fueled transportation and onto the plug. Here’s what they said:
Upstart Lucid Motors hasn’t built many cars, but it’s built a reputation with the record-setting, 830-kilometer driving range of the Air Grand Touring Performance sedan. That range is a testament to Lucid’s obsessive pursuit of efficiency: The Air uses the same 2170-format cylindrical cells (supplied by Samsung SDI) as many EVs, but ekes out more miles via superior battery management, compact-yet-muscular power units and slippery aerodynamics.
One might think Lucid would call for every electric model to cover such vast distances. Instead, Lucid leaders see a bright future in cars that aim for maximum efficiency — rather than range per se — via smaller, more-affordable batteries.
Lucid’s latest Air Touring model is its most efficient yet on a per-mile basis. Now the world’s most aerodynamic production vehicle, with a 0.197 coefficient of drag, the Air Touring delivers an EPA-rated 7.44 kilometers from each onboard kilowatt hour. Yet propelling this full-size luxury barge still demands a 92 kWh battery aboard.
With all that in mind, the company is developing its next generation of batteries. Extrapolating from company targets, a future compact-size Lucid—think the size of Tesla Model 3 or Model Y—could decisively downsize its battery without sacrificing useful range.
“Our target is to improve efficiency even more,” Dlala says.
“If we do a 250-mile car, we could have a battery that’s just 40 kWh,” or less than half the size of the Air’s. That’s the same size battery as a relatively tiny, base-model Nissan Leaf, whose lesser efficiency translates to just 240 km of EPA-rated driving range.
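As a quick sanity check on that math, here is a back-of-the-envelope sketch; the 7.44 km/kWh and 40 kWh figures come from the article, and the mile-to-kilometer conversion is standard:

```python
# Back-of-the-envelope check of Lucid's battery-downsizing math.
KM_PER_MILE = 1.609

air_touring_km_per_kwh = 7.44        # EPA-rated efficiency of the Air Touring
target_range_km = 250 * KM_PER_MILE  # the hypothetical "250-mile car" (~402 km)
battery_kwh = 40                     # the pack size Dlala floats

required_km_per_kwh = target_range_km / battery_kwh
print(f"required efficiency: {required_km_per_kwh:.1f} km/kWh "
      f"vs. {air_touring_km_per_kwh} today")
# ~10.1 km/kWh, roughly a 35 percent gain over the Air Touring, which is
# why the stated target is to "improve efficiency even more."
```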
Such compact batteries would not just save serious money for manufacturers and consumers. They would require fewer raw and refined materials, allowing automakers to theoretically build many more cars from a finite supply. That pack would also weigh about one-third as much as Lucid’s beefiest current battery. The upshot would be a chain of gains that would warm the heart of the most mass-conscious engineer: a lighter chassis to support the smaller battery, slimmer crash structures, downsized brakes, and more usable space for passengers and cargo. All those savings would further boost driving range and performance.
This grand design, naturally, would demand an attendant burst of charger development. Once chargers are as ubiquitous and reliable as gas stations—and nearly as fast for fillups—“then I don’t need 400 miles of range,” Dlala says.
All this could grant the ultimate, elusive wish for EV makers: Price parity with internal-combustion automobiles.
“That combination of efficiency and infrastructure will allow us to create competitive prices versus internal combustion cars,” Dlala says.
Ryan Castilloux of Adamas Intelligence says that game-changing EV battery breakthroughs have to date been rare. Yet EV batteries are still central to automakers’ calculus, as they seek a sustainable, affordable supply in a period of explosive growth. In a marketplace starving for what they see as their rightful share of kilowatt-hours, smaller or less-connected automakers especially may go hungry.
“Everyone is competing for a limited supply,” Castilloux says. “That makes for a lumpy growth trajectory in EVs. It’s an immense challenge, and one that won’t go away until the growth slows and the supply side can keep up.”
A battery industry that has succeeded in boosting nickel content for stronger performance, and cutting cobalt to reduce costs, has hit a wall of diminishing returns via chemistry alone. That leaves battery pack design as a new frontier: Castilloux lauds the push to eliminate “aluminum and other zombie materials” to save weight and space. The effort shows in innovations such as large-format cylindrical batteries with higher ratios of active material to surrounding cases—as well as so-called “cell-to-pack” or “pack-to-frame” designs. BMW’s critical “Neue Klasse” EVs, the first arriving in 2025, are just one example: Large-format cells, with no traditional cased modules required, fill an entire open floorpan and serve as a crash-resistant structural member.
“That becomes a low-cost way to generate big improvements in pack density and bolster the mileage of a vehicle,” Castilloux says.
That kind of sophisticated chassis and battery design can also help level the playing field, giving new life to “lesser” chemistries—especially lithium iron phosphate that’s the hottest thing in batteries around the world—that would otherwise be uncompetitive and obsolete.
“Things are moving in the right direction in North America and Europe, but it’s too little too late at the moment, and the West is collectively scrambling to meet demand.”
The drivetrain and battery of a Mercedes-Benz EQS electric vehicle on the assembly line at the Mercedes-Benz Group plant in Sindelfingen, Germany, on Monday, February 13, 2023. Krisztian Bocsi/Getty Images
The tragedy, Castilloux says, is that EV demand was anticipated for several years, “but the action is only happening now.”
“China was the only one that acted on it, and is now a decade ahead of the rest of the world,” in both refining and processing battery materials, and cell production itself.
Tesla also got out in front of legacy automakers by thinking in terms of vertical integration, the need to control the entire supply chain, from lithium brine and cobalt mines to final production and recycling.
“In recent decades, it wouldn’t have made sense to think of an automaker becoming a processing or mining company, but now with scarcity of supplies, they have to take drastic measures.”
Automakers are racing to meet soaring EV demand and fill yawning gaps in the market, including building a homegrown supply chain of battery materials as well as batteries. In the United States alone, Atlas Public Policy tallies U.S. $128 billion in announced investments in EV and battery factories and recycling. That still leaves another blind spot: Charging infrastructure. Tesla’s dominant Superchargers aside, many experts cite a patchwork, notoriously unreliable charging network as a leading roadblock to mainstream EV adoption.
“Charging infrastructure is on our wish list of things that need to improve,” said Dan Nicholson, who helps lead General Motors’ new charger initiatives.
The 2021 U.S. Infrastructure Law is providing $7.5 billion to build a network of 500,000 EV chargers by 2030. But rather than own and operate their own chargers like Tesla—akin to automakers running chains of proprietary gas stations—GM, Ford and others argue that standardized, open-source chargers are critical to convince more Americans to kick the ICE habit. Those chargers must be available everywhere people live and work, Nicholson said, and open to drivers of any car brand.
It will help if those chargers actually work: A 2022 study showed nearly 25 percent of public chargers in the San Francisco Bay area—itself a mecca for EV ownership—weren’t functioning properly.
To fill gaps in public networks, GM is collaborating with EVGo on a national network of 2,000 DC fast-charging stalls, located at 500 Pilot and Flying J travel centers, most along major corridors. To reach people where they live, including people with no access to home charging, GM is tapping its more than 4,400 dealers to build up to 10 Level 2 charging stations each, at both dealers and key locations, including underserved urban and rural communities. Nicholson notes that 90 percent of the U.S. population lives within 16 kilometers of a GM dealer.
In his role as an SAE board member, Nicholson also supports future-proof standards for EVs, connectors and chargers. That includes the ISO 15118 international standard that defines two-way communication between EVs and chargers. That standard is key to “Plug and Charge,” the budding interoperability system that allows drivers of any EV to plug into any DC fast charger and simply be billed on the back end. That’s how Teslas have worked since 2012, though with the advantage of a closed system that need only recognize and communicate with Tesla models.
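The flow the standard enables can be sketched roughly as follows. This is an illustrative simplification of the certificate-based handshake, not the actual ISO 15118 message set; every name in it is hypothetical:

```python
# Illustrative sketch of the "Plug and Charge" idea behind ISO 15118:
# the EV presents a signed contract certificate, the charger checks it
# against trusted issuers, and the session is billed on the back end.
# All names here are hypothetical; this is not the real protocol API.
from dataclasses import dataclass

@dataclass
class ContractCertificate:
    contract_id: str   # identifies the driver's charging contract
    issuer: str        # mobility operator that signed the certificate
    signature: str     # placeholder for a real cryptographic signature

TRUSTED_ISSUERS = {"MobilityOperatorCA"}  # stand-in for a trusted root-CA list

def plug_and_charge(cert: ContractCertificate) -> str:
    # On plug-in, the charger authenticates the EV automatically --
    # no app or card swipe, which is what makes it "Plug and Charge."
    if cert.issuer not in TRUSTED_ISSUERS:
        return "not recognized: fall back to manual payment"
    # Authorized: deliver energy, meter it, bill the contract on the back end.
    return f"charging authorized; session billed to {cert.contract_id}"

print(plug_and_charge(ContractCertificate("CONTRACT-123", "MobilityOperatorCA", "sig")))
```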
Nicholson said GM is also seeking “uptime guarantees” with charging collaborators. That will allow drivers to see in advance if a charger is operational, and to hold a spot.
“People need to be able to reserve a station, and know it’s going to work when they get there,” he said.
Despite an electric boom year in 2022, some analysts are downgrading forecasts of EV adoption, due to monkey wrenches of unpredictable demand, looming recession and supply-chain issues. S&P Global Mobility remains bullish, predicting that 42 percent of global buyers will choose an EV in 2030, within sight of President Biden’s goal of 50-percent EV penetration.
“That’s a lot of growth, but there are plenty of people who won’t move along as quickly,” said Stephanie Brinley, an analyst at S&P Global Mobility. Pushing EVs to a market majority will require stars to align. Brinley says the most critical key is a continued explosion of new EV models at every price point—including the SUVs and pickups that are the lifeblood of U.S. buyers.
Regarding batteries, Brinley says ICE manufacturers with an existing manufacturing footprint, labor force and know-how could find an advantage over relative newcomers. The issue will be how well the likes of General Motors and Ford can manage the transition, from scaling back on ICE production to retraining workers — fewer of whom may be required to produce batteries and motors than ICE powertrains. In February, Ford announced a new $3.5 billion plant in Michigan to build LFP batteries, licensing tech from China’s CATL, currently the world’s largest lithium-ion producer.
“Some (legacy) automakers will use LFP for certain use cases, and solid-state in development could change the dynamic again,” Brinley says. “But for the time being, you need both batteries and engines, because people will be buying both.”
At some point, Brinley says, it’s a zero-sum game: A flat global market for cars can’t comfortably accommodate both types of powertrains.
“ICE sales have to come down for BEV sales to come up,” Brinley says. “And that’s going to make for a wild market in the next few years.”
NanoGraf is among several start-ups wishing for not just longer-lasting batteries, but a stable, competitive North American supply chain to counter China’s battery dominance. The Inflation Reduction Act has spurred an unprecedented tsunami of homegrown investment, by requiring robust domestic sourcing of batteries and battery materials as a condition of EV tax breaks for manufacturers and consumers. That includes a $35-per-kWh tax credit on every lithium-ion cell produced, and a $7,500 consumer tax break on eligible EVs.
Connor Hund says NanoGraf aims to onshore production of its silicon-anode material at a new Chicago facility beginning in Q2 this year. The company, whose backers include the Department of Defense, claims to have created the most energy-dense 18650 cylindrical cell yet, at 3.8 amp-hours. The technology key is a pre-lithiated core that allows an anode silicon percentage as high as 25 percent, versus cells that typically top out at 5-to-7 percent silicon.
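For a sense of scale, the claimed capacity translates to roughly 13 to 14 watt-hours per cell. A minimal sketch, assuming a typical nominal cell voltage of about 3.6 V (the voltage is an assumption; only the 3.8 Ah figure comes from the company's claim):

```python
# Rough energy estimate for NanoGraf's claimed 3.8 Ah 18650 cell.
capacity_ah = 3.8      # from the company's claim
nominal_voltage = 3.6  # assumed typical value, not from the article

energy_wh = capacity_ah * nominal_voltage
print(f"~{energy_wh:.1f} Wh per cell")  # about 13.7 Wh
```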
“There’s certainly room to boost the range of EVs by 20, 30 or even 50 percent by using silicon,” he says.
But whether it’s NanoGraf, or the drive toward large-format 4680 cylindrical cells led by Tesla and Panasonic, scaling up to mass production remains a major hurdle. NanoGraf plans enough initial capacity for 35 to 50 tonnes of its anode materials. But it would need 1,000 tonnes annually to crack the automotive space, with its now-bottomless appetite for batteries—at competitive cost with what automakers currently pay for cells from China, South Korea or elsewhere.
“It’s so cutthroat in that space, and there’s a scale you have to reach,” Hund says.
One wish is being granted: No one is waiting for a magic bullet in technology, including from solid-state batteries, which many experts now insist won’t be ready for automobiles until 2030 or later. Instead, automakers and battery manufacturers are on board with multiple solutions, including the stunning rise of LFP cells in Teslas, Fords and other models.
“There’s a shortage of all these materials, not enough nickel, cobalt or manganese, so companies targeting different consumers with different solutions is really helpful,” Hund says.
Western countries have struggled to take a holistic view of everything that’s required, especially when incumbent solutions from China are available. It’s not just raw materials, anodes or cathodes, but the cells, modules, electrolyte and separators.
“You need companies onshoring all those components to have a robust U.S. supply chain,” he says. “We need everyone upstream and downstream of us, whether it’s the graphite, electrolyte or separator. Everyone is just one piece of the puzzle.”
Hund says safer batteries should also be on the industry wish list, as high-profile fires in Teslas and other models threaten to sully EVs’ reputation or keep skeptical consumers on the fence.
“We can’t have batteries self-discharging at the rate they are now,” he says, especially with automakers gearing up around the world for their biggest EV invasion yet.
“Getting ahead of this now, versus pushing millions of cars onto the road and dealing with safety later, is very important.”
Update 14 March 2023: This story was corrected to reflect that Lucid’s Air Touring model carries a 92 kWh battery. (A previous version of this story stated that the battery’s capacity was 112 kWh.)
Every year, online job search firms collect data about the salaries, skills, and overall job market for tech professionals, generally focusing on software engineers.
The numbers from job search firms Dice and Hired have been released. These 2022 numbers have been eagerly anticipated, given the turmoil generated by a spate of tech layoffs in the latter part of the year, which Dice estimates at more than 140,000. The data they collect doesn’t allow for apples-to-apples comparisons, but I’ve read through both reports, pulled out data from past years to give the numbers some perspective when possible, and summarized it in eight charts. Dice’s numbers come from a survey administered to its registered job seekers and site visitors between 16 August 2022 and 17 October 2022, for a total of 7,098 completed surveys. Hired’s analysis included data from 68,500 job candidates and 494,000 interview requests collected from the site between January 2021 through December 2022, supplemented by a survey of 1,300 software engineers.
According to Dice’s numbers, tech salaries grew 2.3 percent in 2022 compared with 2021, reflecting a steady upward trend since 2017 (with 2020 omitted due to the pandemic disruption). However, it’s clear that the 2022 news isn’t so good when considering inflation. These numbers have been adjusted from those previously reported by IEEE Spectrum; Dice recently tightened its survey to focus on tech professionals in more tech-specific job functions.
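A minimal sketch of the inflation adjustment, assuming U.S. CPI inflation of about 6.5 percent for 2022 (December over December, per government data; the inflation figure is an outside assumption, not from the Dice report):

```python
# Real (inflation-adjusted) change in tech salaries for 2022.
nominal_growth = 0.023  # Dice's reported salary growth
cpi_inflation = 0.065   # assumed U.S. CPI-U, Dec. 2021 to Dec. 2022

real_change = (1 + nominal_growth) / (1 + cpi_inflation) - 1
print(f"real change: {real_change:.1%}")  # about -3.9%
```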
If you want the highest pay, it’s a no-brainer: Get yourself into the C-suite. That is not, of course, a particularly useful takeaway from Dice’s data. Perhaps of more interest is that scrum masters are commanding higher pay than data scientists, and that cloud and cybersecurity engineers continue to hold solid spots in the top ranks.
Specific skills often command a big pay boost, but exactly what skills are in demand is a moving target. Because the data from Dice and Hired—and the way they crunch it—varies widely, we present two charts. Dice, looking at average salaries, puts MapReduce at the top of its charts; Hired, looking at interview requests, puts Ruby on Rails and Ruby on top.
If you’re a software engineer and you don’t know Python, you’d better start studying. That’s the opinion of the 1,300 software engineers surveyed by Hired. If you don’t know C, however, don’t worry too much about that.
You probably don’t want to leave Silicon Valley if you’re looking for the highest pay. The San Francisco Bay Area hasn’t lost its dominant position at the top of the tech salary charts, though that doesn’t account for local cost-of-living differences. But some tech hubs are closing the gap, with Tampa, Fla., salaries up 19 percent and Charlotte, N.C., salaries up 11 percent. Charlotte, in fact, edged out Austin for number nine in Dice’s rankings, and every cost-of-living calculator I checked considers Charlotte a significantly cheaper place to live. Hired, which considered a shorter list, puts Austin at number five.
That artificial intelligence tops the list of booming businesses this year is no surprise, given the attention brought to AI by the public releases of DALL-E 2 and ChatGPT last year, alongside GPT-4 this month. Hired asked more than 1,300 software engineers their opinions on the hottest industries to watch in 2023, and AI and machine learning came out on top. Not so hot? E-commerce, media, and transportation.
Why did Eleanor Williams, a young woman from a remote coastal town in England, pretend she was a victim of a grooming gang?
In lockdown a Facebook post by Eleanor Williams went viral. In it the young woman showed her badly beaten face, injured hand – and claimed she was the victim of a grooming gang run by Asian men. Her post caught the attention not just of her local community in Barrow-in-Furness, but people across the country.
A campaign was launched, tens of thousands of pounds were raised and people started displaying purple elephants to show their support for “justice for Ellie”. Rallies began, then reprisals. A list circulating on social media with names and businesses said to be linked to Williams’ “ordeal” led to homes and Asian-owned businesses being attacked. Hate crimes tripled in the area.
Continue reading...
For Synopsys Chief Executive Aart de Geus, running the electronic design automation behemoth is similar to being a bandleader. He brings together the right people, organizes them into a cohesive ensemble, and then leads them in performing their best.
De Geus, who helped found the company in 1986, has some experience with bands. The IEEE Fellow has been playing guitar in blues and jazz bands since he was an engineering student in the late 1970s.
Much like jazz musicians improvising, engineers go with the flow at team meetings, he says: One person comes up with an idea, and another suggests ways to improve it.
“There are actually a lot of commonalities between my music hobby and my other big hobby, Synopsys,” de Geus says.
Employer: Synopsys
Title: CEO
Member grade: Fellow
Alma mater: École Polytechnique Fédérale de Lausanne, Switzerland
Synopsys is now the largest supplier of software that engineers use to design chips, employing about 20,000 people. The company reported US $1.36 billion in revenue in the first quarter of this year.
De Geus is considered a founding father of electronic design automation (EDA), which automates chip design using synthesis and other tools that he and his team pioneered in the 1980s. Synthesis revolutionized digital design by taking the high-level functional description of a circuit and automatically selecting the logic components (gates) and constructing the connections (netlist) to build it. Virtually all large digital chips manufactured today are largely synthesized, using software that de Geus and his team developed.
“Synthesis changed the very nature of how digital chips are designed, moving us from the age of computer-aided design (CAD) to electronic design automation (EDA),” he says.
During the past three and a half decades, logic synthesis has enabled about a 10 millionfold increase in chip complexity, he says. For that reason, Electrical Business magazine named him one of the 10 most influential executives in 2002, as well as its 2004 CEO of the Year.
Born in Vlaardingen, Netherlands, de Geus grew up mostly in Basel, Switzerland. He earned a master’s degree in electrical engineering in 1978 from the École Polytechnique Fédérale de Lausanne, known as EPFL, in Lausanne.
In the early 1980s, while pursuing a Ph.D. in electrical engineering from Southern Methodist University, in Dallas, de Geus joined General Electric in Research Triangle Park, N.C. There he developed tools to design logic with multiplexers, according to a 2009 oral history conducted by the Computer History Museum. He and a designer friend created gate arrays with a mix of logic gates and multiplexers.
That led to writing the first program for synthesizing circuits optimized for both speed and area, known as SOCRATES. It automatically created blocks of logic from functional descriptions, according to the oral history.
“The problem was [that] all designers coming out of school used Karnaugh maps, [and] knew NAND gates, NOR gates, and inverters,” de Geus explained in the oral history. “They didn’t know multiplexers. So designing with these things was actually difficult.” Karnaugh maps are a method of simplifying Boolean algebra expressions. With NAND and NOR universal logic gates, any Boolean expression can be implemented without using any other gate.
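The universality claim is easy to verify. Here is a minimal sketch in Python that builds NOT, AND, and OR out of nothing but NAND and checks the constructions exhaustively:

```python
# NAND is a universal gate: NOT, AND, and OR (and hence any Boolean
# expression) can be built from NAND alone.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)              # NOT a == a NAND a

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))        # AND is the negation of NAND

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))  # De Morgan: a OR b == (NOT a) NAND (NOT b)

# Check against Python's built-in operators over all input combinations.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("NAND-only constructions verified")
```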
With SOCRATES, a designer could write a function, and 20 minutes later the program would generate a netlist that named the electronic components in the circuit and the nodes they connected to. By automating the function, de Geus says, “the synthesizer typically created faster circuits that also used fewer gates. That’s a big benefit because fewer is better. Fewer ultimately end up in [a] smaller area on a chip.”
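To make “netlist” concrete, here is a toy example of what such output boils down to: a list of gate instances and the named nodes they connect. The format is invented for illustration and is not SOCRATES’s actual output:

```python
# A toy netlist for the function f = (a AND b) OR (NOT c).
# Each entry names a gate type, its input nodes, and its output node.
netlist = [
    {"gate": "AND2", "inputs": ["a", "b"],   "output": "n1"},
    {"gate": "INV",  "inputs": ["c"],        "output": "n2"},
    {"gate": "OR2",  "inputs": ["n1", "n2"], "output": "f"},
]

for cell in netlist:
    print(f"{cell['gate']}: {', '.join(cell['inputs'])} -> {cell['output']}")
```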
With that technology, circuit designers shifted their focus from gate-level design to designs based on hardware description languages.
Eventually de Geus was promoted to manager of GE’s Advanced Computer-Aided Engineering Group. Then, in 1986, the company decided to leave the semiconductor business. Facing the loss of his job, he decided to launch his own company to continue to enhance synthesis tools.
He and two members of his GE team, David Gregory and Bill Krieger, founded Optimal Solutions in Research Triangle Park. In 1987 the company was renamed Synopsys and moved to Mountain View, Calif.
De Geus says he picked up his management skills and entrepreneurial spirit as a youngster. During summer vacations, he would team up with friends to build forts, soapbox cars, and other projects. He usually was the team leader, he says, the one with plenty of imagination.
“An entrepreneur creates a vision of some crazy but, hopefully, brilliant idea,” he says, laughing. The vision sets the direction for the project, he says, while the entrepreneur’s business side tries to convince others that the idea is realistic enough.
“The notion of why it could be important was sort of there,” he says. “But it is the passion that catalyzes something in people.”
That was true during his fort-building days, he says, and it’s still true today.
“If you have a good team, everybody chips in something,” he says. “Before you know it, someone on the team has an even better idea of what we could do or how to do it. Entrepreneurs who start a company often go through thousands of ideas to arrive at a common mission. I’ve had the good fortune to be on a 37-year mission with Synopsys.”
At the company, de Geus sees himself as “the person who makes the team cook. It’s being an orchestrator, a bandleader, or maybe someone who brings out the passion in people who are better in both technology and business. As a team, we can do things that are impossible to do alone and that are patently proven to be impossible in the first place.”
He says a few years ago the company came up with the mantra “Yes, if …” to combat a slowly growing “No, because …” mindset.
“‘Yes, if …’ opens doors, whereas the ‘No, because …’ says, ‘Let me prove that it’s not possible,’” he says. “‘Yes, if … ’ leads us outside the box into ‘It’s got to be possible. There’s got to be a way.’”
De Geus says his industry is going through “extremely challenging times—technically, globally, and business-wise—and the ‘If … ’ part is an acknowledgment of that. I found it remarkable that once a group of people acknowledge [something] is difficult, they become very creative. We’ve managed to get the whole company to embrace ‘Yes, if …’
“It is now in the company’s cultural DNA.”
One of the issues Synopsys is confronted with is the end of Moore’s Law, de Geus says. “But no worries,” he says. “We are facing an unbelievable new era of opportunity, as we have moved from ‘Classic Moore’ scale complexity to ‘SysMoore,’ which unleashes systemic complexity with the same Moore’s Law exponential ambition!”
He says the industry is moving its focus from single chips to multichip modules, with chips closely placed together on top of a larger, “silicon interposer” chip. In some cases, such as for memory, chips are stacked on top of each other.
“How do you make the connectivity between those chips as fast as possible? How can you technically make these pieces work? And then how can you make it economically viable so it is producible, reliable, testable, and verifiable? Challenging, but so powerful,” he says. “Our big challenge is to make it all work together.”
Pursuing engineering was a calling for de Geus. Engineering was the intersection of two things he loved: carrying out a vision and building things. Notwithstanding the recent wave of tech-industry layoffs, he says he believes engineering is a great career.
“Just because a few companies have overhired or are redirecting themselves doesn’t mean that the engineering field is in a downward trend,” he says. “I would argue the opposite, for sure in the electronics and software space, because the vision of ‘smart everything’ requires some very sophisticated capabilities, and it is changing the world!”
During the Moore’s Law era, one’s technical knowledge has had to be deep, de Geus says.
“You became really specialized in simulation or in designing a certain type of process,” he says. “In our field, we need people who are best in class. I like to call them six-Ph.D.-deep engineers. It’s not just schooling deep; it’s schooling and experientially deep. Now, with systemic complexity, we need to bring all these disciplines together; in other words we now need six-Ph.D.-wide engineers too.”
To obtain that type of experience, he recommends university students should get a sense of multiple subdisciplines and then “choose the one that appeals to you.”
“For those who have a clear sense of their own mission, it’s falling in love and finding your passion,” he says. But those who don’t know which field of engineering to pursue should “engage with people you think are fantastic, because they will teach you things such as perseverance, enthusiasm, passion, what excellence is, and make you feel the wonder of collaboration.” Such people, he says, can teach you to “enjoy work instead of just having a job. If work is also your greatest hobby, you’re a very different person.”
De Geus says engineers must take responsibility for more than the technology they create.
“I always liked to say that ‘he or she who has the brains to understand should have the heart to help.’ With the growing challenges the world faces, I now add that they should also have the courage to act,” he says. “What I mean is that we need to look and reach beyond our field, because the complexity of the world needs courageous management to not become the reason for its own destruction.”
He notes that many of today’s complexities are the result of fabulous engineering, but the “side effects—and I am talking about CO2, for example—have not been accounted for yet, and the engineering debt is now due.”
De Geus points to the climate crisis: “It is the single biggest challenge there is. It’s both an engineering and a social challenge. We need to figure out a way to not have to pay the whole debt. Therefore, we need to engineer rapid technical transitions while mitigating the negatives of the equation. Great engineering will be decisive in getting there.”
Imagine a world in which you can do transactions and many other things without having to give your personal information. A world in which you don’t need to rely on banks or governments anymore. Sounds amazing, right? That’s exactly what blockchain technology allows us to do.
Think of it as a kind of shared hard drive: blockchain is a technology that lets you store data in digital blocks, which are connected together like links in a chain.
Blockchain technology was originally invented in 1991 by two researchers, Stuart Haber and W. Scott Stornetta, who first proposed the system as a way to ensure that document timestamps could not be tampered with.
A few years later, in 1998, software developer Nick Szabo proposed using a similar kind of technology to secure a digital payments system he called “Bit Gold.” However, the idea was not widely adopted until 2008, when Satoshi Nakamoto used it as the foundation of Bitcoin and its blockchain.
A blockchain is a distributed database shared between the nodes of a computer network. It saves information in digital format. Many people first heard of blockchain technology when they started to look up information about bitcoin.
Blockchain is used in cryptocurrency systems to ensure secure, decentralized records of transactions.
Blockchain allows people to guarantee the fidelity and security of a record of data without the need for a trusted third party to ensure accuracy.
To understand how a blockchain works, consider the basic steps laid out below.
Let’s get to know more about the blockchain.
Blockchain records digital information and distributes it across the network in a way that prevents it from being changed. The information is distributed among many users and stored in an immutable, permanent ledger that can’t be altered or destroyed. That’s why blockchain is also called “distributed ledger technology,” or DLT.
Here’s how it works:
1. Someone requests a transaction.
2. The request is broadcast to a peer-to-peer network of computers, called nodes.
3. The nodes validate the transaction according to the network’s rules.
4. Once verified, the transaction is grouped with others into a new block of data.
5. The new block is given a cryptographic hash and linked to the previous block.
6. The block is added to every node’s copy of the chain, where it becomes a permanent part of the ledger.
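A minimal sketch of those steps in Python: each block stores a SHA-256 hash of the previous block, so altering any block invalidates every block after it. (This is a teaching toy, not a production ledger; real chains add consensus rules, signatures, and peer-to-peer replication.)

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Hash the block's contents deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(data: str, prev_hash: str) -> dict:
    return {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}

# Build a tiny three-block chain.
chain = [new_block("genesis", "0" * 64)]
for tx in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    chain.append(new_block(tx, block_hash(chain[-1])))

def is_valid(chain: list) -> bool:
    # Every block must record the hash of the block before it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

print(is_valid(chain))                    # True
chain[1]["data"] = "Alice pays Bob 5000"  # tamper with history...
print(is_valid(chain))                    # False: block 2's link no longer matches
```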
And that’s the beauty of it! The process may seem complicated, but it’s done in minutes with modern technology. And because technology is advancing rapidly, I expect things to move even more quickly than ever.
Even though blockchain is integral to cryptocurrency, it has other applications. For example, blockchain can be used for storing reliable data about transactions. Many people confuse blockchain with cryptocurrencies like bitcoin and ethereum.
Blockchain is already being adopted by some big-name companies, such as Walmart, AIG, Siemens, Pfizer, and Unilever. For example, IBM’s Food Trust uses blockchain to track food’s journey before it reaches its final destination.
Although some of you may consider this practice excessive, food suppliers and manufacturers adhere to the policy of tracing their products because bacteria such as E. coli and Salmonella have been found in packaged foods. In addition, there have been isolated cases where dangerous allergens such as peanuts have accidentally been introduced into certain products.
Tracing and identifying the sources of an outbreak is a challenging task that can take months or years. Thanks to the Blockchain, however, companies now know exactly where their food has been—so they can trace its location and prevent future outbreaks.
Blockchain technology allows systems to react much faster in the event of a hazard. It also has many other uses in the modern world.
Blockchain technology is designed to be secure even though it is public: anyone with an internet connection can access it.
Have you ever been in a situation where you had all your data stored at one place and that one secure place got compromised? Wouldn't it be great if there was a way to prevent your data from leaking out even when the security of your storage systems is compromised?
Blockchain technology provides a way of avoiding this situation by using multiple computers at different locations to store information about transactions. If one computer experiences problems with a transaction, it will not affect the other nodes.
Instead, the other nodes cross-reference the incorrect node against their own correct copies and fix it. This is called “decentralization”: all the information is stored in multiple places rather than one.
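A minimal sketch of that cross-referencing idea, assuming for illustration that nodes simply treat the majority copy as correct; real networks use more elaborate consensus protocols:

```python
from collections import Counter

# Each node holds its own copy of the ledger; one copy has been corrupted.
node_copies = [
    ["tx1", "tx2", "tx3"],     # node A
    ["tx1", "tx2", "tx3"],     # node B
    ["tx1", "HACKED", "tx3"],  # node C (compromised)
    ["tx1", "tx2", "tx3"],     # node D
]

def majority_ledger(copies: list) -> list:
    # Treat the most common copy as authoritative -- a stand-in for the
    # consensus protocols real blockchains use.
    counts = Counter(tuple(copy) for copy in copies)
    return list(counts.most_common(1)[0][0])

print(majority_ledger(node_copies))  # ['tx1', 'tx2', 'tx3'] -- node C is outvoted
```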
Blockchain guarantees your data’s authenticity—not just its accuracy, but also its irreversibility. It can also be used to store records that are otherwise difficult to track, like legal contracts, state identifications, or a company’s product inventory.
Blockchain has many advantages and disadvantages.
I’ll answer the most frequently asked questions about blockchain in this section.
Blockchain is not a cryptocurrency but a technology that makes cryptocurrencies possible. It's a digital ledger that records every transaction seamlessly.
In theory, a blockchain can be hacked, but doing so is extremely difficult: a network of users constantly reviews it, and an attacker would have to alter the copies held across that network.
Coinbase Global is currently the biggest blockchain company in the world. The company runs a commendable infrastructure, services, and technology for the digital currency economy.
Blockchain is a decentralized technology: a distributed ledger maintained by a network of connected nodes, where each node can be any electronic device. Thus, no one owns the blockchain.
Bitcoin is a cryptocurrency that is powered by blockchain technology, while a blockchain is the distributed ledger on which a cryptocurrency’s transactions are recorded.
Generally, a database is a collection of data that can be stored and organized using a database management system; the people who have access to the database can view or edit the information stored there, and databases are typically implemented on a client-server network architecture. A blockchain, by contrast, is a growing list of records, called blocks, stored in a distributed system. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction information. The design of the blockchain does not allow stored data to be modified, which enables decentralized control and eliminates the risk of data modification by other parties.
Blockchain has a wide spectrum of applications and, over the next 5-10 years, we will likely see it being integrated into all sorts of industries. From finance to healthcare, blockchain could revolutionize the way we store and share data. Although there is some hesitation to adopt blockchain systems right now, that hesitation will likely fade as people become more comfortable with the technology and understand how it can work for them; owners, CEOs, and entrepreneurs alike will be quick to leverage blockchain for their own gain. I hope you liked this article; if you have any questions, let me know in the comments section.
Not all technological innovation deserves to be called progress. That’s because some advances, despite their conveniences, may not do as much societal advancing, on balance, as advertised. One researcher who stands opposite technology’s cheerleaders is MIT economist Daron Acemoglu. (The “c” in his surname is pronounced like a soft “g.”) IEEE Spectrum spoke with Acemoglu—whose fields of research include labor economics, political economy, and development economics—about his recent work and his take on whether technologies such as artificial intelligence will have a positive or negative net effect on human society.
IEEE Spectrum: In your November 2022 working paper “Automation and the Workforce,” you and your coauthors say that the record is, at best, mixed when AI encounters the job force. What explains the discrepancy between the greater demand for skilled labor and their staffing levels?
Acemoglu: Firms often lay off less-skilled workers and try to increase the employment of skilled workers.
In theory, high demand and tight supply are supposed to result in higher prices—in this case, higher salary offers. It stands to reason that, based on this long-accepted principle, firms would think ‘More money, less problems.’
Acemoglu: You may be right to an extent, but... when firms are complaining about skill shortages, a part of it is I think they’re complaining about the general lack of skills among the applicants that they see.
In your 2021 paper “Harms of AI,” you argue that if AI remains unregulated, it’s going to cause substantial harm. Could you provide some examples?
Acemoglu: Well, let me give you two examples from ChatGPT, which is all the rage nowadays. ChatGPT could be used for many different things. But the current trajectory of the large language model, epitomized by ChatGPT, is very much focused on the broad automation agenda. ChatGPT tries to impress the users… What it’s trying to do is be as good as humans in a variety of tasks: answering questions, being conversational, writing sonnets, and writing essays. In fact, in a few things, it can be better than humans because writing coherent text is a challenging task and predictive tools of what word should come next, on the basis of the corpus of a lot of data from the Internet, do that fairly well.
The path that GPT-3 [the large language model that spawned ChatGPT] is going down is emphasizing automation. And there are already other areas where automation has had a deleterious effect—job losses, inequality, and so forth. If you think about it, you will see—or you could argue anyway—that the same architecture could have been used for very different things. Generative AI could be used, not for replacing humans, but to be helpful for humans. If you want to write an article for IEEE Spectrum, you could either go and have ChatGPT write that article for you, or you could use it to curate a reading list for you that might capture things you didn’t know yourself that are relevant to the topic. The question would then be how reliable the different articles on that reading list are. Still, in that capacity, generative AI would be a human-complementary tool rather than a human replacement tool. But that’s not the trajectory it’s going in right now.
Let me give you another example more relevant to the political discourse. Because, again, the ChatGPT architecture is based on just taking information from the Internet that it can get for free. And then, having a centralized structure operated by OpenAI, it has a conundrum: If you just take the Internet and use your generative AI tools to form sentences, you could very likely end up with hate speech including racial epithets and misogyny, because the Internet is filled with that. So, how does ChatGPT deal with that? Well, a bunch of engineers sat down and they developed another set of tools, mostly based on reinforcement learning, that allow them to say, “These words are not going to be spoken.” That’s the conundrum of the centralized model. Either it’s going to spew hateful stuff or somebody has to decide what’s sufficiently hateful. But that is not going to be conducive to any type of trust in political discourse, because it could turn out that three or four engineers—essentially a group of white coats—get to decide what people can hear on social and political issues. I believe those tools could be used in a more decentralized way, rather than within the auspices of centralized big companies such as Microsoft, Google, Amazon, and Facebook.
Instead of continuing to move fast and break things, innovators should take a more deliberate stance, you say. Are there some definite no-nos that should guide the next steps toward intelligent machines?
Acemoglu: Yes. And again, let me give you an illustration using ChatGPT. They wanted to beat Google [to market, understanding that] some of the technologies were originally developed by Google. And so, they went ahead and released it. It’s now being used by tens of millions of people, but we have no idea what the broader implications of large language models will be if they are used this way, or how they’ll impact journalism, middle school English classes, or what political implications they will have. Google is not my favorite company, but in this instance, I think Google would be much more cautious. They were actually holding back their large language model. But OpenAI, taking a page from Facebook’s ‘move fast and break things’ code book, just dumped it all out. Is that a good thing? I don’t know. OpenAI has become a multi-billion-dollar company as a result. It was always a part of Microsoft in reality, but now it’s been integrated into Microsoft Bing, while Google lost something like 100 billion dollars in value. So, you see the high-stakes, cutthroat environment we are in and the incentives that that creates. I don’t think we can trust companies to act responsibly here without regulation.
Tech companies have asserted that automation will put humans in a supervisory role instead of just killing all jobs. The robots are on the floor, and the humans are in a back room overseeing the machines’ activities. But who’s to say the back room is not across an ocean instead of on the other side of a wall—a separation that would further enable employers to slash labor costs by offshoring jobs?
Acemoglu: That’s right. I agree with all those statements. I would say, in fact, that’s the usual excuse of some companies engaged in rapid algorithmic automation. It’s a common refrain. But you’re not going to create 100 million jobs of people supervising, providing data, and training to algorithms. The point of providing data and training is that the algorithm can now do the tasks that humans used to do. That’s very different from what I’m calling human complementarity, where the algorithm becomes a tool for humans.
“[Imagine] using AI... for real-time scheduling which might take the form of zero-hour contracts. In other words, I employ you, but I do not commit to providing you any work.”
—Daron Acemoglu, MIT
According to “The Harms of AI,” executives trained to hack away at labor costs have used tech to help, for instance, skirt labor laws that benefit workers. Say, scheduling hourly workers’ shifts so that hardly any ever reach the weekly threshold of hours that would make them eligible for employer-sponsored health insurance coverage and/or overtime pay.
Acemoglu: Yes, I agree with that statement too. Even more important examples would be using AI for monitoring workers, and for real-time scheduling which might take the form of zero-hour contracts. In other words, I employ you, but I do not commit to providing you any work. You’re my employee. I have the right to call you. And when I call you, you’re expected to show up. So, say I’m Starbucks. I’ll call and say ‘Willie, come in at 8am.’ But I don’t have to call you, and if I don’t do it for a week, you don’t make any money that week.
Will the simultaneous spread of AI and the technologies that enable the surveillance state bring about a total absence of privacy and anonymity, as was depicted in the sci-fi film Minority Report?
Acemoglu: Well, I think it has already happened. In China, that’s exactly the situation urban dwellers find themselves in. And in the United States, it’s actually private companies. Google has much more information about you and can constantly monitor you unless you turn off various settings in your phone. It’s also constantly using the data you leave on the Internet, on other apps, or when you use Gmail. So, there is a complete loss of privacy and anonymity. Some people say ‘Oh, that’s not that bad. Those are companies. That’s not the same as the Chinese government.’ But I think the fact that they are using data for individualized, targeted ads raises a lot of issues. It’s also problematic that they’re selling your data to third parties.
In four years, when my children are about to graduate from college, how will AI have changed their career options?
Acemoglu: That goes right back to the earlier discussion of ChatGPT. Programs like GPT-3 and GPT-4 may scuttle a lot of careers without creating huge productivity improvements on their current path. On the other hand, as I mentioned, there are alternative paths that would actually be much better. AI advances are not preordained. It’s not that we know exactly what’s going to happen in the next four years; it’s about trajectory. The current trajectory is one based on automation. And if that continues, lots of careers will be closed to your children. But if the trajectory goes in a different direction and becomes human complementary, who knows? Perhaps they may have some very meaningful new occupations open to them.
Elon Musk, step aside. You may be the richest rich man in the space business, but you’re not first. Musk’s SpaceX corporation is a powerful force, with its weekly launches and visions of colonizing Mars. But if you want a broader view of how wealthy entrepreneurs have shaped space exploration, you might want to look at George Ellery Hale, James Lick, William McDonald or—remember this name—John D. Hooker.
All this comes up now because SpaceX, joining forces with the billionaire Jared Isaacman, has made what sounds at first like a novel proposal to NASA: It would like to see if one of the company’s Dragon spacecraft can be sent to service the fabled, invaluable (and aging) Hubble Space Telescope, last repaired in 2009.
Private companies going to the rescue of one of NASA’s crown jewels? NASA’s mantra in recent years has been to let private enterprise handle the day-to-day of space operations—communications satellites, getting astronauts to the space station, and so forth—while pure science, the stuff that makes history but not necessarily money, remains the province of government. Might that model change?
“We’re working on crazy ideas all the time,” said Thomas Zurbuchen, NASA’s space science chief. “Frankly, that’s what we’re supposed to do.”
It’s only a six-month feasibility study for now; no money will change hands between business and NASA. But Isaacman, who made his fortune in payment-management software before turning to space, suggested that if a Hubble mission happens, it may lead to other things. “Alongside NASA, exploration is one of many objectives for the commercial space industry,” he said on a media teleconference. “And probably one of the greatest exploration assets of all time is the Hubble Space Telescope.”
So it’s possible that at some point in the future, there may be a SpaceX Dragon, perhaps with Isaacman as a crew member, setting out to grapple the Hubble, boost it into a higher orbit, maybe even replace some worn-out components to lengthen its life.
Aerospace companies say privately mounted repair sounds like a good idea. So good that they’ve proposed it already.
The Chandra X-ray telescope, as photographed by space-shuttle astronauts after they deployed it in July 1999. It is attached to a booster that moved it into an orbit 10,000 by 100,000 kilometers from Earth. NASA
Northrop Grumman, one of the United States’ largest aerospace contractors, has quietly suggested to NASA that it might service one of the Hubble’s sister telescopes, the Chandra X-ray Observatory. Chandra was launched into Earth orbit by the space shuttle Columbia in 1999 (Hubble was launched from the shuttle Discovery in 1990), and the two often complement each other, observing the same celestial phenomena at different wavelengths.
As in the case of the SpaceX/Hubble proposal, Northrop Grumman’s Chandra study is at an early stage. But there are a few major differences. For one, Chandra was assembled by TRW, a company that has since been bought by Northrop Grumman. And another company subsidiary, SpaceLogistics, has been sending what it calls Mission Extension Vehicles (MEVs) to service aging Intelsat communications satellites since 2020. Two of these robotic craft have launched so far. The MEVs act like space tugs, docking with their target satellites to provide them with attitude control and propulsion if their own systems are failing or running out of fuel. SpaceLogistics says it is developing a next-generation rescue craft, which it calls a Mission Robotic Vehicle, equipped with an articulated arm to add, relocate, or possibly repair components on orbit.
“We want to see if we can apply this to space-science missions,” says Jon Arenberg, Northrop Grumman’s chief mission architect for science and robotic exploration, who worked on Chandra and, later, the James Webb Space Telescope. He says a major issue for servicing is the exacting specifications needed for NASA’s major observatories; Chandra, for example, records the extremely short wavelengths of X-ray radiation (0.01–10 nanometers).
“We need to preserve the scientific integrity of the spacecraft,” he says. “That’s an absolute.”
But so far, the company says, a mission seems possible. NASA managers have listened receptively. And Northrop Grumman says a servicing mission could be flown for a fraction of the cost of a new telescope.
New telescopes need not be government projects. In fact, NASA’s chief economist, Alexander MacDonald, argues that almost all of America’s greatest observatories were privately funded until Cold War politics made government the major player in space exploration. That’s why this story began with names from the 19th and 20th centuries—Hale, Lick, and McDonald—to which we should add Charles Yerkes and, more recently, William Keck. These were arguably the Elon Musks of their times—entrepreneurs who made millions in oil, iron, or real estate before funding the United States’ largest telescopes. (Hale’s father manufactured elevators—highly profitable in the rebuilding after the Great Chicago Fire of 1871.) The most ambitious observatories, MacDonald calculated for his book The Long Space Age, were about as expensive back then as some of NASA’s modern planetary probes. None of them had very much to do with government.
To be sure, government will remain a major player in space for a long time. “NASA pays the cost, predominantly, of the development of new commercial crew vehicles, SpaceX’s Dragon being one,” MacDonald says. “And now that those capabilities exist, private individuals can also pay to utilize those capabilities.” Isaacman doesn’t have to build a spacecraft; he can hire one that SpaceX originally built for NASA.
“I think that creates a much more diverse and potentially interesting space-exploration future than we have been considering for some time,” MacDonald says.
So put these pieces together: Private enterprise has been a driver of space science since the 1800s. Private companies are already conducting on-orbit satellite rescues. NASA hasn’t said no to the idea of private missions to service its orbiting observatories.
And why does John D. Hooker’s name matter? In 1906, he agreed to put up US $45,000 (about $1.4 million today) to make the mirror for a 100-inch reflecting telescope at Mount Wilson, Calif. One astronomer made the Hooker Telescope famous by using it to determine that the universe, full of galaxies, was expanding.
The astronomer’s name was Edwin Hubble. We’ve come full circle.
In 2001, a team of engineers at a then-obscure R&D company called AC Propulsion quietly began a groundbreaking experiment. They wanted to see whether an electric vehicle could feed electricity back to the grid. The experiment seemed to prove the feasibility of the technology. The company’s president, Tom Gage, dubbed the system “vehicle to grid” or V2G.
The concept behind V2G had gained traction in the late 1990s after California’s landmark zero-emission-vehicle (ZEV) mandate went into effect and compelled automakers to commercialize electric cars. In V2G, environmental-policy wonks saw a potent new application of the EV that might satisfy many interests. For the utilities, it promised an economical way of meeting rising demand for electricity. For ratepayers, it offered cheaper and more reliable electricity services. Purveyors of EVs would have a new public-policy rationale backing up their market. And EV owners would become entrepreneurs, selling electricity back to the grid.
AC Propulsion’s experiment was timely. It occurred in the wake of the California electricity crisis of 2000 and 2001, when mismanaged deregulation, market manipulation, and environmental catastrophe combined to unhinge the power grid. Some observers thought V2G could prevent the kinds of price spikes and rolling blackouts then plaguing the Golden State. Around the same time, however, General Motors and other automakers were in the process of decommissioning their battery EV fleets, the key component of V2G.
AC Propulsion’s president, Tom Gage, explains the company’s vehicle-to-grid technology at a 2001 conference in Seattle. Photo-illustration: Max-o-matic; photo source: Alec Brooks
The AC Propulsion experiment thus became an obscure footnote in the tortuous saga of the green automobile. A decade later, in the 2010s, the battery EV began an astounding reversal of fortune, thanks in no small part to the engineers at ACP, whose electric-drive technology informed the development of the Roadster, the car that launched Tesla Motors. By the 2020s, automakers around the world were producing millions of EVs a year. And with the revival of the EV, the V2G concept was reborn.
If a modern electronics- and software-laden car can be thought of as a computer on wheels, then an electric car capable of discharging electricity to the grid might be considered a power plant on wheels. And indeed, that’s how promoters of vehicle-to-grid technology perceive the EV.
Keep in mind, though, that electricity’s unique properties pose problems to anyone who would make a business of producing and delivering it. Electricity is a commodity that is bought and sold, and yet unlike most other commodities, it cannot easily be stored. Once electricity is generated and passes into the grid, it is typically used almost immediately. If too much or too little electricity is present in the power grid, the network can suddenly become unbalanced.
At the turn of the 20th century, utilities promoted the use of electric truck fleets to soak up excess electricity. Photo-illustration: Max-o-matic; photo source: M&N/Alamy
Some operators of early direct-current power plants at the turn of the 20th century solved the problem of uneven power output from their generators by employing large banks of rechargeable lead-acid batteries, which served as a kind of buffer to balance the flow of electrons. As utilities shifted to more reliable alternating-current systems, they phased out these costly backup batteries.
Then, as electricity entrepreneurs expanded power generation and transmission capacity, they faced the new problem of what to do with all the cheap off-peak, nighttime electricity they could now produce. Utilities reconsidered batteries, not as stationary units but in EVs. As the historian Gijs Mom has noted, enterprising utility managers essentially outsourced the storage of electricity to the owners and users of the EVs then proliferating in northeastern U.S. cities. Early utility companies like Boston Edison and New York Edison organized EV fleets, favoring electric trucks for their comparatively capacious batteries.
In the early years of the automobile, battery-powered electric cars were competitive with cars fueled by gasoline and other types of propulsion. Photo-illustration: Max-o-matic; image source: Shawshots/Alamy
The problems of grid management that EVs helped solve faded after World War I. In the boom of the 1920s, U.S. utility barons such as Samuel Insull massively expanded the country’s grid systems. During the New Deal era, the federal government began funding the construction of giant hydropower plants and pushed transmission into rural areas. By the 1950s, the grid was moving electricity across time zones and national borders, tying in diverse sources of supply and demand.
The need for large-scale electrochemical energy storage as a grid-stabilizing source of demand disappeared. When utilities considered storage technology at all in the succeeding decades, it was generally in the form of pumped-storage hydropower, an expensive piece of infrastructure that could be built only in hilly terrain.
It wasn’t until the 1990s that the electric car reemerged as a possible solution to problems of grid electricity. In 1997, Willett Kempton, a professor at the University of Delaware, and Steve Letendre, a professor at Green Mountain College, in Vermont, began publishing a series of journal articles that imagined the bidirectional EV as a resource for electricity utilities. The researchers estimated that, if applied to the task of generating electricity, all of the engines in the U.S. light-duty vehicle fleet would produce around 16 times the output of stationary power plants. Kempton and Letendre also noted that the average light vehicle was used only around 4 percent of the time. Therefore, they reasoned, a fleet of bidirectional EVs could be immensely useful to utilities, even if it was only a fraction of the size of the conventional vehicle fleet.
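A back-of-envelope sketch of the kind of estimate Kempton and Letendre made; every figure below (fleet size, per-vehicle power, grid capacity) is an illustrative assumption, not their published data.

```python
# Back-of-envelope version of a Kempton/Letendre-style estimate.
# All numbers below are illustrative assumptions, not their figures.

fleet_size = 200e6            # assumed U.S. light-duty vehicles
power_per_vehicle_kw = 100    # assumed usable power per vehicle, kW

fleet_capacity_gw = fleet_size * power_per_vehicle_kw / 1e6   # kW -> GW
stationary_capacity_gw = 1200                                 # assumed grid capacity, GW

print(f"Fleet capacity: {fleet_capacity_gw:,.0f} GW "
      f"(~{fleet_capacity_gw / stationary_capacity_gw:.0f}x stationary plants)")

# Vehicles are driven only ~4% of the time (per the article), so nearly
# the whole fleet is parked, and potentially grid-connected, at any moment.
utilization = 0.04
print(f"Share of fleet parked at any moment: {1 - utilization:.0%}")
```

With these placeholder figures the fleet works out to roughly the 16-fold ratio the researchers described; the point is the spirit of the estimate, not a reproduction of it.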
AC Propulsion cofounder Wally Rippel converted a Volkswagen microbus into an electric vehicle while he was still a student at Caltech. Photo-illustration: Max-o-matic; photo source: Herald Examiner Collection/Los Angeles Public Library
The engineers at AC Propulsion (ACP) were familiar with the basic precepts of bidirectional EV power. The company was the brainchild of Wally Rippel and Alan Cocconi, Caltech graduates who had worked in the late 1980s and early 1990s as consultants for AeroVironment, then a developer of lightweight experimental aircraft. The pair made major contributions to the propulsion system for the Impact, a battery-powered concept car that AeroVironment built under contract for General Motors. Forerunner of the famous EV1, the Impact was regarded as the most advanced electric car of its day, thanks to its solid-state power controls, induction motor, and integrated charger. The vehicle inspired California’s ZEV mandate, instituted in 1990. As Cocconi told me, the Impact was bidirectional-capable, although that function wasn’t fully implemented.
AeroVironment had encouraged its engineers to take creative initiative in developing the Impact, but GM tightly managed efforts to translate the idiosyncratic car into a production prototype, which rankled Cocconi and Rippel. Cocconi was also dismayed by the automaker’s decision to equip the production car with an off-board rather than onboard charger, which he believed would limit the car’s utility. In 1992, he and Rippel quit the project and, with Hughes Aircraft engineer Paul Carosa, founded ACP, to further develop battery electric propulsion. The team applied their technology to a two-seat sportscar called the tzero, which debuted in January 1997.
Video: Electric Car tzero 0-60 3.6 sec faster than Tesla Roadster (www.youtube.com)
Through the 1990s and into the early 2000s, ACP sold its integrated propulsion systems to established automakers, including Honda, Volkswagen, and Volvo, for use in production models being converted into EVs. For car companies, this was a quick and cheap way to gain experience with battery electric propulsion while also meeting any quota they may have been subject to under the California ZEV mandate.
By the turn of the millennium, however, selling EV propulsion systems had become a hard way to make a living. In early 2000, when GM announced it had ceased production of the EV1, it signaled that the automaking establishment was abandoning battery electric cars. ACP looked at other ways of marketing its technology and saw an opportunity in the California electricity crisis then unfolding.
Traditionally, the electricity business combined several discrete services, including some designed to meet demand and others designed to stabilize the network. Since the 1930s, these services had been provided by regulated, vertically integrated utilities, which operated as quasi-monopolies. The most profitable was peaking power—electricity delivered when demand was highest. The less-lucrative stabilization services balanced electricity load and generation to maintain system frequency at 60 hertz, the standard for the United States. In a vertically integrated utility, peaking services essentially subsidized stabilization services.
With deregulation in the 1990s, these aggregated services were unbundled and commodified. In California, regulators separated generation from distribution and sold 40 percent of installed capacity to newly created independent power producers that specialized in peaking power. Grid-stabilization functions were reborn as “ancillary services.” Major utilities were compelled to purchase high-cost peaking power, and because retail prices were capped, they could not pass their costs on to consumers. Moreover, deregulation disincentivized the construction of new power plants. At the turn of the millennium, nearly 20 percent of the state’s generating capacity was idled for maintenance.
General Motors’ Impact debuted at the 1990 Los Angeles Auto Show. It was regarded as the most advanced electric vehicle of its era. Photo-illustration: Max-o-matic; photo source: Alec Brooks
The newly marketized grid was highly unstable, and in 2000 and 2001, things came to a head. Hot weather caused a demand spike, and the accompanying drought (the beginning of the multidecade southwestern megadrought) cut hydropower capacity. As Californians turned on their air conditioners, peaking capacity had to be kept in operation longer. Then market speculators got into the act, sending wholesale prices up 800 percent and bankrupting Pacific Gas & Electric. Under these combined pressures, grid reliability eroded, resulting in rolling blackouts.
With the grid crippled, ACP’s Gage contacted Kempton to discuss whether bidirectional EV power could help. Kempton identified frequency regulation as the optimal V2G market because it was the most profitable of the ancillary services, constituting about 80 percent of what the California Independent System Operator, the nonprofit set up to manage the deregulated grid, then spent on such services.
The result was a demonstration project, a task organized by Alec Brooks, manager of ACP’s tzero production. Like Rippel and Cocconi, Brooks was a Caltech graduate and part of the close-knit community of EV enthusiasts that emerged around the prestigious university. After earning a Ph.D. in civil engineering in 1981, Brooks had joined AeroVironment, where he managed the development of Sunraycer, an advanced solar-powered demonstration EV built for GM, and the Impact. He recruited Rippel and Cocconi for both jobs. During the 1990s, Brooks formed a team at AeroVironment that provided support for GM’s EV programs until he too tired of the corporate routine and joined ACP in 1999.
Before cofounding AC Propulsion, Alan Cocconi worked on Sunraycer, a solar-powered car for GM. Here, he’s testing the car’s motor-drive power electronics. Photo-illustration: Max-o-matic; photo source: Alec Brooks
Working with Gage and Kempton, and consulting with the ISO, Brooks set out to understand how the EV might function as a utility resource.
ACP adapted its second-generation AC-150 drivetrain, which had bidirectional capability, for this application. As Cocconi recalled, the bidirectional function had originally been intended for a different purpose. In the 1990s, batteries had far less capacity than they do today, and for the small community of EV users, the prospect of running out of juice and becoming stranded was very real. In such an emergency, a bidirectional EV with charge to spare could come to the rescue.
With funding from the California Air Resources Board, the team installed an AC-150 drive in a Volkswagen Beetle. The system converted AC grid power to DC power to charge the battery and could also convert DC power from the battery to AC power that could feed both external stand-alone loads and the grid. Over the course of the project, the group successfully demonstrated bidirectional EV power using simulated dispatch commands from the ISO’s computerized energy-management system.
This pair of graphs shows how AC Propulsion’s AC-150 drivetrain performed in a demonstration of grid frequency regulation. The magenta line in the upper graph tracks grid frequency centered around 60 hertz. The lower graph indicates power flowing between the grid and the drivetrain; a negative value means power is being drawn from the grid, while a positive value means power is being sent back to the grid.
Photo-illustration: Max-o-matic; photo source: Alec Brooks
The experiment demonstrated the feasibility of the vehicle-to-grid approach, yet it also revealed the enormous complexities involved in deploying the technology. One unpleasant surprise, Brooks recalled, came with the realization that the electricity crisis had artificially inflated the ancillary-services market. After California resolved the crisis—basically by re-regulating and subsidizing electricity—the bubble burst, making frequency regulation as a V2G service a much less attractive business proposition.
The prospect of integrating EV storage batteries into legacy grid systems also raised concerns about control. The computers responsible for automatically signaling generators to ramp up or down to regulate frequency were programmed to control large thermoelectric and hydroelectric plants, which respond gradually to signals. Batteries, by contrast, respond nearly instantaneously to commands to draw or supply power. David Hawkins, an engineer who served as a chief aide to the ISO’s vice president of operations and advised Brooks, noted that the responsiveness of batteries had unintended consequences when they were used to regulate frequency. In one experiment involving a large lithium-ion battery, the control computer fully charged or discharged the unit in a matter of minutes, leaving no spare capacity to regulate the grid.
In principle, this problem might have been solved with software to govern the charging and discharging. The main barrier to V2G in the early 2000s, it turns out, was that the battery EV would have to be massively scaled up before it could serve as a practical energy-storage resource. And the auto industry had just canceled the battery EV. In its place, automakers promised the fuel-cell electric car, a type of propulsion system that does not easily lend itself to bidirectional power flow.
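As a rough illustration of the software fix alluded to above, here is a minimal sketch of a governor that regulates frequency while steering the battery's state of charge back toward midrange, so the pack is never driven completely full or empty. The gains and power limit are assumed values, not parameters from any deployed system.

```python
def regulation_power_kw(freq_hz, soc,
                        nominal_hz=60.0,
                        max_power_kw=15.0,
                        k_freq=50.0,   # kW per Hz of deviation (assumed gain)
                        k_soc=20.0):   # kW per unit of SoC error (assumed gain)
    """Power command for one bidirectional EV.

    Positive = discharge to the grid, negative = charge from it.
    """
    # Frequency term: low grid frequency -> inject power; high -> absorb it.
    p_freq = k_freq * (nominal_hz - freq_hz)

    # SoC term: nudge the state of charge back toward 50% so the battery
    # always keeps headroom to regulate in either direction.
    p_soc = k_soc * (soc - 0.5)

    # Clamp to the drivetrain's power rating.
    p = p_freq + p_soc
    return max(-max_power_kw, min(max_power_kw, p))

# Underfrequency event with a nearly full battery: both terms discharge.
print(regulation_power_kw(59.95, soc=0.8))  # 2.5 + 6.0 = 8.5 kW to the grid
```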
The dramatic revival of the battery EV in the late 2000s and early 2010s led by Tesla Motors and Nissan revived prospects for the EV as a power-grid resource. This EV renaissance spawned a host of R&D efforts in bidirectional EV power, including ECOtality and the Mid-Atlantic Grid Interactive Cars Consortium. The consortium, organized by Kempton in conjunction with PJM, the regional transmission organization responsible for much of the eastern United States, used a car equipped with an AC-150 drivetrain to further study the use of V2G in the frequency-regulation market.
Over time, however, the research focus in bidirectional EV applications shifted from the grid to homes and commercial buildings. In the wake of the Fukushima nuclear disaster in 2011, for instance, Nissan developed and marketed a vehicle-to-building (V2B) charging system that enabled its Leaf EV to provide backup power.
In 2001, AC Propulsion engineers installed an AC-150 drivetrain in a Volkswagen Beetle to demonstrate the feasibility of V2G technology for regulating frequency on the power grid. Photo-illustration: Max-o-matic; photo source: Alec Brooks
The automaker later entered an R&D partnership with Fermata Energy, a Virginia-based company that develops bidirectional EV power systems. Founded by the entrepreneur and University of Virginia researcher David Slutzky in 2010, Fermata considered and then ruled out the frequency-regulation market, on the grounds that it was too small and unscalable.
Slutzky now believes that early markets for bidirectional EV power will emerge in supplying backup power and supplementing peak loads for individual commercial buildings. Those applications will require institutional fleets of EVs. Slutzky and other proponents of EV power have been pressing for a more favorable regulatory environment, including access to the subsidies that states such as California offer to users of stationary storage batteries.
Advocates believe that V2G can help pay for EV batteries. While interest in this idea seems likely to grow as EVs proliferate, the prospect of electric car owners becoming power entrepreneurs appears more distant. Hawkins, the engineer who advised Brooks, holds that the main barriers to V2G are not so much technological as economic: Viable markets need to emerge. The everyday participant in V2G, he argues, would face the difficult task of attempting to arbitrage the difference between wholesale and retail prices while still paying the retail rate. In principle, EV owners could take advantage of the same feed-in tariffs and net-metering schemes designed to enable homeowners to sell surplus solar power back to the grid. But marketing rooftop solar power has proven more complicated and costly for suburbanites than initially assumed, and the same would likely hold true for EV power.
Another major challenge is how to balance the useful lifetime of EV batteries in transportation and non-vehicle applications. That question turns on understanding how EV batteries will perform and age in stationary-power roles. Users would hardly be further ahead, after all, if they substantially degraded their batteries in the act of paying them off. Grid managers could also face problems if they come to depend on EV batteries that prove unreliable or become unavailable as driving patterns change.
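A toy calculation makes the bind concrete: the wholesale-retail spread captured per kilowatt-hour has to clear the battery wear incurred by cycling the pack. All prices and battery figures below are illustrative assumptions.

```python
# Does one kilowatt-hour of V2G arbitrage pay for itself?
# Every number below is an illustrative assumption.

retail_price = 0.15      # $/kWh paid to charge the car (assumed)
wholesale_peak = 0.25    # $/kWh earned discharging at peak (assumed)
round_trip_eff = 0.85    # charger and battery losses (assumed)

battery_cost = 150.0     # $ per kWh of pack capacity (assumed)
cycle_life = 2000        # full cycles before end of life (assumed)
degradation = battery_cost / cycle_life  # $ of wear per kWh cycled

revenue = wholesale_peak * round_trip_eff
cost = retail_price + degradation
print(f"Revenue per kWh: ${revenue:.3f}")
print(f"Cost per kWh:    ${cost:.3f} (incl. ${degradation:.3f} battery wear)")
print("Profitable" if revenue > cost else "Loses money")
```

On these assumed numbers the arbitrage loses money, which is exactly the bind described above; cheaper packs or wider price spreads would change the verdict.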
In short, the core conundrum of V2G is the conflict of interest that comes from repurposing privately owned automobiles as power plants. Scaling up this technology will require intimate collaboration between automaking and electricity-making, enterprises with substantially different revenue models and systems of regulation. At the moment, the auto industry does not have a clear interest in V2G.
On the other hand, rising electricity demand, concerns about fossil fuels, greenhouse gases, and climate change, and the challenges of managing intermittent renewable energy have all created new justifications for bidirectional EV power. With the proliferation of EVs over the last decade, more demonstrations of the technology are being staged for a host of applications—sometimes expressed as V2X, or vehicle-to-everything. Some automakers, notably Nissan and now Ford, already sell bidirectional EVs, and others are experimenting with the technology. Enterprises are emerging to equip and manage demonstrations of V2B, V2G, and V2X for utilities and big institutional users of electricity. Some ambitious pilot projects are underway, notably in the Dutch city of Utrecht.
Back in 2002, at the end of their experiment, the engineers at AC Propulsion concluded that what V2G really needed was a powerful institutional champion. They went on to make further important contributions to EV technology. Brooks and Rippel worked for the nascent Tesla Motors, while Cocconi continued at ACP until a cancer diagnosis led him to reevaluate his life. In the mid-2000s, Cocconi sold his stake in the company and devoted himself to aviation, his first love, developing remote-controlled solar-powered aircraft. The rebirth of the battery electric car in the 2010s and 2020s reaffirmed the efforts of these three visionary pioneers.
A strong V2G patron has yet to emerge. Nevertheless, the idea of an off-the-shelf energy storage unit that also provides transportation and pays for itself is likely to remain attractive enough to sustain ongoing interest. Who knows? The electric car might still one day become a power plant on wheels.
The author thanks Alec Brooks, Alan Cocconi, David Hawkins, David Slutzky, and Wally Rippel for sharing their experiences. Parts of this article are adapted from the author’s new book, Age of Auto Electric (MIT Press, 2022).
Update 4 Nov. 2:45 p.m. EDT: Rocket Lab says its launch was successful, but booster recovery was not. It says it lost telemetry signals from the descending first stage during reentry.
“As standard procedure, we pull the helicopter from the recovery zone if this happens,” a company spokesperson said.
“If at first you don’t succeed….” Rocket Lab, the space launch company with two launchpads on the New Zealand coast, almost did succeed in May at something very difficult: To make its Electron booster reusable (and therefore far less expensive to fly), it tried catching the used first stage—in midair—with a helicopter as it descended by parachute toward the Pacific Ocean.
It came oh-so-close. On its first try, Rocket Lab’s helicopter successfully snagged the parachute with a hook at the end of a long cable—a remarkable piece of planning and flying. But the pilot, in the company’s words, “detected different load characteristics than previously experienced in testing,” and let the rocket fall in the water for a ship to recover it.
So try, try again. Rocket Lab is now making a new recovery attempt, this time with a rocket carrying an atmospheric-research satellite for the Swedish National Space Agency. If the helicopter can catch and hold onto the booster, it will fly it back to Rocket Lab’s production complex near Auckland for possible reuse.
“We’re eager to get the helicopter back out there,” said Peter Beck, Rocket Lab’s CEO and founder.
During Rocket Lab’s launch on 2 May 2022, the helicopter was able to catch the Electron rocket booster, but load issues forced the pilot to let the rocket fall to the water. Rocket Lab
“No changes since the May recovery,” said Morgan Bailey of Rocket Lab in an email to IEEE Spectrum, “but our team has carried out a number of capture rehearsals with test stages in preparation for this launch.”
Satellite operators are watching closely because, after Elon Musk’s SpaceX, Rocket Lab has established itself as a contender in space launches, especially for companies and government agencies with smaller payloads. This is its 32nd Electron launch since 2017. “They’ve become a major player,” said Chad Anderson of Space Capital, a venture capital firm.
Many of the world’s launch bases have historically been near the ocean for good reason: If rockets failed, open water is a relatively safe place for debris to fall. That’s why the United States uses coastal Florida and California, and the European Space Agency uses Kourou, French Guiana, on the northern coast of South America. Rocket Lab started in New Zealand and is expanding to the Virginia coast.
The downside is that saltwater and rocket hardware don’t mix very well; the water is corrosive, and cleanup is expensive. SpaceX goes to great lengths to land its boosters on barges or back at Cape Canaveral; Rocket Lab, whose boosters are smaller, could change its commercial space business dramatically if helicopter recoveries become routine.
The name of the mission for its first booster-recovery attempt was a playful “There and Back Again”; the second, suggested by an American space enthusiast, is “Catch Me if You Can.”
Here’s the plan: The Electron rocket, 18 meters tall, lifts off over the southern Pacific, aiming to place the satellite in a sun-synchronous orbit 585 kilometers high. The first stage, which made up 80 percent of the vehicle’s mass at launch, burns out after the first 70 km. Two minutes and 32 seconds into the flight, it drops off, following a long arc that, on past flights, would have sent it crashing into the ocean, about 280 km downrange.
This artist's conception envisions the helicopter, having successfully snagged the booster's parachute, carrying it back to dry land. A recovery ship is on standby. Rocket Lab
But Rocket Lab has equipped it with heat shielding, a guidance computer and control thrusters, protecting and steering it as it falls tailfirst at up to 8,300 kilometers per hour. Temperatures reach 2,400 °C as it’s slowed by the thickening air around it.
At an altitude of 13 km a small drogue parachute is deployed, followed by a main chute less than a minute later. They slow the booster’s descent to about 36 km/h.
The helicopter, a Sikorsky S-92, is waiting in the landing zone, trailing a grappling hook on a long cable. If all goes well, the helicopter flies over the descending rocket and snags the parachute cables about 2,000 meters above the ocean’s surface. Then it flies back to land with the rocket hanging underneath.
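A quick check, using only the figures quoted above, shows how brief the window is between the planned snag altitude and splashdown:

```python
# Rough timing of the catch window, from figures quoted in the article.

descent_speed_ms = 36.0 / 3.6     # 36 km/h under the main chute = 10 m/s
snag_altitude_m = 2000.0          # where the helicopter grabs the chute

window_s = snag_altitude_m / descent_speed_ms
print(f"Descent speed: {descent_speed_ms:.0f} m/s")
print(f"Snag altitude to splashdown: {window_s:.0f} s (~{window_s / 60:.1f} min)")
```

Roughly three minutes, in other words, for the pilot to line up, snag the chute, and confirm the load before the booster hits the water.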
“The main advantage of air capture is that we’re not cleaning salt water out of it,” said Rocket Lab’s Bailey in an earlier interview. “We’re still in the test phase part of the program, and in terms of time and cost savings, that’ll be determined.”
But engines recovered from the ocean after previous launches have been refurbished and test-fired successfully, says Rocket Lab. Like many engineering efforts, it’s a step at a time.
“Being able to refly Electron without too much rework is the aim of the game,” says the company. “If we can achieve high level performance with engine parts recovered from the ocean, imagine what we can do with returned dry engines.”
The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.
Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.
But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.
How is AI currently being used to design the next generation of chips?
Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.
Heather GorrMathWorks
Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.
What are the benefits of using AI for chip design?
Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
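As a rough illustration of the surrogate workflow Gorr describes (MathWorks' own tooling is MATLAB-based; this Python sketch uses a hypothetical expensive_physics_model standing in for any costly simulation):

```python
import numpy as np

def expensive_physics_model(x):
    # Hypothetical stand-in for a costly physics-based simulation.
    return np.sin(3 * x) + 0.5 * x**2

# 1. Run the expensive model at a handful of design points.
x_train = np.linspace(-1, 1, 15)
y_train = expensive_physics_model(x_train)

# 2. Fit a cheap surrogate (here, a degree-6 polynomial).
surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=6)

# 3. Run Monte Carlo sweeps on the surrogate instead of the solver.
samples = np.random.uniform(-1, 1, 100_000)
print(f"Surrogate mean response: {surrogate(samples).mean():.4f}")

# 4. Spot-check the surrogate against the true model.
x_test = np.random.uniform(-1, 1, 100)
err = np.max(np.abs(surrogate(x_test) - expensive_physics_model(x_test)))
print(f"Max abs error on spot checks: {err:.2e}")
```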
So it’s like having a digital twin in a sense?
Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you can tweak and tune, trying different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.
So, it’s going to be more efficient and, as you said, cheaper?
Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.
We’ve talked about the benefits. How about the drawbacks?
Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it's not going to be as accurate as that precise model that we’ve developed over the years.
Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It's a case where you might have models to predict something and different parts of it, but you still need to bring it all together.
One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.
How can engineers use AI to better prepare and extract insights from hardware or sensor data?
Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
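As a small illustration of the resampling and frequency-domain exploration Gorr mentions, here is a sketch on a synthetic sensor stream; the signal, rates, and use of scipy are all assumptions for the example.

```python
import numpy as np
from scipy import signal

# Synthetic "sensor" stream: a 50 Hz component buried in noise, sampled at 1 kHz.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)

# Resample to a common rate shared by other sensors (say, 200 Hz).
fs_new = 200
x_rs = signal.resample(x, int(t.size * fs_new / fs))

# Explore the frequency domain: the 50 Hz line should dominate the spectrum.
freqs, psd = signal.welch(x_rs, fs=fs_new)
print(f"Dominant frequency: {freqs[np.argmax(psd)]:.1f} Hz")
```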
What should engineers and designers consider when using AI for chip design?
Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.
How do you think AI will affect chip designers’ jobs?
Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.
How do you envision the future of AI and chip design?
Gorr: It's very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
For about as long as engineers have talked about beaming solar power to Earth from space, they’ve had to caution that it was an idea unlikely to become real anytime soon. Elaborate designs for orbiting solar farms have circulated for decades—but since photovoltaic cells were inefficient, any arrays would need to be the size of cities. The plans got no closer to space than the upper shelves of libraries.
That’s beginning to change. Right now, in a sun-synchronous orbit about 525 kilometers overhead, there is a small experimental satellite called the Space Solar Power Demonstrator One (SSPD-1 for short). It was designed and built by a team at the California Institute of Technology, funded by donations from the California real estate developer Donald Bren, and launched on 3 January—among 113 other small payloads—on a SpaceX Falcon 9 rocket.
“To the best of our knowledge, this would be the first demonstration of actual power transfer in space, of wireless power transfer,” says Ali Hajimiri, a professor of electrical engineering at Caltech and a codirector of the program behind SSPD-1, the Space Solar Power Project.
The Caltech team is waiting for a go-ahead from the operators of a small space tug to which it is attached, providing guidance and attitude control. If all goes well, SSPD-1 will spend at least five to six months testing prototype components of possible future solar stations in space. In the next few weeks, the project managers hope to unfold a lightweight frame, called DOLCE (short for Deployable on-Orbit ultraLight Composite Experiment), on which parts of future solar arrays could be mounted. Another small assembly on the spacecraft contains samples of 32 different types of photovoltaic cells, intended to see which would be most efficient and robust. A third part of the vehicle contains a microwave transmitter, set up to prove that energy from the solar cells can be sent to a receiver. For this first experiment, the receivers are right there on board the spacecraft, but if it works, an obvious future step would be to send electricity via microwave to receivers on the ground.
Caltech’s Space Solar Power Demonstrator, shown orbiting Earth in this artist’s conception, was launched on 3 January. Caltech
One can dismiss the 50-kilogram SSPD-1 as yet another nonstarter, but a growing army of engineers and policymakers take solar energy from space seriously. Airbus, the European aerospace company, has been testing its own technology on the ground, and government agencies in China, Japan, South Korea, and the United States have all mounted small projects. “Recent technology and conceptual advances have made the concept both viable and economically competitive,” said Frazer-Nash, a British engineering consultancy, in a 2021 report to the U.K. government. Engineers working on the technology say microwave power transmissions would be safe, unlike ionizing radiation, which is harmful to people or other things in its path.
No single thing has happened to start this renaissance. Instead, say engineers, several advances are coming together.
For one thing, the cost of launching hardware into orbit keeps dropping, led by SpaceX and other, smaller companies such as Rocket Lab. SpaceX has a simplified calculator on its website, showing that if you want to launch a 50-kg satellite into sun-synchronous orbit, they’ll do it for US $275,000, or about $5,500 per kilogram.
Meanwhile, photovoltaic technology has improved, step by step. Lightweight electronic components keep getting better and cheaper. And there is political pressure as well: Governments and major companies have made commitments to decarbonize in the battle against global climate change, committing to renewable energy sources to replace fossil fuels.
Most solar power, at least for the foreseeable future, will be Earth-based, which will be cheaper and easier to maintain than anything anyone can launch into space. Proponents of space-based solar power say that for now, they see it as best used for specialty needs, such as remote outposts, places recovering from disasters, or even other space vehicles.
But Hajimiri says don’t underestimate the advantages of space, such as unfiltered sunlight that is far stronger than what reaches the ground and is uninterrupted by darkness or bad weather—if you can build an orbiting array light enough to be practical.
Most past designs, dictated by the technology of their times, included impossibly large truss structures to hold solar panels and wiring to route power to a central transmitter. The Caltech team would dispense with all that. An array would consist of thousands of independent tiles as small as 100 square centimeters, each with its own solar cells, transmitter, and avionics. They might be loosely connected, or they might even fly in formation.
Time-lapse images show the experimental DOLCE frame for an orbiting solar array being unfolded in a clean room. Caltech
“The analogy I like to use is that it’s like an army of ants instead of an elephant,” says Hajimiri. Transmission to receivers on the ground could be by phased array—microwave signals from the tiles synchronized so that they can be aimed with no moving parts. And the parts—the photovoltaic cells with their electronics—could perhaps be so lightweight that they’re flexible. New algorithms could keep their signals focused.
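A minimal sketch of the phased-array idea: each tile applies a phase offset proportional to its position so the combined beam points toward a chosen angle with no moving parts. The frequency, spacing, and tile count below are illustrative assumptions, not Caltech's design values.

```python
import numpy as np

c = 3e8                     # speed of light, m/s
f = 10e9                    # assumed microwave frequency: 10 GHz
wavelength = c / f
k = 2 * np.pi / wavelength  # wavenumber

# Linear array of 16 tiles spaced half a wavelength apart (illustrative).
positions = np.arange(16) * wavelength / 2

# To steer the beam to angle theta from broadside, each tile's phase must
# cancel its geometric path difference, k * x * sin(theta).
theta = np.radians(20)
phases = -k * positions * np.sin(theta)

# Verify: the array factor should peak at the steering angle.
angles = np.radians(np.linspace(-90, 90, 1801))
field = np.exp(1j * (k * np.outer(positions, np.sin(angles)) + phases[:, None]))
af = np.abs(field.sum(axis=0))
print(f"Beam peak at {np.degrees(angles[np.argmax(af)]):.1f} degrees")
```

Changing theta re-aims the beam purely by recomputing the phases, which is the "no moving parts" point.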
“That’s the kind of thing we’re talking about,” said Harry Atwater, a coleader of the Caltech project, as SSPD-1 was being planned. “Really gossamer-like, ultralight, the limits of mass-density deployable systems.”
If it works out, in 30 years maybe there could be orbiting solar power fleets, adding to the world’s energy mix. In other words, as a recent report from Frazer-Nash concluded, this is “a potential game changer.”
This year’s £150,000 allocation scrapped amid discussions about including groups critical of government
The Home Office has decided not to award £150,000-worth of grants to Windrush community organisations, amid internal disagreement about whether funds should be given to groups that have expressed criticism of the government on social media.
In December, civil servants approved applications from 15 organisations to receive about £10,000 of funding each from the Windrush community engagement fund, a grant established in the wake of the 2018 citizenship scandal.
The technical challenge of missile defense has been compared with that of hitting a bullet with a bullet. Then there is the still tougher economic challenge of using an expensive interceptor to kill a cheaper target—like hitting a lead bullet with a golden one.
Maybe trouble and money could be saved by shooting down such targets with a laser. Once the system was designed, built, and paid for, the cost per shot would be low. Such considerations led planners at the Pentagon to seek a solution from Lockheed Martin, which has just delivered a 300-kilowatt laser to the U.S. Army. The new weapon combines the output of a large bundle of fiber lasers of varying frequencies to form a single beam of white light. This laser has been undergoing tests in the lab, and it should see its first field trials sometime in 2023. General Atomics, a military contractor in San Diego, is also developing a laser of this power for the Army based on what’s known as the distributed-gain design, which has a single aperture.
Both systems offer the prospect of being inexpensive to use. The electric bill itself would range “from US $5 to $10” for a pulse lasting a few seconds, says Michael Perry, the vice president in charge of laser systems for General Atomics.
Why are we getting ray guns only now, more than a century after H.G. Wells imagined them in his sci-fi novel The War of the Worlds? Put it down partly to the rising demand for cheap antimissile defense, but it’s mainly the result of technical advances in high-energy lasers.
The old standby for powerful lasers employed chemical reactions in flowing gas. That method was clumsy, heavy, and dangerous, and the laser itself became a flammable target for enemies to attack. The advantage was that these chemical lasers could be made immensely powerful, a far cry from the puny pulsed ruby lasers that wowed observers back in the 1960s by punching holes in razor blades (at power levels jocularly measured in “gillettes”).
“With lasers, if you can see it, you can kill it.” —Robert Afzal, Lockheed Martin
By 2014, fiber lasers had reached the point where they could be considered for weapons, and one 30-kW model was installed on the USS Ponce, where it demonstrated the ability to shoot down speedboats and small drones at relatively close range. The 300-kW fiber lasers being employed now in the two Army projects emit about 100 kW in optical power, enough to burn through much heftier targets (not to mention quite a few gillettes) at considerable distances.
“A laser of that class can be effective against a wide variety of targets, including cruise missiles, mortars, UAVs, and aircraft,” says Perry. “But not reentry vehicles [launched by ballistic missiles].” Those are the warheads, and to ward them off, he says, you’d probably have to hit the rocket when it’s still in the boost phase, which would mean placing your laser in orbit. Laser tech is still far from performing such a feat.
Even so, these futuristic weapons will no doubt find plenty of applications in today’s world. Israel made news in April by field-testing an airborne antimissile laser called Iron Beam, a play on the name Iron Dome, the missile system it has used to down rockets fired from Gaza. The laser system, reportedly rated at about 100 kW, is still not in service and hasn’t seen combat, but one day it may be able to replace some, if not all, of Iron Dome’s missiles with photons. Other countries have similar capabilities, or say they do. In May, Russia said it had used a laser to incinerate a Ukrainian drone from 5 kilometers away, a claim that Ukraine’s president, Volodymyr Zelenskyy, derided.
Video: A missile is destroyed by a low-power, 2013 version of Lockheed Martin’s fiber laser (www.youtube.com)
Not all ray guns must be lasers, though. In March, Taiwan News reported that Chinese researchers had built a microwave weapon that in principle could be placed in orbit from where its 5-megawatt pulses could fry the electronic heart of an enemy satellite. But making such a machine in the lab is quite different from operating it in the field, not to mention in outer space, where supplying power and removing waste heat constitute major problems.
Because laser performance falls off in bad weather, lasers can’t be relied on by themselves to defend critically important targets. They must instead be paired with kinetic weapons—missiles or bullets—to create a layered defense system.
“With lasers, if you can see it, you can kill it; typically rain and snow are not big deterrents,” says Robert Afzal, an expert on lasers at Lockheed Martin. “But a thundercloud—that’s hard.”
Afzal says that the higher up a laser is placed, the less interference it will face, but there is a trade-off. “With an airplane you have the least amount of resources—least volume, least weight—that is available to you. On a ship, you have a lot more resources available, but you’re in the maritime atmosphere, which is pretty hazy, so you may need a lot more power to get to the target. And the Army is in between: It deals with closer threats, like rockets and mortars, and they need a deep magazine, because they deal with a lot more targets.”
In every case, the point is to use expensive antimissile missiles only when you must. Israel opted to pursue laser weapons in part because its Iron Dome missiles cost so much more than the unguided, largely homemade rockets they defend against. Some of the military drones that Russia and Ukraine are now flying wouldn’t break the budget of the better-heeled sort of hobbyist. And it would be a Pyrrhic victory indeed to shoot them from the sky with projectiles so costly that you went broke.
This article appears in the January 2023 print issue as “Economics Drives a Ray-Gun Resurgence.”
Top Tech 2023: A Special Report
Preview exciting technical developments for the coming year.
Can This Company Dominate Green Hydrogen?
Fortescue will need more electricity-generating capacity than France.
Pathfinder 1 could herald a new era for zeppelins
A New Way to Speed Up Computing
Blue microLEDs bring optical fiber to the processor.
The Personal-Use eVTOL Is (Almost) Here
Opener’s BlackFly is a pulp-fiction fever dream with wings.
Baidu Will Make an Autonomous EV
Its partnership with Geely aims at full self-driving mode.
China Builds New Breeder Reactors
The power plants could also make weapons-grade plutonium.
Economics Drives a Ray-Gun Resurgence
Lasers should be cheap enough to use against drones.
A Cryptocurrency for the Masses or a Universal ID?
What Worldcoin’s killer app will be is not yet clear.
IBM’s Condor chip will boast more than 1,000 qubits.
Vagus-nerve stimulation promises to help treat autoimmune disorders.
New satellites can connect directly to your phone.
The E.U.’s first exascale supercomputer will be built in Germany.
A dozen more tech milestones to watch for in 2023.
The U.S. has a long and disturbing habit of ignoring the violence it commits overseas as well as at home.
The post Americans Don’t Care About the Iraqi Dead. They Don’t Even Care About Their Own. appeared first on The Intercept.
D.C. hawks say American military might brought order to the Middle East, but without U.S. meddling, regional rivals finally made a deal.
The post The Key Factor in the Saudi-Iran Deal: Absolutely No U.S. Involvement appeared first on The Intercept.
Club classifies women as “the slut” and “the hunter of men’s attention” and takes men to vulnerable countries in search of sex.
The post Coaches Teach Men to Get Sex Using Techniques Inspired by a Sex Trafficker of Children and Adolescents appeared first on The Intercept.
Are you a dedicated Chrome user? That’s nice to hear. But first, consider whether there are any essential Chrome extensions currently missing from your browsing life. Here we’re going to share with you the 10 best Chrome extensions that are perfect for everyone. So let’s start.
When you have too many passwords to remember, LastPass remembers them for you.
This Chrome extension is an easy way to save time and increase security. It’s a password manager that logs you into all of your accounts; you only need to remember one thing: your LastPass master password.
Features
MozBar is an SEO toolbar extension that makes it easy to analyze your web pages’ SEO while you surf. You can customize your search to see data for a particular region or for all regions. You get metrics such as website and domain authority and link profile, and the status column tells you whether there are any no-followed links to the page. You can also compare link metrics. There is a pro version of MozBar, too.
Grammarly is a real-time grammar- and spell-checking tool for online writing. It checks spelling, grammar, and punctuation as you type, and has a dictionary feature that suggests related words. If you write on a mobile phone, Grammarly also has a mobile keyboard app.
VidIQ is a SaaS product and Chrome Extension that makes it easier to manage and optimize your YouTube channels. It keeps you informed about your channel's performance with real-time analytics and powerful insights.
Features
ColorZilla is a browser extension that allows you to find out the exact color of any object in your web browser. This is especially useful when you want to match elements on your page to the color of an image.
Features
Honey is a Chrome extension that lets you save products from a website and notifies you when they become available at a lower price. It’s also one of the top Chrome extensions for finding coupon codes whenever you shop online.
Features
GMass (or Gmail Mass) lets users compose and send mass emails using Gmail. It’s a great tool because you can use it as a replacement for a third-party email-sending platform, and it does a lot to boost your emailing functionality on the platform.
It’s a Chrome extension for geeks that enables you to highlight and save what you see on the web.
It’s designed by Notion, an alternative to Google’s workspace tools that helps teams craft better ideas and collaborate effectively.
Features
If you work online, you need to surf the internet to get your business done, and often there is no time to read or analyze something in the moment. But it’s important that you get to it. Notion Web Clipper will help you with that.
WhatFont is a Chrome extension that allows web designers to easily identify and compare fonts on a page. The first time you use it on any page, WhatFont makes a copy of the page, uses it to find out what fonts are present, and generates an image showing those fonts in different sizes. Besides the obvious websites like Google or Amazon, you can also use it on sites where embedded fonts are used.
SimilarWeb is an SEO add-on for both Chrome and Firefox. It allows you to check website traffic and key metrics for any website, including engagement rate, traffic ranking, keyword ranking, and traffic sources. This is a good tool if you are looking to find new and effective SEO strategies as well as analyze trends across the web.
Features
Everyone knows how to install an extension on a PC, but most people don’t know how to install one on an Android phone, so here is how to do it on Android.
1. Download Kiwi browser from Play Store and then Open it.
2. Tap the three dots at the top right corner and select Extension.
3. Click on (+From Store) to access chrome web store or simple search chrome web store and access it.
4. Once you have found an extension, click Add to Chrome. A message will pop up asking you to confirm your choice. Hit OK to install the extension in the Kiwi browser.
5. To manage extensions on the browser, tap the three dots in the upper right corner. Then select Extensions to access a catalog of installed extensions that you can disable, update or remove with just a few clicks.
Your Chrome extensions should install on Android, but there’s no guarantee all of them will work, because Chrome extensions are not optimized for Android devices.
We hope this list of the 10 best Chrome extensions for everyone helps you pick the right ones. We selected these extensions by matching their features to the needs of different categories of people. Let me know in the comments section which extension you like the most.
If electric vertical takeoff and landing aircraft do manage to revolutionize transportation, the date of 5 October 2011 may live on in aviation lore. That was the day when a retired mechanical engineer named Marcus Leng flew a home-built eVTOL across his front yard in Warkworth, Ont., Canada, startling his wife and several of his friends.
“So, take off, flew about 6 feet above the ground, pitched the aircraft towards my wife and the two couples that were there, who were behind automobiles for protection, and decided to do a skidding stop in front of them. Nobody had an idea that this was going to be happening,” recalls Leng.
But as he looked to set his craft down, he saw a wing starting to dig into his lawn. “Uh-oh, this is not good,” he thought. “The aircraft is going to spin out of control. But what instead happened was the propulsion systems revved up and down so rapidly that as the aircraft did that skidding turn, that wing corner just dragged along my lawn exactly in the direction I was holding the aircraft, and then came to a stable landing,” says Leng. At that point, he knew that such an aircraft was viable “because to have that sort of an interference in the aircraft and for the control systems to be able to control it was truly remarkable.”
It was the second time anyone, anywhere had ever flown an eVTOL aircraft.
Today, some 350 organizations in 48 countries are designing, building, or flying eVTOLs, according to the Vertical Flight Society. These companies are fueled by more than US $7 billion and perhaps as much as $10 billion in startup funding. And yet, 11 years after Leng’s flight, no eVTOLs have been delivered to customers or are being produced at commercial scale. None have even been certified by a civil aviation authority in the West, such as the U.S. Federal Aviation Administration or the European Union Aviation Safety Agency.
But 2023 looks to be a pivotal year for eVTOLs. Several well-funded startups are expected to reach important early milestones in the certification process. And the company Leng founded, Opener, could beat all of them by making its first deliveries—which would also be the first for any maker of an eVTOL.
As of late October, the company had built at its facility in Palo Alto, Calif., roughly 70 aircraft—considerably more than are needed for simple testing and evaluation. It had flown more than 30 of them. And late in 2022, the company had begun training a group of operators on a state-of-the-art virtual-reality simulator system.
Opener’s highly unusual, single-seat flier is intended for personal use rather than transporting passengers, which makes it almost unique. Opener intends to have its aircraft classified as an “ultralight,” enabling it to bypass the rigorous certification required for commercial-transport and other aircraft types. The certification issue looms as a major unknown over the entire eVTOL enterprise, at least in the United States, because, as the blog Jetlaw.com noted last August, “the FAA has no clear timeline or direction on when it will finalize a permanent certification process for eVTOL.”
Opener’s strategy is not without risks, either. For one, there’s no guarantee that the FAA will ultimately agree that Opener’s aircraft, called BlackFly, qualifies as an ultralight. And not everyone is happy with this approach. “My concern is, these companies that are saying they can be ultralights and start flying around in public are putting at risk a $10 billion [eVTOL] industry,” says Mark Moore, founder and chief executive of Whisper Aero in Crossville, Tenn. “Because if they crash, people won’t know the difference” between the ultralights and the passenger eVTOLs, he adds. “To me, that’s unacceptable.” Previously, Moore led a team at NASA that designed a personal-use eVTOL and then served as engineering director at Uber’s Elevate initiative.
A BlackFly eVTOL took off on 1 October, 2022, at the Pacific Airshow in Huntington Beach, Calif. Irfan Khan/Los Angeles Times/Getty Images
Opener’s aircraft is as singular as its business model. It’s a radically different kind of aircraft, and it sprang almost entirely from Leng’s fertile mind.
“As a kid,” he says, “I already envisioned what it would be like to have an aircraft that could seamlessly do a vertical takeoff, fly, and land again without any encumbrances whatsoever.” It was a vision that never left him, from a mechanical-engineering degree at the University of Toronto, management jobs in the aerospace industry, starting a company and making a pile of money by inventing a new kind of memory foam, and then retiring in 1996 at the age of 36.
The fundamental challenge to designing a vertical-takeoff aircraft is endowing it with both vertical lift and efficient forward cruising. Most eVTOL makers achieve this by physically tilting multiple large rotors from a vertical rotation axis, for takeoff, to a horizontal one, for cruising. But the mechanism for tilting the rotors must be extremely robust, and therefore it inevitably adds substantial complexity and weight. Such tilt-rotors also entail significant compromises and trade-offs in the size of the rotors and their placement relative to the wings.
Opener’s BlackFly ingeniously avoids having to make those trade-offs and compromises. It has two wings, one in front and one behind the pilot. Affixed to each wing are four motors and rotors—and these never change their orientation relative to the wings. Nor do the wings move relative to the fuselage. Instead, the entire aircraft rotates in the air to transition between vertical and horizontal flight.
To control the aircraft, the pilot moves a joystick, and those motions are instantly translated by redundant flight-control systems into commands that alter the relative thrust among the eight motor-propellers.
Visually, it’s an astounding aircraft, like something from a 1930s pulp sci-fi magazine. It’s also a triumph of engineering.
Leng says the journey started for him in 2008, when “I just serendipitously stumbled upon the fact that all the key technologies for making electric VTOL human flight practical were coming to a nexus.”
The journey that made Leng’s dream a reality kicked into high gear in 2014 when a chance meeting with investor Sebastian Thrun at an aviation conference led to Google cofounder Larry Page investing in Leng’s project.
Leng started in his basement in 2010, spending his own money on a mélange of home-built and commercially available components. The motors were commercial units that Leng modified himself, the motor controllers were German and off the shelf, the inertial-measurement unit was open source and based on an Arduino microcontroller. The batteries were modified model-aircraft lithium-polymer types.
“The main objective behind this was proof of concept,” he says. “I had to prove it to myself, because up until that point, they were just equations on a piece of paper. I had to get to the point where I knew that this could be practical.”
After his front-yard flight in 2011, there followed several years of refining and rebuilding all of the major components until they achieved the specifications Leng wanted. “Everything on BlackFly is from first principles,” he declares.
The motors started out generating 160 newtons (36 pounds) of static thrust. It was way too low. “I actually tried to purchase motors and motor controllers from companies that manufactured those, and I specifically asked them to customize those motors for me, by suggesting a number of changes,” he says. “I was told that, no, those changes won’t work.”
So he started designing his own brushless AC motors. “I did not want to design motors,” says Leng. “In the end, I was stunned at how much improvement we could make by just applying first principles to this motor design.”
To increase the power density, he had to address the tendency of a motor in an eVTOL to overheat at high thrust, especially during hover, when cooling airflow over the motor is minimal. He began by designing a system to force air through the motor. Then he began working on the rotor of the motor (not to be confused with the rotor wings that lift and propel the aircraft). This is the spinning part of a motor, which is typically a single piece of electrical steel. It’s an iron alloy with very high magnetic permeability.
By layering the steel of the rotor, Leng was able to greatly reduce its heat generation, because the thinner layers of steel limited the eddy currents in the steel that create heat. Less heat meant he could use higher-strength neodymium magnets, which would otherwise become demagnetized. Finally, he rearranged those magnets into a configuration called a Halbach array. In the end Leng’s motors were able to produce 609 newtons (137 lbs.) of thrust.
Overall, the 2-kilogram motors are capable of sustaining 20 kilowatts, for a power density of 10 kilowatts per kilogram, Leng says. It’s an extraordinary figure. One of the few motor manufacturers claiming a density in that range is H3X Technologies, which says its HPDM-250 clocks in at 12 kW/kg.
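Those figures are easy to sanity-check with a quick sketch:

```python
# Quick check of the motor numbers quoted above.
motor_mass_kg = 2.0
sustained_power_kw = 20.0
power_density = sustained_power_kw / motor_mass_kg   # -> 10.0 kW/kg

thrust_before_n, thrust_after_n = 160.0, 609.0       # early vs. final motors
print(f"power density: {power_density:.1f} kW/kg")
print(f"thrust gain:   {thrust_after_n / thrust_before_n:.2f}x")  # ~3.8x
```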
The brain of the BlackFly consists of three independent flight controllers, which calculate the aircraft’s orientation and position, based on readings from the inertial-measurement units, GPS receivers, and magnetometers. They also use pitot tubes to measure airspeed. The flight controllers continually cross-check their outputs to make sure they agree. They also feed instructions, based on the operator’s movement of the joystick, to the eight motor controllers (one for each motor).
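As a rough illustration of that cross-checking idea (Opener’s actual flight-control logic is not public), a median vote across three redundant command channels masks a single faulty controller:

```python
# Minimal sketch of triple-redundant voting: three independent controllers
# each propose per-motor commands, and a median vote masks one bad channel.
# Purely illustrative, not Opener's implementation.
from statistics import median

def vote(cmds_a, cmds_b, cmds_c):
    """Per-motor median of three redundant command vectors (8 motors)."""
    return [median(trio) for trio in zip(cmds_a, cmds_b, cmds_c)]

# Controller B disagrees on motor 0; the median masks the outlier.
a = [0.52, 0.48, 0.50, 0.50, 0.51, 0.49, 0.50, 0.50]
b = [0.90, 0.48, 0.50, 0.50, 0.51, 0.49, 0.50, 0.50]  # faulty reading
c = [0.53, 0.47, 0.50, 0.50, 0.51, 0.49, 0.50, 0.50]
print(vote(a, b, c))  # motor 0 -> 0.53, not 0.90
```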
Equipped with these sophisticated flight controllers, the fly-by-wire BlackFly is similar in that regard to the hobbyist drones that rely on processors and clever algorithms to avoid the tricky manipulations of sticks, levers, and pedals required to fly a traditional fixed- or rotary-wing aircraft.
That sophisticated, real-time control will allow a far larger number of people to consider purchasing a BlackFly when it becomes available. In late November, Opener had not disclosed a likely purchase price, but in the past the company had suggested that BlackFly would cost as much as a luxury SUV. So who might buy it? CEO Ken Karklin points to several distinct groups of potential buyers who have little in common other than wealth.
There are early tech adopters and also people who are already aviators and are “passionate about the future of electric flight, who love the idea of being able to have their own personal vertical-takeoff-and-landing, low-maintenance, clean aircraft that they can fly in rural and uncongested areas,” Karklin says. “One of them is a business owner. He has a plant that’s a 22-mile drive but would only be a 14-mile flight, and he wants to install charging infrastructure on either end and wants to use it to commute every day. We love that.”
Others are less certain about how, or even whether, this market segment will establish itself. “When it comes to personal-use eVTOLs, we are really struggling to see the business case,” says Sergio Cecutta, founder and partner at SMG Consulting, where he studies eVTOLs among other high-tech transportation topics. “I’m not saying they won’t sell. It’s how many will they sell?” He notes that Opener is not the only eVTOL maker pursuing a path to success through the ultralight or some other specialized FAA category. As of early November, the list included Alauda Aeronautics, Air, Alef, Bellwether Industries, Icon Aircraft, Jetson, Lift Aircraft, and Ryse Aero Technologies.
What makes Opener special? Both Karklin and Leng emphasize the value of all that surrounds the BlackFly aircraft. For example, there are virtual-reality-based simulators that they say enable them to fully train an operator in 10 to 15 hours. The aircraft themselves are heavily instrumented: “Every flight, literally, there’s over 1,000 parameters that are recorded, some of them at 1,000 hertz, some 100 Hz, 10 Hz, and 1 Hz,” says Leng. “All that information is stored on the aircraft and downloaded to our database at the end of the flight. When we go and make a software change, we can do what’s called regression testing by running that software using all the data from our previous flights. And we can compare the outputs against what the outputs were during any specific flight and can automatically confirm that the changes that we’ve made are without any issues. And we can also compare, to see if they make an improvement.”
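A minimal sketch of the replay-style regression testing Leng describes, with the log format and tolerance as assumptions for illustration:

```python
# Replay logged sensor frames through two controller versions and flag any
# output that drifts beyond a tolerance. An empty result means the software
# change is behavior-preserving on all recorded flights.

def regression_test(flight_log, old_controller, new_controller, tol=1e-6):
    """flight_log: iterable of frames, each a dict of sensor readings
    including a timestamp "t"; controllers map a frame to motor outputs."""
    diffs = []
    for frame in flight_log:
        old_out = old_controller(frame)
        new_out = new_controller(frame)
        drift = max(abs(o - n) for o, n in zip(old_out, new_out))
        if drift > tol:
            diffs.append((frame["t"], drift))
    return diffs
```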
Ed Lu, a former NASA astronaut and executive at Google, sits on Opener’s safety-review board. He says what impressed him most when he first met the BlackFly team was “the fact that they had based their entire development around testing. They had a wealth of flight data from flying this vehicle in a drone mode, an unmanned mode.” Having all that data was key. “They could make their decisions based not on analysis, but after real-world operations,” Lu says, adding that he is particularly impressed by Opener’s ability to manage all the flight data. “It allows them to keep track of every aircraft, what sensors are in which aircraft, which versions of code, all the way down to the flights, to what happened in each flight, to videos of what’s happening.” Lu thinks this will be a huge advantage once the aircraft is released into the “real” world.
Karklin declines to comment on whether an ultralight approval, which is governed by what the FAA designates “Part 103,” might be an opening move toward an FAA type certification in the future. “This is step one for us, and we are going to be very, very focused on personal air vehicles for recreational and fun purposes for the foreseeable future,” he says. “But we’ve also got a working technology stack here and an aircraft architecture that has considerable utility beyond the realm of Part-103 [ultralight] aircraft, both for crewed and uncrewed applications.” Asked what his immediate goals are, Karklin responds without hesitating. “We will be the first eVTOL company, we believe, in serial production, with a small but steadily growing revenue and order book, and with a growing installed base of cloud-connected aircraft that with every flight push all the telemetry, all the flight behavior, all the component behavior, all the operator-behavior data representing all of this up to the cloud, to be ingested by our back office, and processed. And that provides us a lot of opportunity.”
This article appears in the January 2023 print issue as “Finally, an eVTOL You Can Buy Soonish.”
A rocket built by Indian startup Skyroot has become the country’s first privately developed launch vehicle to reach space, following a successful maiden flight earlier today. The suborbital mission is a major milestone for India’s private space industry, say experts, though more needs to be done to nurture the fledgling sector.
The Vikram-S rocket, named after the founder of the Indian space program, Vikram Sarabhai, lifted off from the Indian Space Research Organization’s (ISRO) Satish Dhawan Space Centre, on India’s east coast, at 11:30 a.m. local time (1 a.m. eastern time). It reached a peak altitude of 89.5 kilometers (55.6 miles), crossing the 80-km line that NASA counts as the boundary of space, but falling just short of the 100 km recognized by the Fédération Aéronautique Internationale.
In the longer run, India’s space industry has ambitions of capturing a significant chunk of the global launch market.
Pawan Kumar Chandana, cofounder of the Hyderabad-based startup, says the success of the launch is a major victory for India’s nascent space industry, but the buildup to the mission was nerve-racking. “We were pretty confident on the vehicle, but, as you know, rockets are very notorious for failure,” he says. “Especially in the last 10 seconds of countdown, the heartbeat was racing up. But once the vehicle had crossed the launcher and then went into the stable trajectory, I think that was the moment of celebration.”
At just 6 meters (20 feet) long and weighing only around 550 kilograms (0.6 tonnes), the Vikram-S is not designed for commercial use. Today’s mission, called Prarambh, which means “the beginning” in Sanskrit, was designed to test key technologies that will be used to build the startup’s first orbital rocket, the Vikram I. The rocket will reportedly be capable of lofting as much as 480 kg up to a 500-km altitude and is slated for a maiden launch next October.
Skyroot cofounder Pawan Kumar Chandana standing in front of the Vikram-S rocket at the Satish Dhawan Space Centre, on the east coast of India.Skyroot
In particular, the mission has validated Skyroot’s decision to go with a novel all-carbon fiber structure to cut down on weight, says Chandana. It also allowed the company to test 3D-printed thrusters, which were used for spin stabilization in Vikram-S but will power the upper stages of its later rockets. Perhaps the most valuable lesson, though, says Chandana, was the complexity of interfacing Skyroot's vehicle with ISRO’s launch infrastructure. “You can manufacture the rocket, but launching it is a different ball game,” he says. “That was a great learning experience for us and will really help us accelerate our orbital vehicle.”
Skyroot is one of several Indian space startups looking to capitalize on recent efforts by the Indian government to liberalize its highly regulated space sector. Due to the dual-use nature of space technology, ISRO has historically had a government-sanctioned monopoly on most space activities, says Rajeswari Pillai Rajagopalan, director of the Centre for Security, Strategy and Technology at the Observer Research Foundation think tank, in New Delhi. While major Indian engineering players like Larsen & Toubro and Godrej Aerospace have long supplied ISRO with components and even entire space systems, the relationship has been one of a supplier and vendor, she says.
But in 2020, Finance Minister Nirmala Sitharaman announced a series of reforms to allow private players to build satellites and launch vehicles, carry out launches, and provide space-based services. The government also created the Indian National Space Promotion and Authorisation Centre (InSpace), a new agency designed to act as a link between ISRO and the private sector, and affirmed that private companies would be able to take advantage of ISRO’s facilities.
The first launch of a private rocket from an ISRO spaceport is a major milestone for the Indian space industry, says Rajagopalan. “This step itself is pretty crucial, and it’s encouraging to other companies who are looking at this with a lot of enthusiasm and excitement,” she says. But more needs to be done to realize the government’s promised reforms, she adds. The Space Activities Bill that is designed to enshrine the country’s space policy in legislation has been languishing in draft form for years, and without regulatory clarity, it’s hard for the private sector to justify significant investments. “These are big, bold statements, but these need to be translated into actual policy and regulatory mechanisms,” says Rajagopalan.
Skyroot’s launch undoubtedly signals the growing maturity of India’s space industry, says Saurabh Kapil, associate director in PwC’s space practice. “It’s a critical message to the Indian space ecosystem, that we can do it, we have the necessary skill set, we have those engineering capabilities, we have those manufacturing or industrialization capabilities,” he says.
The Vikram-S rocket blasting off from the Satish Dhawan Space Centre, on the east coast of India.Skyroot
However, crossing this technical milestone is only part of the challenge, he says. The industry also needs to demonstrate a clear market for the kind of launch vehicles that companies like Skyroot are building. While private players are showing interest in launching small satellites for applications like agriculture and infrastructure monitoring, he says, these companies will be able to build sustainable businesses only if they are allowed to compete for more lucrative government and defense-sector contracts.
In the longer run, though, India’s space industry has ambitions of capturing a significant chunk of the global launch market, says Kapil. ISRO has already developed a reputation for both reliability and low cost—its 2014 mission to Mars cost just US $74 million, one-ninth the cost of a NASA Mars mission launched the same week. That is likely to translate to India’s private space industry, too, thanks to a considerably lower cost of skilled labor, land, and materials compared with those of other spacefaring nations, says Kapil. “The optimism is definitely there that because we are low on cost and high on reliability, whoever wants to build and launch small satellites is largely going to come to India,” he says.
Non-fungible tokens (NFTs) are among the most popular digital assets today, capturing the attention of cryptocurrency investors, whales, and people around the world. People find it amazing that some users spend thousands or even millions of dollars on a single NFT-based image of a monkey or another token when you could simply take a screenshot for free. So here we answer some frequently asked questions about NFTs.
NFT stands for non-fungible token, which is a cryptographic token on a blockchain with unique identification codes that distinguish it from other tokens. NFTs are unique and not interchangeable, which means no two NFTs are the same. An NFT can be a unique artwork, GIF, image, video, audio album, in-game item, collectible, and so on.
A blockchain is a distributed digital ledger that allows for the secure storage of data. By recording any kind of information—such as bank account transactions, the ownership of Non-Fungible Tokens (NFTs), or Decentralized Finance (DeFi) smart contracts—in one place, and distributing it to many different computers, blockchains ensure that data can’t be manipulated without everyone in the system being aware.
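A toy hash chain illustrates why such a ledger resists tampering; this sketch leaves out the consensus, signatures, and distribution across many nodes that real blockchains add:

```python
# Each block commits to the previous block's hash, so quietly editing any
# record invalidates every block after it.
import hashlib, json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64
for record in ["alice pays bob 1", "bob mints NFT #7", "bob sells NFT #7"]:
    block = {"prev": prev, "data": record}
    prev = block_hash(block)
    chain.append(block)

chain[1]["data"] = "bob mints NFT #8"            # tamper with history...
print(block_hash(chain[1]) == chain[2]["prev"])  # ...False: chain broken
```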
The value of an NFT comes from its ability to be traded freely and securely on the blockchain, which is not possible with other current digital-ownership solutions. The NFT points to its location on the blockchain but doesn’t necessarily contain the digital property. For example, if you replace one bitcoin with another, you will still have the same thing; but if you buy a non-fungible item, such as a movie ticket, it is impossible to replace it with any other movie ticket, because each ticket is unique to a specific time and place.
One of the unique characteristics of non-fungible tokens (NFTs) is that they can be tokenised to create a digital certificate of ownership that can be bought, sold and traded on the blockchain.
As with crypto-currency, records of who owns what are stored on a ledger that is maintained by thousands of computers around the world. These records can’t be forged because the whole system operates on an open-source network.
NFTs also contain smart contracts—small computer programs that run on the blockchain—that give the artist, for example, a cut of any future sale of the token.
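The royalty logic such a contract encodes can be sketched in a few lines. This is illustrative Python rather than an actual on-chain contract, and the 10 percent rate is an assumption:

```python
# Hedged sketch of resale-royalty logic: every resale automatically routes
# a fixed cut back to the original artist.

ROYALTY_RATE = 0.10  # assumed 10% cut to the artist on each resale

def settle_sale(price: float) -> dict:
    """Split a resale price between the original artist and the seller."""
    royalty = price * ROYALTY_RATE
    return {"artist": royalty, "seller": price - royalty}

print(settle_sale(2_000.0))  # {'artist': 200.0, 'seller': 1800.0}
```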
Non-fungible tokens (NFTs) aren’t cryptocurrencies, but they do use blockchain technology. Many NFTs are based on Ethereum, where the blockchain serves as a ledger for all the transactions related to a given NFT and the properties it represents.
5) How to make an NFT?
Anyone can create an NFT. All you need is a digital wallet, some Ethereum, and a connection to an NFT marketplace where you’ll be able to upload and sell your creations.
When you purchase an NFT, that purchase is recorded on the blockchain—the ledger of transactions—and that entry acts as your proof of ownership.
The value of an NFT varies a lot based on the digital asset up for grabs. People use NFTs to trade and sell digital art, so when creating an NFT, you should consider the popularity of your digital artwork along with historical statistics.
In the year 2021, a digital artist called Pak created an artwork called The Merge. It was sold on the Nifty Gateway NFT market for $91.8 million.
Non-fungible tokens can be used in investment opportunities. One can purchase an NFT and resell it at a profit. Certain NFT marketplaces let sellers of NFTs keep a percentage of the profits from sales of the assets they create.
Many people want to buy NFTs because it lets them support the arts and own something cool from their favorite musicians, brands, and celebrities. NFTs also give artists an opportunity to program in continual royalties if someone buys their work. Galleries see this as a way to reach new buyers interested in art.
There are many places to buy digital assets, like OpenSea, and their policies vary. On NBA Top Shot, for instance, you sign up for a waitlist that can be thousands of people long. When a digital asset goes on sale, you are occasionally chosen to purchase it.
To mint an NFT, you must pay a gas fee to process the transaction on the Ethereum blockchain, but you can mint your NFT on a different blockchain called Polygon to avoid paying gas fees. This option is available on OpenSea; it simply means that your NFT will trade on Polygon’s blockchain rather than Ethereum’s. Mintable also allows you to mint NFTs for free, without paying any gas fees.
The answer is no. Non-fungible tokens are minted on the blockchain using cryptocurrencies such as Ethereum, Solana, Polygon, and so on. Once a non-fungible token is minted, the transaction is recorded on the blockchain, and the contract or license is awarded to whoever holds that token in their wallet.
You can sell your work and creations by attaching a license to them on the blockchain, where ownership can be transferred. This lets you get exposure without losing full ownership of your work. Some of the most successful projects include CryptoPunks, Bored Ape Yacht Club, The Sandbox, World of Women, and so on. These NFT projects have gained popularity globally and are owned by celebrities and other successful entrepreneurs. Owning one of these NFTs gives you an automatic ticket to exclusive business meetings and life-changing connections.
That’s a wrap. I hope you found this article enlightening; I’ve answered some questions with my limited knowledge of NFTs. If you have any questions or suggestions, feel free to drop them in the comment section below. I also have a question for you: is Bitcoin an NFT? Let me know in the comment section below.
More than 2m households fell into fuel poverty last year, and in one community in north-east England many must make a daily choice between heating and eating. Video producers Maeve Shearlaw and Chris Cherry visited a centre in Shiremoor, North Tyneside, that is supporting people through the cost of living crisis, and saw how mouldy properties and prepayment meters are exacerbating problems for the most vulnerable people
The 19-seater Dornier 228 propeller plane that took off into the cold blue January sky looked ordinary at first glance. Spinning its left propeller, however, was a 2-megawatt electric motor powered by two hydrogen fuel cells—the right side ran on a standard kerosene engine—making it the largest aircraft flown on hydrogen to date. Val Miftakhov, founder and CEO of ZeroAvia, the California startup behind the 10-minute test flight in Gloucestershire, England, called it a “historical day for sustainable aviation.”
Los Angeles–based Universal Hydrogen plans to test a 50-seat hydrogen-powered aircraft by the end of February. Both companies promise commercial flights of retrofitted turboprop aircraft by 2025. French aviation giant Airbus is going bigger with a planned 2026 demonstration flight of its iconic A380 passenger airplane, which will fly using hydrogen fuel cells and by burning hydrogen directly in an engine. And Rolls Royce is making headway on aircraft engines that burn pure hydrogen.
The aviation industry, responsible for some 2.5 percent of global carbon emissions, has committed to net-zero emissions by 2050. Getting there will require several routes, including sustainable fuels, hybrid-electric engines, and battery-electric aircraft.
Hydrogen is another potential route. Whether used to make electricity in fuel cells or burned in an engine, it combines with oxygen to emit water vapor. If green hydrogen scales up for trucks and ships, it could be a low-cost fuel without the environmental issues of batteries.
Flying on hydrogen brings storage and aircraft-certification challenges, but aviation companies are doing the groundwork now for hydrogen flight by 2035. “Hydrogen is headed off to the sky, and we’re going to take it there,” says Amanda Simpson, vice president for research and technology at Airbus Americas.
The most plentiful element, hydrogen is also the lightest—key for an industry fighting gravity—packing three times the energy of jet fuel by weight. The problem with hydrogen is its volume. For transport, it has to be stored in heavy tanks either as a compressed high-pressure gas or a cryogenic liquid.
ZeroAvia is using compressed hydrogen gas, since it is already approved for road transport. Its test airplane had two hydrogen fuel cells and tanks sitting inside the cabin, but the team is now thinking creatively about a compact system with minimal changes to aircraft design to speed up certification in the United States and Europe. The fuel cells’ added weight could reduce flying range, but “that’s not a problem, because aircraft are designed to fly much further than they’re used,” says vice president of strategy James McMicking.
The company has backing from investors that include Bill Gates and Jeff Bezos; partnerships with British Airways and United Airlines; and 1,500 preorders for its hydrogen-electric power-train system, half of which are for smaller, 400-kilometer-range 9- to 19-seaters.
By 2027, ZeroAvia plans to convert larger, 70-seater turboprop aircraft with twice the range, used widely in Europe. The company is developing 5-MW electric motors for those, and it plans to switch to more energy-dense liquid hydrogen to save space and weight. The fuel is novel for the aviation industry and could require a longer regulatory approval process, McMicking says.
Next will come a 10-MW power train for aircraft with 100 to 150 seats, “the workhorses of the industry,” he says. Those planes—think Boeing 737—are responsible for 60 percent of aviation emissions. Making a dent in those with hydrogen will require much more efficient fuel cells. ZeroAvia is working on proprietary high-temperature fuel cells for that, McMicking says, with the ability to reuse the large amounts of waste heat generated. “We have designs and a technology road map that takes us into jet-engine territory for power,” he says.
Universal Hydrogen
Universal Hydrogen, which counts Airbus, GE Aviation, and American Airlines among its strategic investors, is placing bets on liquid hydrogen. The startup, “a hydrogen supply and logistics company at our core,” wants to ensure a seamless delivery network for hydrogen aviation as it catches speed, says founder and CEO Paul Eremenko. The company sources green hydrogen, turns it into liquid, and puts it in relatively low-tech insulated aluminum tanks that it will deliver via road, rail, or ship. “We want them certified by the Federal Aviation Administration for 2025, which means they can’t be a science project,” he says.
The cost of green hydrogen is expected to be on par with kerosene by 2025, Eremenko says. But “there’s nobody out there with an incredible hydrogen-airplane solution. It’s a chicken-and-egg problem.”
To crack it, Universal Hydrogen partnered with leading fuel-cell-maker Plug Power to develop a few thousand conversion kits for regional turboprop airplanes. The kits swap the engine in its streamlined housing (also known as nacelle) for a fuel-cell stack, power electronics, and a 2-MW electric motor. While the company’s competitors use batteries as buffers during takeoff, Eremenko says Universal uses smart algorithms to manage fuel cells, so they can ramp up and respond quickly. “We are the Nespresso of hydrogen,” he says. “We buy other people’s coffee, put it into capsules, and deliver to customers. But we have to build the first coffee machine. We’re the only company incubating the chicken and egg at the same time.”
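Universal’s actual control algorithms are proprietary, but the general idea of managing fuel-cell ramp-up without a battery buffer can be sketched as a slew-rate-limited power setpoint. The ramp figure below is an assumption:

```python
# Illustrative sketch (not Universal Hydrogen's algorithm): limit how fast
# the commanded fuel-cell power may change, so the stack ramps smoothly
# toward demand instead of relying on a battery to absorb transients.

FC_MAX_KW = 2_000.0    # 2-MW power train, per the article
RAMP_KW_PER_S = 400.0  # allowed slew rate -- an assumed figure

def next_setpoint(current_kw: float, demand_kw: float, dt_s: float) -> float:
    """Move toward the demanded power without exceeding the ramp limit."""
    step = max(-RAMP_KW_PER_S * dt_s,
               min(RAMP_KW_PER_S * dt_s, demand_kw - current_kw))
    return min(FC_MAX_KW, max(0.0, current_kw + step))

# Takeoff demand: power climbs toward 2 MW in rate-limited steps.
p = 200.0
for _ in range(5):
    p = next_setpoint(p, 2_000.0, dt_s=1.0)
    print(f"{p:.0f} kW")  # 600, 1000, 1400, 1800, 2000
```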
This rendering of an Airbus A380 demonstrator flight (presently slated for 2026) reveals current designs on an aircraft that’s expected to fly using fuel cells and by burning hydrogen directly in the engine. Airbus
Fuel cells have a few advantages over a large central engine. They allow manufacturers to spread out smaller propulsion motors over an aircraft, giving them more design freedom. And because there are no high-temperature moving parts, maintenance costs can be lower. For long-haul aircraft, however, the weight and complexity of high-power fuel cells make hydrogen-combustion engines appealing.
Airbus is considering both fuel-cell and combustion propulsion for its ZEROe hydrogen aircraft system. It has partnered with German automotive fuel-cell-maker Elring Klinger and, for direct combustion engines, with CFM International, a joint venture between GE Aviation and Safran. Burning liquid hydrogen in today’s engines is still expected to require slight modifications, such as a shorter combustion chamber and better seals.
Airbus is also evaluating hybrid propulsion concepts with a hydrogen-engine-powered turbine and a hydrogen-fuel-cell-powered motor on the same shaft, says Simpson, of Airbus Americas. “Then you can optimize it so you use both propulsion systems for takeoff and climb, and then turn one off for cruising.”
The company isn’t limiting itself to simple aircraft redesign. Hydrogen tanks could be stored in a cupola on top of the plane, pods under the wings, or a large tank at the back, Simpson says. Without liquid fuel in the wings, as in traditional airplanes, she says, “you can optimize wings for aerodynamics, make them thinner or longer. Or maybe a blended-wing body, which could be very different. This opens up the opportunity to optimize aircraft for efficiency.” Certification for such new aircraft could take years, and Airbus isn’t expecting commercial flights until 2035.
Conventional aircraft made today will be around in 2050 given their 25- to 30-year life-span, says Robin Riedel, an analyst at McKinsey & Co. Sustainable fuels are the only green option for those. He says hydrogen could play a role there, through “power-to-liquid technology, where you can mix hydrogen and captured carbon dioxide to make aviation fuel.”
Even then, Riedel thinks hydrogen will likely be a small part of aviation’s sustainability solution until 2050. “By 2070, hydrogen is going to play a much bigger role,” he says. “But we have to get started on hydrogen now.” The money that Airbus and Boeing are putting into hydrogen is a small fraction of aerospace, he says, but big airlines investing in hydrogen companies or placing power-train orders “shows there is desire.”
The aviation industry has to clean up if it is to grow, Simpson says. Biofuels are a stepping-stone, because they reduce only carbon emissions, not other harmful ones. “If we’re going to move towards clean aviation, we have to rethink everything from scratch and that’s what ZEROe is doing,” she says. “This is an opportunity to make not an evolutionary change but a truly revolutionary one.”
Are you looking for a way to create content that is both effective and efficient? If so, then you should consider using an AI content generator. AI content generators are a great way to create content that is both engaging and relevant to your audience.
There are a number of different AI content generator tools available on the market, and it can be difficult to know which one is right for you. To help you make the best decision, we have compiled a list of the top 10 AI content generator tools that you should use in 2022.
So, without further ado, let’s get started!
Boss Mode: $99/month
This service can be used for short-form and other business purposes such as product descriptions, website copy, marketing copy, and sales reports.
Free Trial – 7 days with 24/7 email support and 100 runs per day.
Pro Plan: $49/month, or $420 per year (i.e., $35 per month) if billed yearly.
Wait! I've got a pretty sweet deal for you. Sign up through the link below, and you'll get (7,000 Free Words Plus 40% OFF) if you upgrade to the paid plan within four days.
Claim Your 7,000 Free Words With This Special Link - No Credit Card Required
Just like Outranking, Frase is an AI tool that helps you research, create, and optimize your content to make it high quality within seconds. Frase focuses on SEO optimization, shaping content to the liking of search engines by optimizing keywords and related terms.
Solo Plan: $14.99/Month and $12/Month if billed yearly with 4 Document Credits for 1 user seat.
Basic Plan: $44.99/month and $39.99/month if billed yearly with 30 Document Credits for 1 user seat.
Team Plan: $114.99/month and $99.99/month if billed yearly for unlimited document credits for 3 users.
*SEO Add-ons and other premium features for $35/month irrespective of the plan.
Article Forge is another content generator that operates quite differently from the others on this list. Unlike Jasper.ai, which requires you to provide a brief and some information on what you want it to write, this tool only asks for a keyword. From there, it’ll generate a complete article for you.
What’s excellent about Article Forge is that they provide a 30-day money-back guarantee. You can choose between a monthly or yearly subscription. Unfortunately, they offer only a free trial and no free plan:
Basic Plan: $27/Month
This plan allows users to produce up to 25k words each month. This is excellent for smaller blogs or those who are just starting.
Standard Plan: $57/month
Unlimited Plan: $117/month
It’s important to note that Article Forge guarantees that all content generated through the platform passes Copyscape.
Rytr.me is a free AI content generator perfect for small businesses, bloggers, and students. The software is easy to use and can generate SEO-friendly blog posts, articles, and school papers in minutes.
Cons
Pricing
Rytr offers a free plan that comes with limited features. It covers up to 5,000 characters generated each month and has access to the built-in plagiarism checker. If you want to use all the features of the software, you can purchase one of the following plans:
Saver Plan: $9/month, $90/year
Writesonic is a free, easy-to-use AI content generator. The software is designed to help you create copy for marketing content, websites, and blogs. It's also helpful for small businesses or solopreneurs who need to produce content on a budget.
Writesonic is free with limited features. The free plan is more like a free trial, providing ten credits. After that, you’d need to upgrade to a paid plan. Here are your options:
Short-form: $15/month
Features:
Long-Form: $19/month
CopySmith is an AI content generator that can be used to create personal and professional documents, blogs, and presentations. It offers a wide range of features including the ability to easily create documents and presentations.
CopySmith also has several templates that you can use to get started quickly.
CopySmith offers a free trial with no credit card required. After the free trial, the paid plans are as follows:
Starter Plan: $19/month
Hypotenuse.ai is an online tool that can help you create AI content. It’s great for beginners because it allows you to create videos, articles, and infographics with ease. The software has a simple, easy-to-use interface that makes it perfect for newcomers to AI content generation.
Special Features
Hypotenuse doesn’t offer a free plan. Instead, it offers a free trial period where you can take the software for a test run before deciding whether it’s the right choice for you. Other than that, here are its paid options:
Starter Plan: $29/month
Growth Plan: $59/month
Enterprise – pricing is custom, so don’t hesitate to contact the company for more information.
Kafkai comes with a free trial to help you decide whether it’s the right choice for you. You can also take a look at its paid plans:
Writer Plan: $29/month – Create 100 articles per month, at $0.29/article.
Newsroom Plan: $49/month – Generate 250 articles a month, at $0.20/article.
Printing Press Plan: $129/month – Create up to 1,000 articles a month, at roughly $0.13/article.
Industrial Printer Plan: $199/month – Generate 2,500 articles each month, at $0.08/article.
Peppertype.ai is an online AI content generator that’s easy to use and best for small business owners looking for a powerful copy and content writing tool to help them craft and generate various content for many purposes.
Unfortunately, Peppertype.ai isn’t free. However, it does have a free trial to try out the software before deciding whether it’s the right choice for you. Here are its paid plans:
Personal Plan: $35/month
Team Plan: $199/month
Enterprise – pricing is custom, so please contact the company for more information.
It is no longer a secret that humans are getting overwhelmed by the daily task of creating content. Our lives are busy, and writing blog posts, video scripts, or other content is not our day job. AI writers, by comparison, are not only cheaper to hire but also perform at a consistently high level. This article explored 10 writing tools that use AI to create better content; choose the one that meets your requirements and budget, though in my opinion Jasper AI is one of the best tools for producing high-quality content.
If you have any questions ask in the comments section
Note: Don't post links in your comments
Note: This article contains affiliate links which means we make a small commission if you buy any premium plan from our link.
Image by vectorjuice on Freepik
There are lots of questions floating around about how affiliate marketing works and what to do and not do when setting up a business, with plenty of uncertainty surrounding both the personal and business aspects. In this post, we will answer the most frequently asked questions about affiliate marketing.
Affiliate marketing is a way to make money by promoting the products and services of other people and companies. You don’t need to create your own product or service; just promote existing ones. That’s why it’s so easy to get started with affiliate marketing. You can even get started with no budget at all!
An affiliate program is a package of information you create for your product, which is then made available to potential publishers. The program will typically include details about the product and its retail value, commission levels, and promotional materials. Many affiliate programs are managed via an affiliate network like ShareASale, which acts as a platform to connect publishers and advertisers, but it is also possible to offer your program directly.
Affiliate networks connect publishers to advertisers. Affiliate networks make money by charging fees to the merchants who advertise with them; these merchants are known as advertisers. The percentage of each sale that the advertiser pays is negotiated between the merchant and the affiliate network.
Dropshipping is a method of selling that allows you to run an online store without having to stock products. You advertise the products as if you owned them, but when someone makes an order, you place a duplicate order with the distributor at a reduced price, and the distributor takes care of postage and packaging on your behalf. Affiliate marketing is based on referrals and, like this type of dropshipping, requires no investment in inventory; when a customer buys through an affiliate link, no money ever passes through the affiliate’s hands.
Performance marketing is a method of marketing that pays for performance, such as when a sale is made or an ad is clicked. It can include methods like PPC (pay-per-click) or display advertising. Affiliate marketing is one form of performance marketing, where commissions are paid to affiliates on a performance basis, when a visitor clicks their affiliate link and makes a purchase or completes an action.
Smartphones are essentially miniature computers, so publishers can display the same websites and offers that are available on a PC. But mobiles also offer specific tools not available on computers, and these can be used to good effect. Publishers can optimize their ads for mobile users by making them easy for that audience to access, and can also make good use of text and instant messaging to promote their offers. As the mobile market is predicted to make up 80 percent of traffic in the future, publishers who do not promote on mobile devices are missing out on a big opportunity.
The best way to find affiliate publishers is on reputable networks like ShareASale, CJ (Commission Junction), Awin, and Impact Radius. These networks have a strict application process and compliance checks, which means that all affiliates are trustworthy.
An affiliate disclosure statement discloses to the reader that there may be affiliate links on a website, for which a commission may be paid to the publisher if visitors follow these links and make purchases.
Publishers promote their programs through a variety of means, including blogs, websites, email marketing, and pay-per-click ads. Social media has a huge interactive audience, making this platform a good source of potential traffic.
A super affiliate is an affiliate partner who consistently drives a large majority of sales from any program they promote, compared to the other affiliate partners involved in that program. Top affiliates can make a lot of money: Pat Flynn, for example, earned more than $50,000 from affiliate marketing in 2013.
Publishers can be identified by their publisher ID, which is used in tracking cookies to determine which publishers generate sales. The activity is then viewed within a network's dashboard.
Because the Internet is so widespread, affiliate programs can be promoted in any country. Affiliate strategies that are set internationally need to be tailored to the language of the targeted country.
Affiliate marketing can help you grow your business in the following ways:
One of the best ways to work with qualified affiliates is to hire an affiliate marketing agency that works with all the networks. Affiliates are carefully selected and go through a rigorous application process to be included in the network.
Affiliate marketing is generally associated with websites, but there are other ways to promote your affiliate links, including:
To build your affiliate marketing business, you don't have to invest money in the beginning. You can sign up for free with any affiliate network and start promoting their brands right away.
Commission rates are typically based on a percentage of the total sale and in some cases can also be a flat fee for each transaction. The rates are set by the merchant.
Who manages your affiliate program?
Some merchants run their affiliate programs internally, while others choose to contract out management to a network or an external agency.
Cookies are small pieces of data that work with web browsers to store information such as user preferences, login or registration data, and shopping-cart contents. When someone clicks on your affiliate link, a cookie is placed on the user’s computer or mobile device. That cookie is used to remember the link or ad the visitor clicked on. Even if the user leaves your site and comes back a week later to make a purchase, you will still get credit for the sale and receive a commission, depending on the site’s cookie duration.
The merchant determines the duration of a cookie, also known as its “cookie life.” The most common length for an affiliate program is 30 days. If someone clicks on your affiliate link, you’ll be paid a commission if they purchase within 30 days of the click.
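As a sketch of how that attribution works, assuming an illustrative 5 percent merchant-set rate and hypothetical field names:

```python
# Last-click attribution with a 30-day cookie window, as described above.
from datetime import datetime, timedelta

COOKIE_LIFE = timedelta(days=30)

def commission(click_time, purchase_time, sale_amount, rate=0.05):
    """Pay the publisher only if the purchase lands inside the cookie life.
    The 5% rate stands in for whatever percentage the merchant sets."""
    if purchase_time - click_time <= COOKIE_LIFE:
        return sale_amount * rate
    return 0.0

click = datetime(2023, 3, 1)
print(commission(click, datetime(2023, 3, 8), 120.0))   # 6.0 -> within window
print(commission(click, datetime(2023, 4, 15), 120.0))  # 0.0 -> cookie expired
```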
Some examples can be:
There’s no excuse not to try this website — it’s free and easy to use!
Visit Exploding Topics From Here
Headline Studio allows you to create catchy headlines for your content. After you write a title, it shows data on how often people view articles with similar titles and why they engage with them.
This is a valuable tool when creating new blog posts because it generates catchy headlines for your blog post to catch a reader’s attention.
Visit Headline Studio From Here
Answer The Public is an excellent tool for content creators. It gives you insight into what people are asking on social media sites and in communities, and helps you identify topics that matter to your audience. Enter a keyword or topic related to your niche, and it will show popular questions and keywords related to that topic. It’s an amazing way to learn what people are searching for online and to find topics for new blog posts or social media content on platforms like Facebook, Instagram, YouTube, and Twitter, along with the types of questions people ask and want answered.
Visit Answer The Public From Here
With this tool, content creators can quickly and easily check the ranking of their websites and those of other competitors. This tool allows you to see how your website compares to others in different categories, including:
Surfer SEO is free, and the interface is very friendly. It’s a great tool for anyone who wants to do quick competitor research or check their site’s rankings at any time.
Canva is a free graphic design platform that makes it easy to create invitations, business cards, mobile videos, Instagram posts, Instagram stories, flyers, and more with professionally designed templates. You can even upload your photos and drag and drop them into Canva templates. It's like having a basic version of Photoshop. You can also remove background from images with one click.
Canva offers thousands of free, professionally designed templates that can be customized with just a few clicks. Simply upload your photos to Canva, drag them into the template of your choice, and save the file to your computer.
It is free to use for basic use but if you want access to different fonts or more features, then you need to buy a premium plan.
Facebook Audience Insights is a powerful tool for content creators researching their target market. It can help you understand the demographics, interests, and behaviors of your target audience, and that information helps determine the direction of your content so that it resonates with them. The most important sections to consider in Facebook Audience Insights are Demographics and Behavior. These two sections provide valuable information about your target market, such as their age, where they live, how much time they spend on social media per day, and what devices they use to access it.
Another section of Facebook Audience Insights is also very helpful: it shows the interests, hobbies, and activities that people in your target market care about most. You can use this information to create content about things they are enthusiastic about, rather than topics they may not be so keen on.
Visit Facebook Audience Insights From Here
Pexels is a warehouse of millions of royalty-free images for any content creator who wants high-quality photos that can be used without worrying about permissions or licensing. You are free to use the photos in your content, and there are no watermarks.
The only cons are that some photos contain people, and Pexels doesn't allow you to remove people from photos. Search your keyword and download as many as you want!
So there you have it. We hope these specially curated websites come in handy for content creators and small businesses alike. If you've got a site that should be on this list, or if you're looking for more content creator resources, let us know in the comments section below!
Scientists today reported that they’ve observed room-temperature superconductivity. Superconductivity is a rarefied state of matter in which electrical resistance in a material drops to zero while its electrical and magnetic capacity vastly expands. Until now, the phenomenon has been observed only at cryogenic temperatures or phenomenally high pressures. Such a discovery, if confirmed, could open pathways to a range of applications including lossless electric transmission, high-efficiency electric motors, maglev trains, and low-cost magnets for MRI and nuclear fusion.
However, the caveats attached to today’s announcement are considerable. While the researchers say their material retains its coveted lossless properties at temperatures up to 20.6 °C, it still requires substantial pressure (10 kilobars, or 9,900 atmospheres). Today’s publication is also tarnished by the fact that the scientists behind the discovery, publishing their work in today’s issue of the journal Nature, have retracted a previous paper on room-temperature superconductivity because of its unconventional data-reduction methods.
The primary researcher Ranga Dias—assistant professor in the departments of mechanical engineering and physics and astronomy at the University of Rochester—said the retracted research paper has since been revised to accommodate the criticisms and accusations. Originally published in Nature as well, the revised version is back under peer review with Nature, Dias said.
“We’ve made an open-door policy. We [allowed] everybody to come to our lab and see how we do the measurements.”
—Ranga Dias, University of Rochester
Last fall, when the group’s previous paper (reporting similarly compelling results involving a much higher-pressure material inside a diamond anvil) was retracted, many criticisms and even allegations of misconduct dogged the team across the science press. “I think this is a real problem,” Jorge Hirsch, professor of physics at the University of California, San Diego, told Science at the time. “You cannot leave it as, ‘Oh, it’s a difference of opinion.’ ”
Contacted by Spectrum, Hirsch said his views today—alleging misconduct—have only strengthened since then. According to him, some of Dias’s group’s reported data was allegedly computer-generated—a feat that Hirsch’s team says they can reproduce out to seven-digit accuracy. “When you read the paper, it superficially looks great. ... [And] if this is true it is an incredible breakthrough, worthy of a Nobel Prize. But when you look more carefully several warning signals become apparent,” Hirsch said via email.
Venkat Viswanathan, associate professor of mechanical engineering at Carnegie Mellon University, in Pittsburgh, said the degree of controversy the retraction merited may have been overstated. “It was unfortunate what happened,” he said. “But a lot of people seized on it. If people took a serious look at the work itself and all that’s transpired since, I think the data is still solid. It’s still very attractive for superconductivity.”
Paul C.W. Chu, professor of physics and founding director at the Texas Center for Superconductivity at the University of Houston, said he has seen many claims of high-temperature superconductivity in his more than half century in the field. Many such claims did not pan out. (He has also, in his time, grabbed headlines for high-temperature superconductivity claims—in his case, claims that were true and advanced the field.)
Spectrum spoke with Chu hours after Dias’s group had presented their findings to this year’s March Meeting of the American Physical Society, the same meeting at which, in 1987, Chu had legendarily presented some of his own groundbreaking superconducting discoveries. Chu said he is especially cautious about the Dias group’s background subtraction methods. Background subtraction is not uncommon in the field, he said. But in this case, the signal is small compared to the noise. So, he said, “the background subtraction has to take place carefully.”
Still, Chu continued, “It is a very nice experiment. This is definitely significant, if it is proven to be real.”
According to James Walsh, assistant professor of chemistry at the University of Massachusetts Amherst, some of the controversy behind the group’s findings may be related to the challenges posed by the medium itself. “High-pressure science imposes experimental difficulties that simply don’t exist with traditional methods,” he told Spectrum via email. “It is hard to overstate the skill and ingenuity of the high-pressure community that has made magnetism and resistivity data accessible at all.”
Because of the increased scrutiny occasioned by the Dias group’s publication history—as well as the outsized significance of the group’s new finding—Dias said that his team has abided by increased levels of transparency and repeatability.
“The history of materials science has shown us that technological leaps can often be traced back to the announcement of a newly discovered material with outstanding properties.”
—James Walsh, University of Massachusetts Amherst
“We’ve made an open-door policy,” Dias said. “We [allowed] everybody to come to our lab and see how we do the measurements. During the review process, we shared all our data with the referees.”
He added that in collecting data for their revised previous paper, the researchers collaborated with officials from Argonne and Brookhaven National Laboratories. “We did the measurements in front of a live audience,” Dias said. “They showed the superconducting transition. We are collaborating with both labs to understand the material properties and understand the exact structure of the material.” (A spokesperson for Argonne, contacted by Spectrum, said that U.S. Department of Energy policy prohibits them from speaking about research appearing in papers that their group did not author.)
The centerpiece material in the present research—the putative 10-kilobar superconductor—is sure to be the subject of a flurry of both controversy and at least short-term interest. The recipe for what the team calls “reddmatter” (a Star Trek reference) involves hydrogen, nitrogen and the 71st element on the periodic table, lutetium (Lu).
Carnegie Mellon’s Viswanathan said today’s discovery may represent the biggest gold rush on lutetium in the rare earth’s entire history. “He has singlehandedly spiked the metals index for this element,” he said of Dias.
Walsh, of the University of Massachusetts, expressed enthusiasm for the material itself—named for its ruby red hue in its high-pressure state. “The history of materials science has shown us that technological leaps can often be traced back to the announcement of a newly discovered material with outstanding properties,” he said via email. “It would be hard to argue that a result like this should not qualify.”
These microphotographs show the lutetium nitrogen hydrogen material (a.k.a. “reddmatter”) that researchers report superconducts at high pressures. Curiously, also at high pressures the previously blue material turns ruby red. Ranga Dias/University of Rochester
Of course, a result like this also requires highly pressurized cells, which might only swap the cryogenic equipment required for present-day superconductors with a different kind of elaborate, expensive, and unwieldy roomful of hardware. Chu says he will be collaborating with researchers investigating ways to transform rare-earth materials like the lutetium nitrogen hydrogen compound into superconductors that require substantially less pressure.
“These high-pressure cells interfere with measurements, and if you talk about applications, it’s not practical,” he said. “We want to see if we can stabilize it without pressure.”
Such notions have parallels in other fields. In semiconductor engineering, strained-silicon transistors retain effective pressures in their lattices three or more times as great as the pressures involved in the present material.
Eva Zurek, professor of chemistry at the University at Buffalo in New York state, said independent confirmations of the Dias group’s work are essential. But if the finding is validated, then she anticipates a challenging but not impossible road to develop a material that can perform at something close to ambient pressures as well as temperatures.
“If [the new finding is] proven to be true,” she said via email, “then I believe it would be relatively straightforward to either find ways to bring Lu-N-H to normal pressure/temperature conditions, or develop technologies where it may be used at very mild pressures.”
Update 13 March: The story was updated to include new remarks from Prof. Jorge Hirsch, a response to a call for comments that had arrived after this article was originally published.
Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon.
“Armageddon is big and noisy and stupid and shameless, and it’s going to be huge at the box office,” wrote Jay Carr of the Boston Globe.
Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live.
DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kilograms, hitting at 22,000 km/h, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs.
“Maybe once a century or so, there’ll be an asteroid sizeable enough that we’d like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of planetary defense officer at NASA.
“If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.”
So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone.
The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—near-Earth objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium.
The DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIACube cubesat will fly in formation to take images of the impact. Johns Hopkins APL/NASA
But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test.
NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’s speed by perhaps a few centimeters per second.
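A back-of-envelope momentum calculation shows why the expected change is so small. The spacecraft numbers below come from the mission figures above; the asteroid's bulk density (and therefore its mass) is an assumption for illustration, and any extra push from ejecta thrown off the surface is ignored.

import math

m_sc = 600.0                 # spacecraft mass, kg
v_sc = 22_000 / 3.6          # impact speed: 22,000 km/h in m/s (~6,100 m/s)

radius = 160 / 2             # Dimorphos is about 160 m across
density = 2000.0             # assumed bulk density, kg/m^3
m_ast = density * (4 / 3) * math.pi * radius**3  # roughly 4e9 kg

# Perfectly inelastic impact: the spacecraft's momentum is shared
# with the whole asteroid.
delta_v = m_sc * v_sc / m_ast
print(f"delta-v ≈ {delta_v * 100:.2f} cm/s")  # ~0.09 cm/s before ejecta effects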
The DART spacecraft, a 1-meter cube with two long solar panels, is elegantly simple, equipped with a telescope called DRACO, hydrazine maneuvering thrusters, a xenon-fueled ion engine and a navigation system called SMART Nav. It was launched by a SpaceX rocket in November. About 4 hours and 90,000 km before the hoped-for impact, SMART Nav will take over control of the spacecraft, using optical images from the telescope. Didymos, the larger object, should be a point of light by then; Dimorphos, the intended target, will probably not appear as more than one pixel until about 50 minutes before impact. DART will send one image per second back to Earth, but the spacecraft is autonomous; signals from the ground, 38 light-seconds away, would be useless for steering as the ship races in.
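That light-travel figure is easy to sanity-check against the roughly 11-million-kilometer distance quoted earlier; the small mismatch simply reflects the approximate distance.

C = 299_792.458             # speed of light, km/s
distance_km = 11e6          # approximate Earth-Dimorphos distance
one_way = distance_km / C   # ≈ 37 s one way
round_trip = 2 * one_way    # ≈ 73 s before a ground command could take effect
print(f"one-way delay ≈ {one_way:.0f} s, round trip ≈ {round_trip:.0f} s")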
The DART spacecraft separated from its SpaceX Falcon 9 launch vehicle, 55 minutes after liftoff from Vandenberg Space Force Base, in California, 24 November 2021. In this image from the rocket, the spacecraft had not yet unfurled its solar panels. NASA
What’s more, nobody knows the shape or consistency of little Dimorphos. Is it a solid boulder or a loose cluster of rubble? Is it smooth or craggy, round or elongated? “We’re trying to hit the center,” says Evan Smith, the deputy mission systems engineer at the Johns Hopkins Applied Physics Laboratory, which is running DART. “We don’t want to overcorrect for some mountain or crater on one side that’s throwing an odd shadow or something.”
So on final approach, DART will cover 800 km without any steering. Thruster firings could blur the last images of Dimorphos’s surface, which scientists want to study. Impact should be imaged from about 50 km away by an Italian-made minisatellite, called LICIACube, which DART released two weeks ago.
“In the minutes following impact, I know everybody is going to be high-fiving on the engineering side,” said Tom Statler, DART’s program scientist at NASA, “but I’m going to be imagining all the cool stuff that is actually going on on the asteroid, with a crater being dug and ejecta being blasted off.”
There is, of course, a possibility that DART will miss, in which case there should be enough fuel on board to allow engineers to go after a backup target. But an advantage of the Didymos-Dimorphos pair is that it should help in calculating how much effect the impact had. Telescopes on Earth (plus the Hubble and Webb space telescopes) may struggle to measure infinitesimal changes in the orbit of Dimorphos around the sun; it should be easier to see how much its orbit around Didymos is affected. The simplest measurement may be of the changing brightness of the double asteroid, as Dimorphos moves in front of or behind its partner, perhaps more quickly or slowly than it did before impact.
“We are moving an asteroid,” said Statler. “We are changing the motion of a natural celestial body in space. Humanity’s never done that before.”
Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.
IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.
Now researchers at MIT have managed both to reduce the size of the qubits and to do so in a way that reduces the interference between neighboring qubits. In doing so, the MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.
“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”
The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.
Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).
Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Nathan Fiske/MIT
In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.
As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.
On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.
While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.
“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”
This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.
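The parallel-plate capacitor formula, C = ε0 εr A / d, makes the footprint advantage easy to see. The target capacitance and hBN parameters below are assumptions chosen for illustration (they are not values reported by the MIT team), but they show how a thin stacked dielectric shrinks the plate from the ~100-micrometer coplanar scale to a few micrometers, consistent with the factor-of-100 density gain mentioned above.

EPS0 = 8.854e-12    # vacuum permittivity, F/m
eps_r = 3.5         # assumed out-of-plane relative permittivity of hBN
d = 30e-9           # assumed dielectric thickness: ~30 nm of stacked hBN
C_target = 100e-15  # assumed qubit shunt capacitance, ~100 fF

# Solve C = EPS0 * eps_r * A / d for the plate area A.
area = C_target * d / (EPS0 * eps_r)
side_um = (area ** 0.5) * 1e6
print(f"required plate side ≈ {side_um:.0f} µm")  # ≈ 10 µm vs ~100 µm pads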
“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.
Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
Social work staff explain how continuing professional development increases their understanding and helps protect the people they work with
Being a social worker is not just about helping people, it is also a commitment to lifelong learning. Every day brings challenges and real-life lessons but those working in social care must also complete two pieces of continuing professional development (CPD) annually to maintain their registered status.
CPD does not have to take place in a classroom as social workers are able to learn through a range of experiences in their working – and personal – lives. Conventional training and supervision is an option, but the regulator, Social Work England, also allows CPD from case work, professional feedback, mentoring, and personal lived experience. It particularly encourages CPD that is very personal to the practitioner. As long as it is relevant to a person’s role, and they can reflect on the learning and demonstrate how it has had a positive influence on their work, CPD can take some unexpected forms.
Social media and podcasts. Dunmore Chihwehwete, a social care team manager in the London borough of Barnet, finds YouTube a valuable educational resource. He says: “I’ve learned a lot about the interaction between children and adults from watching recorded sessions between respected psychotherapists and their clients, particularly Salvador Minuchin. I listen to podcasts and audiobooks in the car. The truth is you’re always learning, whether it’s watching a Ted Talk or following an academic on Twitter. I also use an app called iTunes U, where universities upload lectures.”
Criticism and praise. Powerful lessons are learned when things go wrong. Anna Ramsey, a service manager for Essex county council’s adult social care department, takes part in a regular online meeting with her team in which good and bad news stories are shared. She says: “Some complaints are hard to hear, but they’re important learning opportunities. We need to ask: ‘How could we have done better?’ We celebrate successes and learn from errors.”
News, events and charitable efforts. A recently qualified social worker in Manchester, Ashiq Khan, gets involved in charities alongside his work and finds these organisations expand his understanding of the causes they champion. He also finds current events can have a big impact on his work. “I attended a vigil in Manchester for Brianna Ghey, the transgender girl killed there recently,” Khan says. “We have a responsibility in our work to challenge heteronormative ways of thinking and I’ve done a work presentation on the topic. It all made me think about our feelings about gender, sexual orientation, and general attitudes constructed over time. One of the most important reasons for undertaking CPD is to ensure the safety of the people we work with. It helps us to reflect and grow personally and as professionals.”
Documentaries, journals, films and novels. In a hectic working week, it can be tricky to find time for extra reading or watching work-related content. But Khan points out that interesting documentaries, as well as films and novels, can open up difficult topics and feed into CPD learning. “I recently watched a movie based on a true story about a woman who had been trafficked, who then ended up running brothels. It touched on areas we see in our work in Manchester and prompted me to do more research, leading me to other related documentaries. Eye-openers are everywhere, whether it’s in books, magazines or television,” says Khan.
Personal experience. Becky Cuming works for Cornwall council’s social services team, supporting five- to 11-year-olds who have experienced trauma. Some of her most valuable learning comes from reflecting on life experiences with colleagues. “In peer supervision, we used family therapist John Burnham’s social GGRRAAACCEEESSS exercise [looking at aspects of personal and social identity which afford people different levels of power and privilege] and we also did questionnaires on our own childhoods to increase our understanding of how the families we work with feel,” Cuming says.
Update 5 Sept.: For now, NASA’s giant Artemis I remains on the ground after two launch attempts scrubbed by a hydrogen leak and a balky engine sensor. Mission managers say Artemis will fly when everything's ready—but haven't yet specified whether that might be in late September or in mid-October.
“When you look at the rocket, it looks almost retro,” said Bill Nelson, the administrator of NASA. “Looks like we’re looking back toward the Saturn V. But it’s a totally different, new, highly sophisticated—more sophisticated—rocket, and spacecraft.”
Artemis, powered by the Space Launch System rocket, is America’s first attempt to send astronauts to the moon since Apollo 17 in 1972, and technology has taken giant leaps since then. On Artemis I, the first test flight, mission managers say they are taking the SLS, with its uncrewed Orion spacecraft up top, and “stressing it beyond what it is designed for”—the better to ensure safe flights when astronauts make their first landings, currently targeted to begin with Artemis III in 2025.
But Nelson is right: The rocket is retro in many ways, borrowing heavily from the space shuttles America flew for 30 years, and from the Apollo-Saturn V.
Much of Artemis’s hardware is refurbished: Its four main engines, and parts of its two strap-on boosters, all flew before on shuttle missions. The rocket’s apricot color comes from spray-on insulation much like the foam on the shuttle’s external tank. And the large maneuvering engine in Orion’s service module is actually 40 years old—used on 19 space shuttle flights between 1984 and 1992.
“I have a name for missions that use too much new technology—failures.”
—John Casani, NASA
Perhaps more important, the project inherits basic engineering from half a century of spaceflight. Just look at Orion’s crew capsule—a truncated cone, somewhat larger than the Apollo Command Module but conceptually very similar.
Old, of course, does not mean bad. NASA says there is no need to reinvent things engineers got right the first time.
“There are certain fundamental aspects of deep-space exploration that are really independent of money,” says Jim Geffre, Orion vehicle-integration manager at the Johnson Space Center in Houston. “The laws of physics haven’t changed since the 1960s. And capsule shapes happen to be really good for coming back into the atmosphere at Mach 32.”
Roger Launius, who served as NASA’s chief historian from 1990 to 2002 and as a curator at the Smithsonian Institution from then until 2017, tells of a conversation he had with John Casani, a veteran NASA engineer who managed the Voyager, Galileo, and Cassini probes to the outer planets.
“I have a name for missions that use too much new technology,” he recalls Casani saying. “Failures.”
The Artemis I flight is slated for about six weeks. (Apollo 11 lasted eight days.) The ship roughly follows Apollo’s path to the moon’s vicinity, but then puts itself in what NASA calls a distant retrograde orbit. It swoops within 110 kilometers of the lunar surface for a gravity assist, then heads 64,000 km out—taking more than a month but using less fuel than it would in closer orbits. Finally, it comes home, reentering the Earth’s atmosphere at 11 km per second, slowing itself with a heatshield and parachutes, and splashing down in the Pacific not far from San Diego.
If all four, quadruply redundant flight computer modules fail, there is a fifth, entirely separate computer onboard, running different code to get the spacecraft home.
“That extra time in space,” says Geffre, “allows us to operate the systems, give more time in deep space, and all those things that stress it, like radiation and micrometeoroids, thermal environments.”
There are, of course, newer technologies on board. Orion is controlled by two vehicle-management computers, each composed of two flight computer modules (FCMs) to handle guidance, navigation, propulsion, communications, and other systems. The flight control system, Geffre points out, is quad-redundant; if at any point one of the four FCMs disagrees with the others, it will take itself offline and, in a 22-second process, reset itself to make sure its outputs are consistent with the others’. If all four FCMs fail, there is a fifth, entirely separate computer running different code to get the spacecraft home.
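For a flavor of what that quad redundancy means in software, here is a toy majority-vote sketch in Python. Orion's real fault handling (the 22-second self-reset, the dissimilar fifth computer) is far more elaborate; this only illustrates the idea of the odd module out taking itself offline.

def vote(outputs, tolerance=1e-6):
    """Return the majority output and the modules that disagree with it."""
    buckets, values = [], []
    # Group modules whose outputs agree to within the tolerance.
    for name, value in outputs.items():
        for i, v in enumerate(values):
            if abs(value - v) <= tolerance:
                buckets[i].append(name)
                break
        else:
            buckets.append([name])
            values.append(value)
    majority = max(range(len(buckets)), key=lambda i: len(buckets[i]))
    dissenters = [n for i, b in enumerate(buckets) if i != majority for n in b]
    return values[majority], dissenters

# Three flight computer modules agree; the fourth would be taken offline.
value, offline = vote({"FCM1": 3.14, "FCM2": 3.14, "FCM3": 3.14, "FCM4": 2.71})
print(value, offline)  # 3.14 ['FCM4']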
Guidance and navigation, too, have advanced since the sextant used on Apollo. Orion uses a star tracker to determine its attitude, imaging stars and comparing them to an onboard database. And an optical navigation camera shoots Earth and the moon so that guidance software can determine their distance and position and keep the spacecraft on course. NASA says it’s there as backup, able to get Orion to a safe splashdown even if all communication with Earth has been lost.
But even those systems aren’t entirely new. Geffre points out that the guidance system’s architecture is derived from the Boeing 787. Computing power in deep space is limited by cosmic radiation, which can corrupt the output of microprocessors beyond the protection of Earth’s atmosphere and magnetic field.
Beyond that is the inevitable issue of cost. Artemis is a giant project, years behind schedule, started long before NASA began to buy other launches from companies like SpaceX and Rocket Lab. NASA’s inspector general, Paul Martin, testified to Congress in March that the first four Artemis missions would cost US $4.1 billion each—“a price tag that strikes us as unsustainable.”
Launius, for one, rejects the argument that government is inherently wasteful. “Yes, NASA’s had problems in managing programs in the past. Who hasn’t?” he says. He points out that Blue Origin and SpaceX have had plenty of setbacks of their own—they’re just not obliged to be public about them. “I could go on and on. It’s not a government thing per se and it’s not a NASA thing per se.”
So why return to the moon with—please forgive the pun—such a retro rocket? Partly, say those who watch Artemis closely, because it’s become too big to fail, with so much American money and brainpower invested in it. Partly because it turns NASA’s astronauts outward again, exploring instead of maintaining a space station. Partly because new perspectives could come of it. And partly because China and Russia have ambitions in space that threaten America’s.
“Apollo was a demonstration of technological verisimilitude—to the whole world,” says Launius. “And the whole world knew then, as they know today, that the future belongs to the civilization that can master science and technology.”
Update 7 Sept.: Artemis I has been on launchpad 39B, not 39A as previously reported, at Kennedy Space Center.
ChatGPT has wowed the world with the depth of its knowledge and the fluency of its responses, but one problem has hobbled its usefulness: It keeps hallucinating.
Yes, large language models (LLMs) hallucinate, a concept popularized by Google AI researchers in 2018. Hallucination in this context refers to mistakes in the generated text that are semantically or syntactically plausible but are in fact incorrect or nonsensical. In short, you can’t trust what the machine is telling you.
That’s why, while OpenAI’s Codex or Github’s Copilot can write code, an experienced programmer still needs to review the output—approving, correcting, or rejecting it before allowing it to slip into a code base where it might wreak havoc.
High school teachers are learning the same. A ChatGPT-written book report or historical essay may be a breeze to read but could easily contain erroneous “facts” that the student was too lazy to root out.
Hallucinations are a serious problem. Bill Gates has mused that ChatGPT or similar large language models could some day provide medical advice to people without access to doctors. But you can’t trust advice from a machine prone to hallucinations.
Ilya Sutskever, OpenAI’s chief scientist and one of the creators of ChatGPT, says he’s confident that the problem will disappear with time as large language models learn to anchor their responses in reality. OpenAI has pioneered a technique to shape its models’ behaviors using something called reinforcement learning with human feedback (RLHF).
RLHF was developed by OpenAI and Google’s DeepMind team in 2017 as a way to improve reinforcement learning when a task involves complex or poorly defined goals, making it difficult to design a suitable reward function. Having a human periodically check on the reinforcement learning system’s output and give feedback allows reinforcement-learning systems to learn even when the reward function is hidden.
For ChatGPT, data collected during its interactions are used to train a neural network that acts as a “reward predictor,” which reviews ChatGPT’s outputs and predicts a numerical score that represents how well those actions align with the system’s desired behavior—in this case, factual or accurate responses.
Periodically, a human evaluator checks ChatGPT responses and chooses those that best reflect the desired behavior. That feedback is used to adjust the reward-predictor neural network, and the updated reward-predictor neural network is used to adjust the behavior of the AI model. This process is repeated in an iterative loop, resulting in improved behavior. Sutskever believes this process will eventually teach ChatGPT to improve its overall performance.
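The loop is easier to see in code. Below is a deliberately toy, self-contained Python sketch: the "model" merely samples from two canned responses, the human evaluator is simulated, and the reward predictor is a running average per response. None of this is OpenAI's actual implementation (in practice the policy update uses reinforcement-learning algorithms on a neural reward model), but it shows the shape of the iteration.

import random

RESPONSES = ["accurate answer", "plausible hallucination"]

class ToyModel:
    def __init__(self):
        self.weights = {r: 1.0 for r in RESPONSES}

    def generate(self):
        # Sample a response in proportion to its current weight.
        return random.choices(RESPONSES, weights=list(self.weights.values()))[0]

    def update(self, reward_of):
        # Nudge the model toward responses with high predicted reward.
        for r in RESPONSES:
            self.weights[r] *= 1.0 + reward_of(r)

class RewardPredictor:
    def __init__(self):
        self.scores = {r: [] for r in RESPONSES}

    def fit(self, response, human_score):
        self.scores[response].append(human_score)

    def predict(self, response):
        s = self.scores[response]
        return sum(s) / len(s) if s else 0.5  # no data yet: neutral guess

def human_feedback(response):
    # Simulated evaluator who prefers the factual response.
    return 1.0 if response == "accurate answer" else 0.0

model, predictor = ToyModel(), RewardPredictor()
for _ in range(50):
    output = model.generate()                      # 1. model produces a response
    predictor.fit(output, human_feedback(output))  # 2-3. human scores it; predictor learns
    model.update(predictor.predict)                # 4. model chases predicted reward
print(model.weights)  # the weight on "accurate answer" should dominate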
“I’m quite hopeful that by simply improving this subsequent reinforcement learning from the human feedback step, we can teach it to not hallucinate,” said Sutskever, suggesting that the ChatGPT limitations we see today will dwindle as the model improves.
But Yann LeCun, a pioneer in deep learning and the self-supervised learning used in large language models, believes there is a more fundamental flaw that leads to hallucinations.
“Large language models have no idea of the underlying reality that language describes,” he said, adding that most human knowledge is nonlinguistic. “Those systems generate text that sounds fine, grammatically, semantically, but they don’t really have some sort of objective other than just satisfying statistical consistency with the prompt.”
Humans operate on a lot of knowledge that is never written down, such as customs, beliefs, or practices within a community that are acquired through observation or experience. And a skilled craftsperson may have tacit knowledge of their craft that is never written down.
“Language is built on top of a massive amount of background knowledge that we all have in common, that we call common sense,” LeCun said. He believes that computers need to learn by observation to acquire this kind of nonlinguistic knowledge.
“There is a limit to how smart they can be and how accurate they can be because they have no experience of the real world, which is really the underlying reality of language,” said LeCun. “Most of what we learn has nothing to do with language.”
“We learn how to throw a basketball so it goes through the hoop,” said Geoff Hinton, another pioneer of deep learning. “We don’t learn that using language at all. We learn it from trial and error.”
But Sutskever believes that text already expresses the world. “Our pretrained models already know everything they need to know about the underlying reality,” he said, adding that they also have deep knowledge about the processes that produce language.
While learning may be faster through direct observation by vision, he argued, even abstract ideas can be learned through text, given the volume—billions of words—used to train LLMs like ChatGPT.
Neural networks represent words, sentences, and concepts through a machine-readable format called an embedding. An embedding maps high-dimensional vectors—long strings of numbers that capture their semantic meaning—to a lower-dimensional space, a shorter string of numbers that is easier to analyze or process.
By looking at those strings of numbers, researchers can see how the model relates one concept to another, Sutskever explained. The model, he said, knows that an abstract concept like purple is more similar to blue than to red, and it knows that orange is more similar to red than purple. “It knows all those things just from text,” he said. While the concept of color is much easier to learn from vision, it can still be learned from text alone, just more slowly.
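Here is what that looks like in a toy example. The three-dimensional vectors below are invented for illustration (real embeddings have hundreds or thousands of dimensions learned from text), but cosine similarity is the standard way to read off that purple sits nearer to blue than to red.

import math

# Made-up embedding vectors; only their relative geometry matters here.
embeddings = {
    "red":    [0.9, 0.1, 0.0],
    "blue":   [0.1, 0.9, 0.0],
    "purple": [0.5, 0.6, 0.0],  # between red and blue, a bit nearer blue
    "orange": [0.8, 0.2, 0.1],  # close to red
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

print(cosine(embeddings["purple"], embeddings["blue"]))  # ≈ 0.83
print(cosine(embeddings["purple"], embeddings["red"]))   # ≈ 0.72
print(cosine(embeddings["orange"], embeddings["red"]))   # ≈ 0.98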
Whether or not inaccurate outputs can be eliminated through reinforcement learning with human feedback remains to be seen. For now, the usefulness of large language models in generating precise outputs remains limited.
“Most of what we learn has nothing to do with language.”
Mathew Lodge, the CEO of Diffblue, a company that uses reinforcement learning to automatically generate unit tests for Java code, said that “reinforcement systems alone are a fraction of the cost to run and can be vastly more accurate than LLMs, to the point that some can work with minimal human review.”
Codex and Copilot, both based on GPT-3, generate possible unit tests that an experienced programmer must review and run before determining which is useful. But Diffblue’s product writes executable unit tests without human intervention.
“If your goal is to automate complex, error-prone tasks at scale with AI—such as writing 10,000 unit tests for a program that no single person understands—then accuracy matters a great deal,” said Lodge. He agrees that LLMs can be great for freewheeling creative interaction, but he cautions that the last decade has taught us that large deep-learning models are highly unpredictable, and making the models larger and more complicated doesn’t fix that. “LLMs are best used when the errors and hallucinations are not high impact,” he said.
Nonetheless, Sutskever said that as generative models improve, “they will have a shocking degree of understanding of the world and many of its subtleties, as seen through the lens of text.”
In this age of continuous innovation, intellectual property (IP) is a core business asset. As IP becomes ever more central to businesses’ ability to innovate, compete and grow, managing these assets is becoming more critical—and more complex.
In this new paper, we review why a new approach to IP management is needed, enabling corporate teams to remove friction from their IP management workflows and unlock the full potential of their IP.
Download this whitepaper to learn:
Despite tech giants including Meta, Microsoft, and Nvidia investing billions of dollars in the development of the metaverse, it is still little more than a fantasy. Making it a reality is likely to require breakthroughs in a range of sectors such as storage, modeling, and communication.
To spur progress in the advancement of those technologies, the IEEE Standards Association has launched the Persistent Computing for Metaverse initiative. As part of the IEEE’s Industry Connections Program, it will bring together experts from both industry and academia to help map out the innovations that will be needed to make the metaverse a reality.
Although disparate virtual-reality experiences exist today, the metaverse represents a vision of an interconnected and always-on virtual world that can host thousands, if not millions, of people simultaneously. The ultimate goal is for the virtual world to become so realistic that it is almost indistinguishable from the real one.
Today’s technology is a long way from making that possible, says Yu Yuan, president of the IEEE Standards Association. The Institute spoke with Yuan to find out more about the initiative and the key challenges that need to be overcome. His answers have been edited for clarity.
The Institute: What is persistent computing?
Yu Yuan: I have been working in virtual reality and multimedia for more than 20 years; I just didn’t call my work metaverse. After metaverse became a buzzword, I asked myself, ‘What’s the difference between metaverse and VR?’ My answer is: persistence, or the ability to leave traces in a virtual world.
Persistent computing refers to the combination of all the technologies needed to support the development and operation of a persistent virtual world. In other words, a metaverse. There are different kinds of VR experiences, but many of them are one-time events. Similar to how video games work, every time a user logs in, the entire virtual world resets. But users in the metaverse can leave traces. For example, they can permanently change the virtual world by destroying a wall or building a new house. Those changes have to be long-lasting so there will be a meaningful virtual society or meaningful economy in that virtual world.
What are the key components that are required to make persistent computing possible?
Yuan: The first is storage. In most of today’s video games, users can destroy a building, only for it to be restored the next time the user logs in to the game. But in a persistent virtual world the current status of the virtual world needs to be stored constantly. Users can create or destroy something in that world and the next time they log in, those changes will still be there. These kinds of things have to be properly stored—which means a very large amount of data needs to be stored.
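One common way to get that durability, sketched below in Python, is an append-only event log that gets replayed to rebuild the world on the next login. The event types and world model are invented for illustration; a production metaverse would layer databases, snapshots, and distributed storage on the same basic idea.

import json

LOG_FILE = "world_log.jsonl"  # append-only record of every change

def record(event):
    # Persist each change the moment it happens.
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(event) + "\n")

def rebuild_world():
    """Replay the full log so users' traces persist across sessions."""
    world = {}
    try:
        with open(LOG_FILE) as f:
            for line in f:
                event = json.loads(line)
                if event["type"] == "build":
                    world[event["object"]] = event["state"]
                elif event["type"] == "destroy":
                    world.pop(event["object"], None)
    except FileNotFoundError:
        pass  # brand-new world, nothing recorded yet
    return world

record({"type": "build", "object": "house_1", "state": "stone"})
record({"type": "destroy", "object": "wall_7"})
print(rebuild_world())  # the house is still standing on the next login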
It’s also important to support persistence from a modeling perspective because, as we can imagine, people will demand higher and higher quality experiences. To do this we need a larger scale in the future as well as finer granularity, or more detail, to make those virtual objects and environments more realistic.
“The metaverse is a truly long-term vision. We may need another 15 to 20 years, or even longer, to make it happen.”
That also requires technology to support upgrading the virtual world on the fly. For example, let’s say the building block of your virtual world would be at the brick level, but later, along with the advancement of technologies, we may be able to bring that detail level to grains of sand.
Along with that upgrade the buildings that users created before will have to be maintained. So that raises some challenges, like: How can we support modeling and operation of virtual worlds without interruption, providing continuous experiences for the users?
You say you need a lot more storage to maintain all the information. But does that just mean more powerful memory technologies, or is it more complicated than that?
Yuan: Larger storage capacity and low power consumption will be necessary. This may also be an important factor, because some people are concerned that the metaverse will consume lots of energy, making the entire thing not sustainable. But we also need to address some other issues.
Let’s say the ultimate goal for the metaverse is to be able to create virtual universes that are indistinguishable from a real physical universe. In order to simulate and store, for example, a million virtual atoms, how many physical atoms do we need? That is ultimately one of the questions we need to answer in terms of connecting the universe of atoms to the universe of bits. We will hit a wall in terms of how many physical atoms we need to create an equal or larger number of virtual atoms. That requires not only innovations in storage, computation, and communications technology in general but also some special innovations in modeling, and engines dedicated for the metaverse. It could be some data-compression technology, but that’s just one of the directions we need to explore.
I think communications are equally important. Most people believe the metaverse means lots of users—which means we will definitely need innovations in communications to support real-time, massive user experiences. If we are talking about supporting a million users in a virtual city, we definitely need some disruptive innovations from the communications perspective. That’s also an integral component of persistent computing.
How will the issues that you’ve identified be solved?
Yuan: That’s part of the mission for the Persistent Computing for Metaverse Initiative. It serves as a platform for information exchange and discussions on the gaps in today’s existing technologies.
Maybe we already have most of the technologies in place, but we just need to find a way to integrate them together. Or maybe there are gaps where we need to do R&D on some particular subarea of technology. With this gap analysis, we will know what other innovations are needed—which could provide some direction for academia and industry.
The initiative plans to host events, publish white papers with its findings, and propose new standards.
A lot of the development in the space is happening internally in companies. Is there an appetite to collaborate, or is there a danger that everyone is racing to set up walled gardens?
Yuan: I wouldn’t say that it’s a danger, but I don’t think it’s efficient. That’s why I think standards will play a leading role to help pave the way for the metaverse. We need to develop standards to identify gaps and set up road maps for the industry. The industry will then have some basis for discussion and how we can work together to make this happen. Working together is also important so that companies aren’t reinventing wheels in different silos.
I think the metaverse will have a profound impact on all industries.
Is a lot of this kind of pie-in-the-sky at the moment? Are we still a long way from persistent virtual worlds?
Yuan: The metaverse is a truly long-term vision. We may need another 15 to 20 years, or even longer, to make it happen. I believe the metaverse should be indistinguishable from our current universe, and to do that we need to address many grand challenges. Some of these include how to create a persistent virtual universe and how to make our perception realistic enough. Currently we are using XR [extended reality] devices, but eventually we may need innovations in brain-machine interface or neural interface technologies to be able to comprehensively take over the interface between our consciousness and the virtual world. But along with this long-term development there are also preliminary embodiments of the metaverse that can be useful and generate value for industry and for consumers.
The controversial decision to approve a new coalmine in Cumbria was met with dismay by UK environmental groups, with many wondering what it meant for a country that has pitched itself as a leader in the green energy revolution. But in the town of Whitehaven, where the mine is to be situated, the feeling is very different, with broad support across the political spectrum. The Guardian's Richard Sprenger travels to the Mirehouse estate, a short distance from the Woodhouse Colliery site, to find out what lies behind this positivity in the face of a profound climate crisis.
In this article, we explore the top 10 AI tools that are driving innovation and efficiency in various industries. These tools are designed to automate repetitive tasks, improve workflow, and increase productivity. The tools included in our list are some of the most advanced and widely used in the market, and are suitable for a variety of applications. Some of the tools focus on natural language processing, such as ChatGPT and Grammarly, while others focus on image and video generation, such as DALL-E and Lumen5. Other tools, such as OpenAI Codex, Tabnine, Canva, Jasper AI, and Surfer SEO, are designed to help with specific tasks such as code understanding, content writing, and website optimization. This list is a great starting point for anyone looking to explore the possibilities of AI and how it can be applied to their business or project.
So let’s dive in.
ChatGPT is a large language model that generates human-like responses to a variety of prompts. It can be used for tasks such as language translation, question answering, and text completion. It can handle a wide range of topics and styles of writing, and generates coherent and fluent text, but should be used with care as it may generate text that is biased, offensive, or factually incorrect.
Pros:
Cons:
Overall, ChatGPT is a powerful tool for natural language processing, but it should be used with care and with an understanding of its limitations.
DALL-E is a generative model developed by OpenAI that is capable of generating images from text prompts. It is based on the GPT-3 architecture, which is a transformer-based neural network language model that has been trained on a massive dataset of text. DALL-E can generate images that are similar to a training dataset and it can generate high-resolution images that are suitable for commercial use.
Pros:
Cons:
Overall, DALL-E is a powerful AI-based tool for generating images, it can be used for a variety of applications such as creating images for commercial use, gaming, and other creative projects. It is important to note that the generated images should be reviewed and used with care, as they may not be entirely original and could be influenced by the training data.
Lumen5 is a content creation platform that uses AI to help users create videos, social media posts, and other types of content. It has several features that make it useful for content creation and marketing, including:
Pros:
Cons:
Overall, Lumen5 is a useful tool for creating content quickly and easily, it can help automate the process of creating videos, social media posts, and other types of content. However, the quality of the generated content may vary depending on the source material and it is important to review and edit the content before publishing it.
Grammarly is a writing-enhancement platform that uses AI to check for grammar, punctuation, and spelling errors in the text. It also provides suggestions for improving the clarity, concision, and readability of the text. It has several features that make it useful for improving writing, including:
Pros:
Cons:
OpenAI Codex is a system developed by OpenAI that can create code from natural language descriptions of software tasks. The system is based on the GPT-3 model and can generate code in multiple programming languages.
Pros:
Cons:
Overall, OpenAI Codex is a powerful tool that can help automate the process of writing code and make it more accessible to non-technical people. However, the quality of the generated code may vary depending on the task description and it is important to review and test the code before using it in a production environment. It is important to use the tool as an aid, not a replacement for the developer's knowledge.
Tabnine is a code completion tool that uses AI to predict and suggest code snippets. It is compatible with multiple programming languages and can be integrated with various code editors.
Pros:
Cons:
Overall, Tabnine is a useful tool for developers that can help improve coding efficiency and reduce the time spent on writing code. However, it is important to review the suggestions provided by the tool and use them with caution, as they may not always be accurate or appropriate. It is important to use the tool as an aid, not a replacement for the developer's knowledge.
Jasper is a content writing and content generation tool that uses artificial intelligence to identify the best words and sentences for your writing style and medium in the most efficient, quick, and accessible way.
Pros:
Cons:
Surfer SEO is a software tool designed to help website owners and digital marketers improve their search engine optimization (SEO) efforts. The tool provides a variety of features that can be used to analyze a website's on-page SEO, including:
Features:
Pros:
Cons:
Overall, Surfer SEO can be a useful tool for website owners and digital marketers looking to improve their SEO efforts. However, it is important to remember that it is just a tool and should be used in conjunction with other SEO best practices. Additionally, the tool is not a guarantee of better ranking.
Zapier is a web automation tool that allows users to automate repetitive tasks by connecting different web applications together. It does this by creating "Zaps" that automatically move data between apps, and can also be used to trigger certain actions in one app based on events in another app.
Features:
Pros:
Cons:
Overall, Zapier is a useful tool that can help users automate repetitive tasks and improve workflow. It can save time and increase productivity by connecting different web applications together. However, it may require some technical skills, and some features require a paid subscription. It is also important not to rely on it so heavily that you lose sight of how the underlying apps actually work.
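Under the hood, a "Zap" boils down to the trigger-action pattern sketched below, which a developer could hand-roll for a single integration; Zapier's value is packaging it across thousands of apps with no code. The endpoint URL and payload fields here are placeholders, not Zapier's actual API.

import json
from urllib import request

def on_new_form_submission(payload):
    """Trigger: a (hypothetical) form app notifies us of a new submission."""
    # Action: forward the lead to a (hypothetical) CRM webhook.
    body = json.dumps({
        "name": payload.get("name"),
        "email": payload.get("email"),
        "source": "signup-form",
    }).encode()
    req = request.Request(
        "https://crm.example.com/webhook",  # placeholder endpoint
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # one "Zap": trigger -> action
        return resp.status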
Compose AI is a company that specializes in developing natural language generation (NLG) software. Their software uses AI to automatically generate written or spoken text from structured data, such as spreadsheets, databases, or APIs.
Features:
Pros:
Cons:
Overall, Compose AI's NLG software can be a useful tool for automating the process of creating written or spoken content from structured data. However, the quality of the generated content may vary depending on the data source, and it is essential to review the generated content before using it in a production environment. It is important to use the tool as an aid, not a replacement for the understanding of the data.
AI tools are becoming increasingly important in today's business and technology landscape. They are designed to automate repetitive tasks, improve workflow, and increase productivity. The top 10 AI tools included in this article are some of the most advanced and widely used in the market, and are suitable for various applications. Whether you're looking to improve your natural language processing, create high-resolution images, or optimize your website, there is an AI tool that can help. It's important to research and evaluate the different tools available to determine which one is the best fit for your specific needs. As AI technology continues to evolve, these tools will become even more powerful and versatile and will play an even greater role in shaping the future of business and technology.
Three days before astronauts left on Apollo 8, the first-ever flight around the moon, NASA’s safety chief, Jerome Lederer, gave a speech that was at once reassuring and chilling. Yes, he said, the United States’ moon program was safe and well-planned—but even so, “Apollo 8 has 5,600,000 parts and one and one half million systems, subsystems, and assemblies. Even if all functioned with 99.9 percent reliability, we could expect 5,600 defects.”
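Lederer's arithmetic checks out: multiply the part count by the 0.1 percent failure rate and you get his expected defect tally.

parts = 5_600_000
reliability = 0.999
expected_defects = parts * (1 - reliability)
print(round(expected_defects))  # 5600, just as Lederer said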
The mission, in December 1968, was nearly flawless—a prelude to the Apollo 11 landing the next summer. But even today, half a century later, engineers wrestle with the sheer complexity of the machines they build to go to space. NASA’s Artemis I, its Space Launch System rocket mandated by Congress in 2010, endured a host of delays before it finally launched in November 2022. And Elon Musk’s SpaceX may be lauded for its engineering acumen, but it struggled for six years before its first successful flight into orbit.
Relativity envisions 3D-printing facilities someday on the Martian surface, fabricating much of what people from Earth would need to live there.
Is there a better way? An upstart company called Relativity Space is about to try one. Its Terran 1 rocket, the company says, has about a tenth as many parts as comparable launch vehicles do, because it is made through 3D printing. Instead of bending metal and milling and welding, engineers program a robot to deposit layers of metal alloy in place.
Relativity’s first rocket, the company says, is ready to go from Launch Complex 16 at Cape Canaveral, Fla. When it lifts off, the company says it will stream the launch on YouTube.
Artist’s concept of Relativity’s planned Terran R rocket. The company says it should be able to carry a 20,000-kilogram payload into low Earth orbit. Relativity
“Over 85 percent of the rocket by mass is 3D printed,” said Scott Van Vliet, Relativity’s head of software engineering. “And what’s really cool is not only are we reducing the amount of parts and labor that go into building one of these vehicles over time, but we’re also reducing the complexity, we’re reducing the chance of failure when you reduce the part count, and you streamline the build process.”
Relativity says it can put together a Terran rocket in two months, compared to two years for some conventionally built ones. The speed and cost of making a prototype—say, for wind-tunnel testing—are reduced because you tell the printer to make a scaled-down model. There is less waste because the process is additive. And if something needs to be modified, you reprogram the 3D printer instead of slow, expensive retooling.
Investors have noticed. The company says financial backers have included BlackRock, Y Combinator and the entrepreneur Mark Cuban.
“If you walk into any rocket factory today other than ours,” said Josh Brost, the company’s head of business development, “you still will see hundreds of thousands of parts coming from thousands of vendors, and still being assembled using lots of touch labor and lots of big-fix tools.”
Terran 1, rated as capable of putting a 1,250-kilogram payload in low Earth orbit, is mainly intended as a test bed. Relativity has signed up a variety of future customers for satellite launches, but the first Terran 1 (“Terran” means “earthling”) will not carry a paying customer’s satellite. The first flight has been given the playful name “Good Luck, Have Fun”—GLHF for short. Eventually, if things are going well, Relativity will build a larger booster, called Terran R, which the company hopes will compete with the SpaceX Falcon 9 for launches of up to 20,000 kg. Relativity says the Terran R should be fully reusable, including the upper stage—something that other commercial launch companies have not accomplished. In current renderings, the rocket is, as the company puts it, “inspired by nature,” shaped to slice through the atmosphere as it ascends and comes back for recovery.
A number of Relativity’s top people came from Musk’s SpaceX or Jeff Bezos’s space company, Blue Origin, and, like Musk, they say their vision is a permanent presence on Mars. Brost calls it “the long-term North Star for us.” They say they can envision 3D-printing facilities someday on the Martian surface, fabricating much of what people from Earth would need to live there. “For that to happen,” says Brost, “you need to have manufacturing capabilities that are autonomous and incredibly flexible.”
Relativity’s fourth-generation Stargate 3D printer. Relativity
Just how Relativity will do all these things is a work in progress. The company says its 3D technology will help it work iteratively—finding mistakes as it goes, then correcting them as it prints the next rocket, and the next, and so on.
“In traditional manufacturing, you have to do a ton of work up front and have a lot of the design features done well ahead of time,” says Van Vliet. “You have to invest in fixed tooling that can often take years to build before you’ve actually developed an article for your launch vehicle. With 3D printing, additive manufacturing, we get to building something very, very quickly.”
The next step is to get the first rocket off the pad. Will it succeed? Brost says a key test will be getting through max q—the point of maximum dynamic pressure on the rocket as it accelerates through the atmosphere before the air around it thins out.
“If you look at history, at new space companies doing large rockets, there’s not a single one that’s done their first rocket on their first try. It would be quite an achievement if we were able to achieve orbit on our inaugural launch,” says Brost.
“I’ve been to many launches in my career,” he says, “and it never gets less exciting or nerve wracking to me.”
Each January, the editors of IEEE Spectrum offer up some predictions about technical developments we expect to be in the news over the coming year. You’ll find a couple dozen of those described in the following special report. Of course, the number of things we could have written about is far higher, so we had to be selective in picking which projects to feature. And we’re not ashamed to admit, gee-whiz appeal often shaped our choices.
For example, this year’s survey includes an odd pair of new aircraft that will be taking to the skies. One, whose design was inspired by the giant airships of years past, is longer than a football field; the other, a futuristic single-seat vertical-takeoff craft powered by electricity, is about the length of a small car.
While some of the other stories might not light up your imagination as much, they highlight important technical issues the world faces—like the challenges of shifting from fossil fuels to a hydrogen-based energy economy or the threat that new plutonium breeder reactors in China might accelerate the proliferation of nuclear weapons. So whether you prefer reading about topics that are heavy or light (even lighter than air), you should find something here to get you warmed up for 2023.
This article appears in the January 2023 print issue.
Top Tech 2023: A Special Report
Preview exciting technical developments for the coming year.
Can This Company Dominate Green Hydrogen?
Fortescue will need more electricity-generating capacity than France.
Pathfinder 1 could herald a new era for zeppelins
A New Way to Speed Up Computing
Blue microLEDs bring optical fiber to the processor.
The Personal-Use eVTOL Is (Almost) Here
Opener’s BlackFly is a pulp-fiction fever dream with wings.
Baidu Will Make an Autonomous EV
Its partnership with Geely aims at full self-driving mode.
China Builds New Breeder Reactors
The power plants could also make weapons-grade plutonium.
Economics Drives a Ray-Gun Resurgence
Lasers should be cheap enough to use against drones.
A Cryptocurrency for the Masses or a Universal ID?
What Worldcoin’s killer app will be is not yet clear.
The company’s Condor chip will boast more than 1,000 qubits.
Vagus-nerve stimulation promises to help treat autoimmune disorders.
New satellites can connect directly to your phone.
The E.U.’s first exascale supercomputer will be built in Germany.
A dozen more tech milestones to watch for in 2023.
The marketing industry is turning to artificial intelligence (AI) as a way to save time and execute smarter, more personalized campaigns. 61% of marketers say AI software is the most important aspect of their data strategy.
If you’re late to the AI party, don’t worry. It’s easier than you think to start leveraging artificial intelligence tools in your marketing strategy. Here are 11 AI marketing tools every marketer should start using today.
Personalize is an AI-powered technology that helps you identify and produce highly targeted sales and marketing campaigns by tracking the products and services your contacts are most interested in at any given time. The platform uses an algorithm to identify each contact’s top three interests, which are updated in real-time based on recent site activity.
Key Features
Seventh Sense provides behavioral analytics that help you win attention in your customers’ overcrowded email inboxes. Choosing the best day and time to send an email is always a gamble. And while some days of the week generally get higher open rates than others, you’ll never be able to nail down a time that’s best for every customer. Seventh Sense eases the stress of having to figure out the perfect send time and day for your email campaigns. The AI-based platform figures out the best timing and email frequency for each contact based on when they’re opening emails. The tool is primarily geared toward HubSpot and Marketo customers.
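For a sense of how send-time optimization works under the hood (a simplification of what Seventh Sense actually models), you can bucket each contact's historical opens by hour and schedule future sends around the most common one. The contacts and timestamps below are invented:

```python
# Sketch of per-contact send-time optimization: pick each contact's
# modal open hour from historical opens. Data is invented; real systems
# also model day-of-week and frequency.
from collections import Counter
from datetime import datetime

opens = {  # contact -> historical open timestamps (hypothetical)
    "ana@example.com": ["2023-01-03T08:15", "2023-01-10T08:40", "2023-01-17T21:05"],
    "ben@example.com": ["2023-01-04T13:02", "2023-01-11T12:55", "2023-01-18T13:20"],
}

for contact, stamps in opens.items():
    hours = [datetime.fromisoformat(s).hour for s in stamps]
    best_hour, _ = Counter(hours).most_common(1)[0]
    print(f"{contact}: schedule sends around {best_hour}:00")
```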
Key Features
Phrasee uses artificial intelligence to help you write more effective subject lines. With its AI-based Natural Language Generation system, Phrasee uses data-driven insights to generate millions of natural-sounding copy variants that match your brand voice. The model is end-to-end, meaning when you feed the results back to Phrasee, the prediction model rebuilds so it can continuously learn from your audience.
Key Features
HubSpot Search Engine Optimization (SEO) is an integral tool for the Human Content team. It uses machine learning to determine how search engines understand and categorize your content. HubSpot SEO helps you improve your search engine rankings and outrank your competitors. Search engines reward websites that organize their content around core subjects, or topic clusters. HubSpot SEO helps you discover and rank for the topics that matter to your business and customers.
Key Features
When you’re limited to testing two variables against each other at a time, it can take months to get the results you’re looking for. Evolv AI lets you test all your ideas at once. It uses advanced algorithms to identify the top-performing concepts, combine them with each other, and repeat the process to achieve the best site experience.
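Conceptually, this is an evolutionary search over page variants. The toy sketch below (not Evolv AI's actual algorithm) scores a population of headline/button combinations, keeps the best half, and recombines the winners' elements:

```python
# Toy evolutionary testing loop. conversion_rate() is a random stand-in
# for measuring real visitor behavior; all copy is invented.
import random

HEADLINES = ["Save time", "Work smarter", "Try it free"]
BUTTONS = ["Start now", "Get a demo"]

def conversion_rate(variant):
    # Hypothetical stand-in: in production this comes from live traffic.
    return random.random()

population = [(h, b) for h in HEADLINES for b in BUTTONS]
for generation in range(3):
    scored = sorted(population, key=conversion_rate, reverse=True)
    winners = scored[: len(scored) // 2]          # keep the best half
    population = winners + [                      # recombine winning elements
        (random.choice(winners)[0], random.choice(winners)[1])
        for _ in range(len(scored) - len(winners))
    ]
print("Surviving variants:", population)
```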
Key Features
Acrolinx is a content alignment platform that helps brands scale content production and improve its quality. It’s geared toward enterprises – its major customers include big brands like Google, Adobe, and Amazon – to help them scale their writing efforts. Instead of spending time chasing down and fixing typos in multiple places throughout an article or blog post, you can use Acrolinx to do it all in one place. You start by setting your preferences for style, grammar, tone of voice, and company-specific word usage. Then, Acrolinx checks and scores your existing content to find what’s working and suggest areas for improvement. The platform provides real-time guidance and suggestions to make writing better and strengthen weak pages.
Key features
MarketMuse uses an algorithm to help marketers build content strategies. The tool shows you where to target keywords to rank in specific topic categories, and recommends keywords you should go after if you want to own particular topics. It also identifies gaps and opportunities for new content and prioritizes them by their probable impact on your rankings. The algorithm compares your content with thousands of articles related to the same topic to uncover what’s missing from your site.
Key features:
Copilot is a suite of tools that help eCommerce businesses maintain real-time communication with customers around the clock at every stage of the funnel. Promote products, recover shopping carts and send updates or reminders directly through Messenger.
Key features:
Yotpo’s deep learning technology evaluates your customers’ product reviews to help you make better business decisions. It identifies key topics that customers mention related to your products—and their feelings toward them. The AI engine extracts relevant reviews from past buyers and presents them in smart displays to convert new shoppers. Yotpo also saves you time moderating reviews. The AI-powered moderation tool automatically assigns a score to each review and flags reviews with negative sentiment so you can focus on quality control instead of manually reviewing every post.
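The workflow is easier to picture with a drastically simplified sketch: score each review against a sentiment lexicon and flag negatives for human follow-up. Yotpo's real engine uses deep learning; this toy version only illustrates the moderation flow, and the word lists and reviews are invented:

```python
# Crude lexicon-based review moderation sketch (illustration only).
NEGATIVE = {"broke", "refund", "terrible", "late", "disappointed"}
POSITIVE = {"love", "great", "perfect", "fast", "recommend"}

def score(review):
    words = set(review.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

reviews = ["Love it, great fit", "Arrived late and broke in a week"]
for r in reviews:
    verdict = "FLAG for moderation" if score(r) < 0 else "auto-approve"
    print(f"{verdict} -> {r}")
```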
Key features:
Albert is a self-learning software that automates the creation of marketing campaigns for your brand. It analyzes vast amounts of data to run optimized campaigns autonomously, allowing you to feed in your own creative content and target markets, and then uses data from its database to determine key characteristics of a serious buyer. Albert identifies potential customers that match those traits, and runs trial campaigns on a small group of customers—with results refined by Albert itself—before launching on a larger scale.
Albert plugs into your existing marketing technology stack, so you still have access to your accounts, ads, search, social media, and more. Albert maps tracking and attribution to your source of truth so you can determine which channels are driving your business.
Key features:
There are many tools and companies out there that offer AI tools, but this is a small list of resources that we have found to be helpful. If you have any other suggestions, feel free to share them in the comments below this article. As marketing evolves at such a rapid pace, new marketing strategies will be invented that we haven't even dreamed of yet. But for now, this list should give you a good starting point on your way to implementing AI into your marketing mix.
Note: This article contains affiliate links, meaning we make a small commission if you buy any premium plan from our link.
Grammarly is a tool that checks for grammatical errors, spelling, and punctuation. It gives you comprehensive feedback on your writing. You can use this tool to proofread and edit articles, blog posts, emails, etc.
Grammarly also detects all types of mistakes, including sentence structure issues and misused words. It gives you suggestions on style, punctuation, spelling, and grammar, all in real time. The free version covers the basics, like identifying grammar and spelling mistakes, whereas the Premium version offers a lot more functionality: it detects plagiarism in your content, suggests word choices, and improves fluency.
ProWritingAid is a style and grammar checker for content creators and writers. It helps to optimize word choice, punctuation errors, and common grammar mistakes, providing detailed reports to help you improve your writing.
ProWritingAid can be used as an add-on to WordPress, Gmail, and Google Docs. The software also offers helpful articles, videos, quizzes, and explanations to help improve your writing.
Here are some key features of ProWritingAid:
Grammarly and ProWritingAid are well-known grammar-checking software. However, if you're like most people who can't decide which to use, here are some different points that may be helpful in your decision.
As both writing assistants are great in their own way, you need to choose the one that suits you best.
Both ProWritingAid and Grammarly are awesome writing tools, without a doubt. But in my experience, Grammarly is the winner here because of how easily it helps you review and edit your content. Grammarly highlights all the mistakes in your writing within seconds of copying and pasting the content into Grammarly’s editor or using the software’s native feature in other text editors.
Not only does it identify tiny grammatical and spelling errors, it also tells you when you overlook punctuation where it is needed. And, beyond its plagiarism-checking capabilities, Grammarly helps you proofread your content. Even better, the software offers a free plan that gives you access to some of its features.
Are you searching for an eCommerce platform to help you build an online store and sell products?
In this Sellfy review, we'll talk about how this eCommerce platform can let you sell digital products while keeping full control of your marketing.
And the best part? Starting your business can be done in just five minutes.
Let us then talk about the Sellfy platform and all the benefits it can bring to your business.
Sellfy is an eCommerce solution that allows digital content creators, including writers, illustrators, designers, musicians, and filmmakers, to sell their products online. Sellfy provides a customizable storefront where users can display their digital products and embed "Buy Now" buttons on their website or blog. Sellfy product pages enable users to showcase their products from different angles with multiple images and previews from Soundcloud, Vimeo, and YouTube. Files of up to 2GB can be uploaded to Sellfy, and the company offers unlimited bandwidth and secure file storage. Users can also embed their entire store or individual project widgets in their site, with the ability to preview how widgets will appear before they are displayed.
Sellfy includes:
Sellfy is a powerful e-commerce platform that helps you personalize your online storefront. You can add your logo, change colors, revise navigation, and edit the layout of your store. Sellfy also allows you to create a full shopping cart so customers can purchase multiple items. And Sellfy gives you the ability to set your language or let customers see a translated version of your store based on their location.
Sellfy gives you the option to host your store directly on its platform, add a custom domain to your store, and use it as an embedded storefront on your website. Sellfy also optimizes its store offerings for mobile devices, allowing for a seamless checkout experience.
Sellfy allows creators to host all their products and sell all of their digital products on one platform. Sellfy also does not place storage limits on your store but recommends that files be no larger than 5GB. Creators can sell both standard and subscription-based products in any file format that is supported by the online marketplace. Customers can download products instantly after making a purchase – there is no waiting period.
You can organize your store by creating your product categories, sorting by any characteristic you choose. Your title, description, and the image will be included on each product page. In this way, customers can immediately evaluate all of your products. You can offer different pricing options for all of your products, including "pay what you want," in which the price is entirely up to the customer. This option allows you to give customers control over the cost of individual items (without a minimum price) or to set pricing minimums—a good option if you're in a competitive market or when you have higher-end products. You can also offer set prices per product as well as free products to help build your store's popularity.
Sellfy is ideal for selling digital content, such as ebooks. But it does not allow you to sell copyrighted material (that you don't have rights to distribute).
Sellfy offers several ways to share your store, enabling you to promote your business on different platforms. Sellfy lets you integrate it with your existing website using "buy now" buttons, embed your entire storefront, or embed certain products so you can reach more people. Sellfy also enables you to connect with your Facebook page and YouTube channel, maximizing your visibility.
Sellfy is a simple online platform that allows customers to buy your products directly through your store. Sellfy has two payment processing options: PayPal and Stripe. You will receive instant payments with both of these processors, and your customer data is protected by Sellfy's secure (PCI-compliant) payment security measures. In addition to payment security, Sellfy provides anti-fraud tools to help protect your products including PDF stamping, unique download links, and limited download attempts.
The Sellfy platform includes marketing and analytics tools to help you manage your online store. You can send email product updates and collect newsletter subscribers through the platform. With Sellfy, you can also offer discount codes and product upsells, as well as create and track Facebook and Twitter ads for your store. The software's analytics dashboard will help you track your best-performing products, generated revenue, traffic channels, top locations, and overall store performance.
To expand functionality and make your e-commerce store run more efficiently, Sellfy offers several integrations. Google Analytics and Webhooks, as well as integrations with Patreon and Facebook Live Chat, are just a few of the options available. Sellfy allows you to connect to Zapier, which gives you access to hundreds of third-party apps, including tools like Mailchimp, Trello, Salesforce, and more.
The free plan comes with:
Starter plan comes with:
The business plan comes with:
The premium plan comes with:
Sellfy has its benefits and downsides, but fortunately, the pros outweigh the cons.
In this article, we have taken a look at some of the biggest benefits associated with using Sellfy for eCommerce. Once you compare these benefits to what you get with other platforms such as Shopify, you should find that it is worth your time to consider Sellfy for your business. This article should answer most of your questions, but if you still have some, let me know in the comment section below and I will be happy to answer them.
Note: This article contains affiliate links, which means we make a small commission if you buy a Sellfy premium plan from our link.
SEMrush and Ahrefs are among the most popular tools in the SEO industry. Both companies have been in business for years and have thousands of customers per month.
If you're a professional SEO or trying to do digital marketing on your own, at some point you'll likely consider using a tool to help with your efforts. Ahrefs and SEMrush are two names that will likely appear on your shortlist.
In this guide, I'm going to help you learn more about these SEO tools and how to choose the one that's best for your purposes.
What is SEMrush?
SEMrush is a popular SEO tool with a wide range of features; it's the leading competitor-research service for online marketers. SEMrush's Keyword Magic tool offers over 20 billion keywords for Google, constantly updated, and the company bills it as the largest keyword database.
The program began in 2007 as SeoQuake, a small Firefox extension.
Features
Ahrefs is a leading SEO platform that offers a set of tools to grow your search traffic, research your competitors, and monitor your niche. The company was founded in 2010, and it has become a popular choice among SEO tools. Ahrefs has a keyword index of over 10.3 billion keywords and offers accurate, extensive backlink data updated every 15 to 30 minutes; the company bills it as the world's most extensive backlink index.
Features
Direct Comparisons: Ahrefs vs SEMrush
Now that you know a little more about each tool, let's take a look at how they compare. I'll analyze each tool to see how they differ in interfaces, keyword research resources, rank tracking, and competitor analysis.
User Interface
Ahrefs and SEMrush both offer comprehensive information and quick metrics regarding your website's SEO performance. However, Ahrefs takes a bit more of a hands-on approach to getting your account fully set up, whereas SEMrush's simpler dashboard can give you access to the data you need quickly.
In this section, we provide a brief overview of the elements found on each dashboard and highlight the ease with which you can complete tasks.
AHREFS
The Ahrefs dashboard is less cluttered than that of SEMrush, and its primary menu is at the very top of the page, with a search bar designed only for entering URLs.
Additional features of the Ahrefs platform include:
SEMRUSH
When you log into the SEMrush tool, you will find four main modules. These include information about your domains, organic keyword analysis, ad keywords, and site traffic.
You'll also find some other options like
Both Ahrefs and SEMrush have user-friendly dashboards, but Ahrefs is less cluttered and easier to navigate. On the other hand, SEMrush offers dozens of extra tools, including access to customer support resources.
When deciding on which dashboard to use, consider what you value in the user interface, and test out both.
If you're looking to track your website's search engine ranking, rank tracking features can help. You can also use them to monitor your competitors.
Let's take a look at Ahrefs vs. SEMrush to see which tool does a better job.
The Ahrefs Rank Tracker is simpler to use. Just type in the domain name and keywords you want to analyze, and it spits out a report showing you the search engine results page (SERP) ranking for each keyword you enter.
Rank Tracker looks at the ranking performance of keywords and compares them with the top rankings for those keywords. Ahrefs also offers:
You'll see metrics that help you understand your visibility, traffic, average position, and keyword difficulty.
It gives you an idea of whether a keyword would be profitable to target or not.
SEMRush offers a tool called Position Tracking. This tool is a project tool—you must set it up as a new project. Below are a few of the most popular features of the SEMrush Position Tracking tool:
All subscribers are given regular data updates and mobile search rankings upon subscribing.
The platform provides opportunities to track several SERP features, including Local tracking.
Intuitive reports allow you to track statistics for the pages on your website, as well as the keywords used in those pages.
Identify pages that may be competing with each other using the Cannibalization report.
Ahrefs is a more user-friendly option. It takes seconds to enter a domain name and keywords. From there, you can quickly decide whether to proceed with that keyword or figure out how to rank better for other keywords.
SEMrush allows you to check your mobile rankings and ranking updates daily, which is something Ahrefs does not offer. SEMrush also offers social media rankings, a tool you won't find within the Ahrefs platform. Both are good; let me know in the comments which one you like.
Keyword research is closely related to rank tracking, but it's used for deciding which keywords you plan on using for future content rather than those you use now.
When it comes to SEO, keyword research is the most important thing to consider when comparing the two platforms.
The Ahrefs Keyword Explorer provides you with thousands of keyword ideas and filters search results based on the chosen search engine.
Ahrefs supports several features, including:
SEMrush's Keyword Magic Tool has over 20 billion keywords for Google. You can type in any keyword you want, and a list of suggested keywords will appear.
The Keyword Magic Tool also lets you:
Both of these tools offer keyword research features and allow users to break down complicated tasks into something that can be understood by beginners and advanced users alike.
If you're interested in keyword suggestions, SEMrush appears to have more keyword suggestions than Ahrefs does. It also continues to add new features, like the Keyword Gap tool and SERP Questions recommendations.
Both platforms offer competitor analysis tools, eliminating the need to come up with keywords off the top of your head. Each tool helps you find the keywords your competitors already rank for, so you know which ones will be valuable to you.
Ahrefs' domain comparison tool lets you compare up to five websites (your website and four competitors) side by side. It also shows you how your site ranks against the others on metrics such as backlinks, domain rating, and more.
Use the Competing Domains section to see a list of your most direct competitors, and explore how many keyword matches your competitors have.
To find more information about your competitor, you can look at the Site Explorer and Content Explorer tools and type in their URL instead of yours.
SEMrush provides a variety of insights into your competitors' marketing tactics. The platform enables you to research your competitors effectively. It also offers several resources for competitor analysis including:
Traffic Analytics helps you identify where your audience comes from, how they engage with your site, what devices visitors use to view your site, and how your audiences overlap with other websites.
SEMrush's Organic Research examines your website's major competitors and shows their organic search rankings, keywords they are ranking for, and even if they are ranking for any (SERP) features and more.
The Market Explorer search field allows you to type in a domain and lists websites or articles similar to what you entered. Market Explorer also allows users to perform in-depth data analytics on these companies and markets.
SEMrush wins here because it has more tools dedicated to competitor analysis than Ahrefs. However, Ahrefs offers a lot of functionality in this area, too. It takes a combination of both tools to gain an advantage over your competition.
When it comes to keyword research, it can be hard to decide between the two.
Consider choosing Ahrefs if you:
Consider SEMrush if you:
Both tools are great. Choose the one which meets your requirements and if you have any experience using either Ahrefs or SEMrush let me know in the comment section which works well for you.
Are you looking for a new graphic design tool? Would you like to read a detailed review of Canva? It's one of the tools I love using. I am also writing my first ebook in Canva and will publish it soon on my site, where you will be able to download it for free. Let's start the review.
Canva has a web version and also a mobile app.
Canva is a free graphic design web application that allows you to create invitations, business cards, flyers, lesson plans, banners, and more using professionally designed templates. You can upload your own photos from your computer or from Google Drive, and add them to Canva's templates using a simple drag-and-drop interface. It's like having a basic version of Photoshop that doesn't require graphic design knowledge to use. It’s best for non-designers.
Canva is a great tool for small business owners, online entrepreneurs, and marketers who don’t have the time and want to edit quickly.
To create sophisticated graphics, a tool such as Photoshop is ideal. To use it, you’ll need to learn its hundreds of features, get familiar with the software, and ideally have a good background in design, too.
Running the latest version of Photoshop also requires a high-end computer.
This is where Canva comes in: with Canva you can do all that with a drag-and-drop interface. It’s easier to use, and it's free; a more affordable paid version is also available for $12.99 per month.
The product is available in three plans: Free, Pro ($12.99/month per user or $119.99/year for up to 5 people), and Enterprise ($30 per user per month, minimum 25 people).
To get started on Canva, you will need to create an account by providing your email address, Google, Facebook or Apple credentials. You will then choose your account type between student, teacher, small business, large company, non-profit, or personal. Based on your choice of account type, templates will be recommended to you.
You can sign up for a free trial of Canva Pro, or you can start with the free version to get a sense of whether it’s the right graphic design tool for your needs.
When you sign up for an account, Canva will suggest different post types to choose from. Based on the type of account you set up you'll be able to see templates categorized by the following categories: social media posts, documents, presentations, marketing, events, ads, launch your business, build your online brand, etc.
Start by choosing a template for your post or searching for something more specific. Search by social network name to see a list of post types on each network.
Next, you can choose a template. Choose from hundreds of templates that are ready to go, with customizable photos, text, and other elements.
You can start your design by choosing from a variety of ready-made templates, searching for a template matching your needs, or working with a blank template.
Inside the Canva designer, the Elements tab gives you access to lines and shapes, graphics, photos, videos, audio, charts, photo frames, and photo grids. The search box on the Elements tab lets you search everything on Canva.
To begin with, Canva has a large library of elements to choose from. To find them, be specific in your search query. You may also want to search in the following tabs to see various elements separately:
The Photos tab lets you search for and choose from millions of professional stock photos for your templates.
You can replace the photos in the templates to create a new look. This can also make the template more suited to your industry.
You can find photos on other stock photography sites like Pexels, Pixabay, and many more, or simply upload your own photos.
When you choose an image, Canva’s photo editing features let you adjust the photo’s settings (brightness, contrast, saturation, etc.), crop, or animate it.
When you subscribe to Canva Pro, you get access to a number of premium features, including the Background Remover. This feature allows you to remove the background from any stock photo in the library or any image you upload.
The Text tab lets you add headings, normal text, and graphical text to your design.
When you click on text, you'll see options to adjust the font, font size, color, format, spacing, and text effects (like shadows).
Canva Pro subscribers can choose from a large library of fonts on the Brand Kit or the Styles tab. Enterprise-level controls ensure that visual content remains on-brand, no matter how many people are working on it.
Create an animated image or video by adding audio to capture users’ attention in social news feeds.
If you want to use audio from another stock site or your own audio tracks, you can upload them in the Uploads tab or from the More option.
Want to create your own videos? Choose from thousands of stock video clips. You’ll find videos that range up to 2 minutes long.
You can upload your own videos as well as videos from other stock sites in the Uploads tab.
Once you have chosen a video, you can use the editing features in Canva to trim the video, flip it, and adjust its transparency.
On the Background tab, you’ll find free stock photos to serve as backgrounds on your designs. Change out the background on a template to give it a more personal touch.
The Styles tab lets you quickly change the look and feel of your template with just a click. And if you have a Canva Pro subscription, you can upload your brand’s custom colors and fonts to ensure designs stay on brand.
If you have a Canva Pro subscription, you’ll have a Logos tab. Here, you can upload variations of your brand logo to use throughout your designs.
With Canva, you can also create your own logos. Note that you cannot trademark a logo with stock content in it.
With Canva, free users can download and share designs to multiple platforms including Instagram, Facebook, Twitter, LinkedIn, Pinterest, Slack and Tumblr.
Canva Pro subscribers can create multiple post formats from one design. For example, you can start by designing an Instagram post, and Canva's Magic Resizer can resize it for other networks, Stories, Reels, and other formats.
Canva Pro subscribers can also use Canva’s Content Planner to post content on eight different accounts on Instagram, Facebook, Twitter, LinkedIn, Pinterest, Slack, and Tumblr.
Canva Pro allows you to work with your team on visual content. Designs can be created inside Canva, and then sent to your team members for approval. Everyone can make comments, edits, revisions, and keep track via the version history.
When it comes to printing your designs, Canva has you covered. With an extensive selection of printing options, they can turn your designs into anything from banners and wall art to mugs and t-shirts.
Canva Print is perfect for any business seeking to make a lasting impression. Create inspiring designs people will want to wear, keep, and share. Hand out custom business cards that leave a lasting impression on customers' minds.
The Canva app is available on the Apple App Store and Google Play. The Canva app has earned a 4.9-out-of-5-star rating from over 946.3K Apple users and a 4.5-out-of-5-star rating from over 6,996,708 Google users.
In addition to mobile apps, you can use Canva’s integration with other Internet services to add images and text from sources like Google Maps, Emojis, photos from Google Drive and Dropbox, YouTube videos, Flickr photos, Bitmojis, and other popular visual content elements.
In general, Canva is an excellent tool for those who need simple images for projects. If you are a graphic designer with experience, you will find Canva’s platform lacking in customization and advanced features – particularly vectors. But if you have little design experience, you will find Canva easier to use than advanced graphic design tools like Adobe Photoshop or Illustrator for most projects. If you have any queries let me know in the comments section.
If you are looking for the best WordPress plugins, you are in the right place. Here is a list of the best WordPress plugins to use on your blog to boost SEO, strengthen your security, and understand every aspect of your blog. Creating good content is one factor, but there are many WordPress plugins that perform different jobs and add to your success. So let's start.
For users who are serious about SEO, Yoast SEO will do the work to reach their goals. All they need to do is select a focus keyword, and the plugin will then optimize the page for that keyword.
Yoast offers many popular SEO WordPress plugin functions. It gives you real-time page analysis to optimize your content, images, meta descriptions, titles, and keywords. Yoast also checks the length of your sentences and paragraphs, whether you’re using enough transition words or subheadings, how often you use passive voice, and so on. Yoast can also tell Google whether or not to index a page or a set of pages.
A website running WordPress can put a lot of strain on a server, which increases the chances that the website will crash and harm your business. To avoid such an unfortunate situation and ensure that all your pages load quickly, you need a caching plugin like WP Rocket.
The WP Rocket plugin is designed to increase your website's speed. Instead of waiting for pages to be saved to the cache, WP Rocket turns on desired caching settings, like page caching and gzip compression, as soon as it is activated. The plugin also provides other features, such as CDN support and lazy image loading, to enhance your site speed.
Wordfence Security is a WordPress firewall and security scanner that keeps your site safe from malicious hackers, spam, and other online threats. The plugin comes with a web application firewall (WAF) powered by a Threat Defense Feed that helps prevent brute-force attacks by ensuring you set strong passwords and by limiting login attempts. It searches for malware and compares code, theme, and plugin files with the records in the WordPress.org repository to verify their integrity, and reports changes to you.
Wordfence security scanner provides you with actionable insights into your website's security status and will alert you to any potential threats, keeping it safe and secure. It also includes login security features that let you activate reCAPTCHA and two-factor authentication for your website.
Akismet can help prevent spam from appearing on your site. Every day, it automatically checks every comment against a global database of spam to block malicious content. With Akismet, you also won’t have to worry about innocent comments being caught by the filter or false positives. You can simply tell Akismet about those and it will get better over time. It also checks your contact form submissions against its global spam database and weeds out unnecessary fake information.
Contact Form 7 is a plug-in that allows you to create contact forms that make it easy for your users to send messages to your site. The plug-in was developed by Takayuki Miyoshi and lets you create multiple contact forms on the same site; it also integrates Akismet spam filtering and lets you customize the styling and fields that you want to use in the form. The plug-in provides CAPTCHA and Ajax submitting.
When you’re looking for an easy way to manage your Google Analytics-related web tracking services, MonsterInsights can help. You can add, customize, and integrate Google Analytics data with ease so you’ll be able to see how every webpage performs, which online campaigns bring in the most traffic, and which content readers engage with the most. It presents essentially the same data as Google Analytics, right inside your WordPress dashboard.
It is a powerful tool to keep track of your traffic stats. With it, you can view stats for your active sessions, conversions, and bounce rates. You’ll also be able to see your total revenue, the products you sell, and how your site is performing when it comes to referrals.
MonsterInsights offers a free plan that includes basic Google Analytics integration, data insights, and user activity metrics.
Pretty Links is a powerful WordPress plugin that enables you to easily cloak affiliate links on your website. It even allows you to redirect visitors based on a specific request, including permanent 301 and temporary 302/307 redirects.
Pretty Links also helps you automatically shorten URLs for your posts and pages.
You can also enable the auto-linking feature to automatically add affiliate links for certain keywords.
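Under the hood, link cloaking is just a slug-to-URL mapping plus an HTTP redirect. Here is a bare-bones sketch in Python (Pretty Links itself runs inside WordPress in PHP); the slugs and target URLs are invented:

```python
# Minimal link-cloaking sketch with Flask: pretty slug -> affiliate URL.
from flask import Flask, abort, redirect

app = Flask(__name__)

LINKS = {
    "go/hosting": ("https://affiliate.example.com/?ref=123", 301),  # permanent
    "go/sale": ("https://shop.example.com/spring-sale", 302),       # temporary
}

@app.route("/<path:slug>")
def pretty_link(slug):
    target = LINKS.get(slug)
    if target is None:
        abort(404)
    url, code = target
    return redirect(url, code=code)

# Run with: flask --app this_file run
```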
We hope you’ve found this article useful. We appreciate you reading and welcome your feedback if you have it.
Ginger VS Grammarly: When it comes to grammar checkers, Ginger and Grammarly are two of the most popular choices on the market. This article aims to highlight the specifics of each one so that you can make a more informed decision about the one you'll use.
If you are a writer, you must have heard of Grammarly before. With over 10 million users across the globe, it is probably the most popular AI writing-enhancement tool, so there is a high chance that you already know about it.
But today we are comparing Ginger and Grammarly, so let's define Grammarly here. Like Ginger, Grammarly is an AI writing assistant that checks for grammatical errors, spelling, and punctuation. The free version covers the basics, like identifying grammar and spelling mistakes, while the Premium version offers a lot more functionality: it detects plagiarism in your content, suggests word choices, and improves fluency.
Ginger is a writing enhancement tool that not only catches typos and grammatical mistakes but also suggests content improvements. As you type, it picks up on errors, shows you what’s wrong, and suggests a fix. It also provides you with synonyms and definitions of words and allows you to translate your text into dozens of languages.
In addition, the program provides a text reader, so you can gauge your writing’s conversational tone.
Grammarly and Ginger are two popular grammar checker software brands that help you to become a better writer. But if you’re undecided about which software to use, consider these differences:
Grammarly Score: 7/10
Ginger Score: 4/10
So Grammarly wins here.
For companies with three or more employees, the Business plan costs $12.50/month for each member of your team.
Ginger Wins Here
While both writing assistants are fantastic in their ways, you need to choose the one you want.
For example, go for Grammarly if you want a plagiarism tool included.
Choose Ginger if you want to write in languages other than English. I will list the differences for you in order to make the distinctions clearer.
Let me know in the comments section below which one you like, and share your opinions as well.
Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.
Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.
The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?
Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.
When you say you want a foundation model for computer vision, what do you mean by that?
Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.
What needs to happen for someone to build a foundation model for video?
Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.
Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.
It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.
Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.
“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI
I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.
I expect they’re both convinced now.
Ng: I think so, yes.
Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”
How do you define data-centric AI, and why do you consider it a movement?
Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.
The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.
You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?
Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.
When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?
Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.
“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
—Andrew Ng
For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
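As an editorial aside, the kind of tool Ng describes can be sketched in a few lines, assuming each image has labels from several annotators (the file names and labels below are invented): flag any example where the annotators disagree so it can be relabeled consistently.

```python
# Sketch of a label-consistency check: surface images whose annotators
# disagree. All file names and labels are invented.
from collections import Counter

annotations = {  # image -> labels from different annotators (hypothetical)
    "img_001.png": ["scratch", "scratch", "scratch"],
    "img_002.png": ["pit_mark", "scratch", "pit_mark"],
    "img_003.png": ["dent", "dent"],
}

for image, labels in annotations.items():
    counts = Counter(labels)
    top_label, top_votes = counts.most_common(1)[0]
    agreement = top_votes / len(labels)
    if agreement < 1.0:  # any disagreement at all
        print(f"REVIEW {image}: {dict(counts)} (agreement {agreement:.0%})")
```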
Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?
Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.
One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.
When you talk about engineering the data, what do you mean exactly?
Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.
For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
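That style of error analysis amounts to slicing evaluation results by a metadata tag and comparing accuracy per slice. A minimal sketch, with invented data:

```python
# Slice-based error analysis: break accuracy out by a metadata tag to
# find where the model underperforms. The eval log is invented.
from collections import defaultdict

results = [  # (noise_tag, was_prediction_correct) -- hypothetical eval log
    ("quiet", True), ("quiet", True), ("quiet", False),
    ("car_noise", False), ("car_noise", False), ("car_noise", True),
]

by_slice = defaultdict(lambda: [0, 0])  # tag -> [correct, total]
for tag, correct in results:
    by_slice[tag][0] += int(correct)
    by_slice[tag][1] += 1

for tag, (correct, total) in by_slice.items():
    print(f"{tag}: accuracy {correct / total:.0%} on {total} examples")
```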
What about using synthetic data, is that often a good solution?
Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
Do you mean that synthetic data would allow you to try the model on more data sets?
Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
—Andrew Ng
Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
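Targeted augmentation, the simpler tool Ng mentions first, can be as plain as generating extra copies only for the class that error analysis flagged. A sketch assuming the Pillow imaging library and placeholder folder names:

```python
# Targeted data augmentation sketch: augment only the weak class
# (here, "pit_mark") instead of the whole data set. Paths are placeholders.
import glob
import random

from PIL import Image, ImageEnhance

for path in glob.glob("train/pit_mark/*.png"):   # only the flagged class
    img = Image.open(path)
    for i in range(4):                           # 4 augmented copies each
        aug = img.rotate(random.uniform(-10, 10))
        aug = ImageEnhance.Brightness(aug).enhance(random.uniform(0.8, 1.2))
        aug.save(path.replace(".png", f"_aug{i}.png"))
```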
To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?
Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.
One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.
How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?
Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?
So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.
Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.
Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?
Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.
This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”