Crypto Contagion Is Spreading, Fast (www.wired.com, 25 Nov 2022)
The collapse of FTX has set off a chain reaction that threatens to topple one of crypto’s oldest and most respected institutions.
Alma mater: University of Maryland in College Park
Barth, an IEEE Life Fellow, conducted pioneering work in analyzing the effects of cosmic rays and solar radiation on spacecraft observatories. Her tools and techniques are still used today. She also helped develop science requirements for NASA’s Living With a Star program, which studies the sun, magnetospheres, and planetary systems.
For her work, Barth was honored with this year’s IEEE Marie Sklodowska-Curie Award for “leadership of and contributions to the advancement of the design, building, deployment, and operation of capable, robust space systems.”
“I still tear up just thinking about it,” Barth says. “Receiving this award is humbling. Everyone at IEEE and Goddard who I worked with owns a piece of this award.”
From co-op hire to chief of NASA’s EE division
Barth initially attended the University of Michigan in Ann Arbor to pursue a degree in biology, but she soon realized it wasn’t a good fit for her. She transferred to the University of Maryland in College Park and changed her major to applied mathematics.
She was accepted for a co-op position in 1978 at the Goddard center, which is about 9 kilometers from the university. Co-op jobs allow students to work at a company and gain experience while pursuing their degree.
“I was excited about using my analysis and math skills to enable new science at Goddard,” she says. She conducted research on radiation environments and their effects on electronic systems.
Goddard hired her after she graduated as a radiation and hardness assurance engineer. She helped ensure that the electronics and materials in space systems would perform as designed after being exposed to radiation in space.
Because of her expertise in space radiation, George Withbroe, director of the NASA Solar-Terrestrial Physics program (now its Heliophysics Division), asked her in 1999 to help write a funding proposal for a program he wanted to launch—which became Living With a Star. It received US $2 billion from the U.S. Congress and launched in 2001.
During her 12 years with the program, Barth helped write the architecture document, which she says became a seminal publication for the field of heliophysics (the study of the sun and how it influences space). The document outlines the program’s goals and objectives.
In 2001 she was selected to be project manager for a NASA test bed that aimed to understand how spacecraft are affected by their environment. The test bed, which collected data from space to predict how radiation might impact NASA missions, successfully completed its mission in 2020.
Barth reached the next rung on her career ladder in 2002, when she became one of the first female associate branch heads of engineering at Goddard. At the space center’s Flight Data Systems and Radiation Effects Branch, she led a team of engineers who designed flight computers and storage systems. Although it was a steep learning curve for her, she says, she enjoyed it. Three years later, she was heading the branch.
She got another promotion, in 2010, to chief of the electrical engineering division. As the Goddard Engineering Directorate’s first female division chief, she led a team of 270 employees who designed, built, and tested electronics and electrical systems for NASA instruments and spacecraft.
Barth (left) and Moira Stanton at the 1997 RADiation and its Effects on Components and Systems Conference, held in Cannes, France. Barth and Stanton coauthored a poster paper and received the outstanding poster paper award. Janet Barth
Working on the James Webb Space Telescope
Throughout her career, Barth was involved in the development of the Webb space telescope. Whenever she thought that she was done with the massive project, she says with a laugh, her path would “intersect with Webb again.”
She first encountered the Webb project in the late 1990s, when she was asked to be on the initial study team for the telescope.
She wrote its space-environment specifications. After they were published in 1998, however, the team realized that there were several complex problems to solve with the telescope’s detectors. The Goddard team supported Matt Greenhouse, John C. Mather, and other engineers as they worked through the tricky issues. Greenhouse is a project scientist for the telescope’s science instrument payload. Mather won the 2006 Nobel Prize in Physics for discoveries supporting the Big Bang model.
The Webb’s detectors absorb photons—light from far-away galaxies, stars, and planets—and convert them into electronic voltages. Barth and her team worked with Greenhouse and Mather to verify that the detectors would work while exposed to the radiation environment at the L2 Lagrangian point, one of the positions in space where human-sent objects tend to stay put.
Years later, when Barth was heading the Flight Data Systems and Radiation Effects branch, she oversaw the development of the telescope’s instrument command and data handling systems. Because of her important role, Barth’s name was written on the telescope’s instrument ICDH flight box.
When she became chief of Goddard’s electrical engineering division, she was assigned to the technical review panel for the telescope.
“At that point,” she says, “we focused on the mechanics of deployment and the risks that came with not being able to fully test it in the environment it would be launched and deployed in.”
She served on that panel until she retired. In 2019, five years after retiring, she joined the Miller Engineering and Research Corp. advisory board. The company, based in Pasadena, Md., manufactures parts for aerospace and aviation organizations.
“I really like the ethics of the company. They service science missions and crewed missions,” Barth says. “I went back to my roots, and that’s been really rewarding.”
The best things about being an IEEE member
Barth and her husband, Douglas, who is also an engineer, joined IEEE in 1989. She says they enjoy belonging to a “unique peer group.” She especially likes attending IEEE conferences, having access to journals, and being able to take continuing education courses and workshops, she says.
“I stay up to date on the advancements in science and engineering,” she says, “and going to conferences keeps me inspired and motivated in what I do.” The networking opportunities are “terrific,” she adds, and she’s been able to meet people from just about all engineering industries.
There have been vigorous debates in the United States and elsewhere over whether electric grids can support EVs at scale. The answer is a nuanced “perhaps.” It depends on several factors: the speed of grid-component modernization; the volume of EV sales, where they occur, and when; what kinds of EV charging are being done, and when; regulatory and political decisions; and, critically, economics.
The city of Palo Alto, Calif., is a microcosm of many of the issues involved. Palo Alto boasts the highest adoption rate of EVs in the United States: In 2020, one in six of the town’s 25,000 households owned an EV. Of the 52,000 vehicles registered in the city, 4,500 are EVs, and on workdays commuters drive another 3,000 to 5,000 EVs into the city. Residents have access to about 1,000 charging ports spread over 277 public charging stations, with another 3,500 or so charging ports located at residences.
Palo Alto’s government has set a very aggressive Sustainability and Climate Action Plan, with a goal of reducing the city’s greenhouse gas emissions to 80 percent below the 1990 level by 2030; the state as a whole aims to reach the same reduction by 2050. To realize this reduction, 80 percent of the vehicles registered in (and commuting into) the city, around 100,000 in total, must be EVs within the next eight years. The number of charging ports will need to grow to an estimated 6,000 to 12,000 public ports (some 300 of them DC fast chargers) and 18,000 to 26,000 residential ports, most of them L2-type charging ports.
“There are places even today where we can’t even take one more heat pump without having to rebuild the portion of the system. Or we can’t even have one EV charger go in.” —Tomm Marshall
To meet Palo Alto’s 2030 emission-reduction goals, the city, which owns and operates the electric utility, would like to increase significantly the amount of local renewable energy being used for electricity generation (think rooftop solar) including the ability to use EVs as distributed-energy resources (vehicle-to-grid (V2G) connections). The city has provided incentives for the purchase of both EVs and charging ports, the installation of heat-pump water heaters, and the installation of solar and battery-storage systems.
There are, however, a few potholes that need to be filled to meet the city’s 2030 emission objectives. At a February meeting of Palo Alto’s Utilities Advisory Commission, Tomm Marshall, assistant director of utilities, stated, “There are places even today [in the city] where we can’t even take one more heat pump without having to rebuild the portion of the [electrical distribution] system. Or we can’t even have one EV charger go in.”
Peak loading is the primary concern. Palo Alto’s electrical-distribution system was built for the electric loads of the 1950s and 1960s, when household heating, water heating, and cooking ran mainly on natural gas. The distribution system does not have the capacity to support EVs and all-electric appliances at scale, Marshall suggested. Further, the system was designed for one-way power flow, not for distributed-renewable-energy devices sending power back into the system.
A big problem is the 3,150 distribution transformers in the city, Marshall indicated. A 2020 electrification-impact study found that without improvements, more than 95 percent of residential transformers would be overloaded if Palo Alto hits its EV and electrical-appliance targets by 2030.
Palo Alto’s electrical-distribution system needs a complete upgrade to allow the utility to balance peak loads.
For instance, Marshall stated, it is not unusual for a 37.5-kilovolt-ampere transformer to support 15 households, because the distribution system was originally designed for each household to draw 2 kilowatts of power. Converting a gas appliance to a heat pump adds a load of 4 to 6 kW, while an L2 charger for EVs draws 12 to 14 kW. A cluster of uncoordinated L2 charging could create a peak load high enough to overload or blow out a transformer, especially one near the end of its service life, as many already are. And without smart meters (that is, Advanced Metering Infrastructure, or AMI, which Palo Alto will introduce in 2024), the utility has little to no insight into household peak loads.
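The arithmetic behind Marshall’s example can be sketched in a few lines of Python. The per-load figures are the ranges quoted above; the midpoints, the unity power factor (kVA treated as kW), and the worst-case assumption that every load peaks simultaneously are illustrative assumptions, not utility data.

```python
# Worst-case coincident peak on a shared neighborhood transformer.
# Figures from the article; midpoints and simultaneity are assumptions.

TRANSFORMER_KVA = 37.5   # rating, shared by ~15 households (kW at unity PF)
HOUSEHOLDS = 15

BASELINE_KW = 2.0        # original per-household design load
HEAT_PUMP_KW = 5.0       # midpoint of the quoted 4-6 kW
L2_CHARGER_KW = 13.0     # midpoint of the quoted 12-14 kW

def peak_load_kw(n_heat_pumps: int, n_l2_chargers: int) -> float:
    """Peak load if every household plus the new loads draw at once."""
    return (HOUSEHOLDS * BASELINE_KW
            + n_heat_pumps * HEAT_PUMP_KW
            + n_l2_chargers * L2_CHARGER_KW)

# Original design: 15 x 2 kW = 30 kW, under the 37.5-kVA rating.
print(peak_load_kw(0, 0))   # 30.0

# A single uncoordinated L2 charger already exceeds the rating.
print(peak_load_kw(0, 1))   # 43.0
```

Under these assumptions, one L2 charger on a fully loaded legacy transformer is enough to push it past its rating, which matches Marshall’s “we can’t even have one EV charger go in.”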
Palo Alto’s electrical-distribution system needs a complete upgrade to allow the utility to balance peak loads, manage two-way power flows, install the requisite number of EV charging ports and electric appliances to support the city’s emission-reduction goals, and deliver power in a safe, reliable, sustainable, and cybersecure manner. The system also must be able to cope in a multihour-outage situation, where future electrical appliances and EV charging will commence all at once when power is restored, placing a heavy peak load on the distribution system.
A map of EV charging stations in the Palo Alto, Calif., area. Source: PlugShare.com
Until it can modernize its distribution network, Marshall conceded, the utility must continue to deal with angry and confused customers, who are encouraged by the city to invest in EVs, charging ports, and electric appliances, only to be told that they may not be accommodated anytime soon.
Policy runs up against engineering reality
The situation in Palo Alto is not unique. There are some 465 cities in the United States with populations between 50,000 and 100,000 residents, and another 315 that are larger; many face similar challenges. How many can really support a rapid influx of thousands of new EVs? Phoenix, for example, wants 280,000 EVs plying its streets by 2030, nearly seven times as many as it has today. Similar mismatches between climate-policy ambitions and an energy infrastructure incapable of supporting them will play out not only across the United States but elsewhere, in one form or another, over the next two decades as the conversion to EVs and electric appliances moves to scale.
As in Palo Alto, it will likely be blown transformers or constantly flickering lights that signal there is an EV charging-load issue. Professor Deepak Divan, the director of the Center for Distributed Energy at Georgia Tech, says his team found that in residential areas “multiple L2 chargers on one distribution transformer can reduce its life from an expected 30 to 40 years to 3 years.” Given that most of the millions of U.S. transformers are approaching the end of their useful lives, replacing transformers soon could be a major and costly headache for utilities, assuming they can get them.
Supplies of distribution transformers are low, and costs have skyrocketed, from $3,000 to $4,000 each to around $20,000. Supporting EVs may require larger, heavier transformers, which means many of the 180 million power poles on which they sit will need to be replaced to support the additional weight.
Exacerbating the transformer loading problem, Divan says, is that many utilities “have no visibility beyond the substation” into how and when power is being consumed. His team surveyed “twenty-nine utilities for detailed voltage data from their AMI systems, and no one had it.”
This situation is not true universally. Xcel Energy in Minnesota, for example, has already started to upgrade distribution transformers because of potential residential EV electrical-load issues. Xcel president Chris Clark told the Minneapolis Star Tribune that four or five families buying EVs noticeably affects the transformer load in a neighborhood, with a family buying an EV “adding another half of their house.”
Joyce Bodoh, director of energy solutions and clean energy for Virginia’s Rappahannock Electric Cooperative (REC), a utility distributor in central Virginia, says that “REC leadership is really, really supportive of electrification, energy efficiency, and electric transportation.” However, she adds, “all those things are not a magic wand. You can’t make all three things happen at the same time without a lot of forward thinking and planning.”
Total U.S. Energy Consumption
For nearly 50 years, Lawrence Livermore National Laboratory has been publishing a Sankey diagram of estimated U.S. energy consumption from various generation sources, as shown above. In 2021, the United States consumed 97.3 quadrillion British thermal units (quads) of energy, with the transportation sector using 26.9 quads, 90 percent of it from petroleum. As the transportation sector electrifies, electricity generation will need to grow to replace the energy that petroleum once provided to transportation, though only a reduced proportion of it, given the higher energy efficiency of EVs.
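As a rough illustration of that “reduced proportion,” the sketch below scales the petroleum share of transportation energy by an assumed drivetrain-efficiency advantage for EVs. The 26.9-quad and 90 percent figures come from the LLNL estimate cited above; the roughly 3x efficiency ratio is an assumption used only for illustration, not a figure from the article.

```python
# Back-of-the-envelope estimate of the extra electricity generation
# needed if transportation fully electrifies. The efficiency ratio
# is an illustrative assumption, not sourced from the article.

TRANSPORT_QUADS = 26.9        # 2021 U.S. transportation energy (quads)
PETROLEUM_SHARE = 0.90        # share of that energy from petroleum
EV_EFFICIENCY_RATIO = 3.0     # assumed EV vs. internal-combustion efficiency

petroleum_quads = TRANSPORT_QUADS * PETROLEUM_SHARE
electricity_quads = petroleum_quads / EV_EFFICIENCY_RATIO

print(round(petroleum_quads, 1))    # 24.2 quads now supplied by petroleum
print(round(electricity_quads, 1))  # ~8.1 quads of new electricity needed
```

Even under this generous efficiency assumption, roughly 8 quads of new generation, close to a tenth of total current U.S. consumption, would have to come from somewhere.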
To achieve the desired reduction in greenhouse gases, renewable-energy generation of electricity will need to replace fossil fuels. The improvements and replacements to the grid’s 8,000 power-generation units, 600,000 circuit miles of AC transmission lines (240,000 circuit miles of them high-voltage lines), and 70,000 substations needed to support increased renewable energy and battery storage are estimated at more than $2.5 trillion in capital, operations, and maintenance costs by 2035.
As part of this planning effort, Bodoh says that REC has actively been performing “an engineering study that looked at line loss across our systems as well as our transformers, and said, ‘If this transformer got one L2 charger, what would happen? If it got two L2s, what would happen, and so on?’” She adds that REC “is trying to do its due diligence, so we don’t get surprised when a cul-de-sac gets a bunch of L2 chargers and there’s a power outage.”
REC also has hourly energy-use data, from which it can infer where L2 chargers may be in use, based on the distinctive load profile of EV charging. However, Bodoh says, REC does not just want to know where the L2 chargers are; it also wants to encourage its EV-owning customers to charge at nonpeak hours, that is, 9 p.m. to 5 a.m. and 10 a.m. to 2 p.m. REC recently set up an EV charging pilot program for 200 EV owners that provides a $7 monthly credit for off-peak charging. Whether REC or other utilities can convince enough owners of L2 chargers to consistently charge during off-peak hours remains to be seen.
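REC’s two stated off-peak windows are simple to encode. The function below is a minimal sketch of that schedule; the 0-23 hour convention and the exact boundary inclusivity are assumptions, and the $7 credit logic is reduced to a boolean.

```python
# Minimal encoding of REC's stated off-peak charging windows:
# 9 p.m.-5 a.m. (overnight) and 10 a.m.-2 p.m. (midday).
# Boundary inclusivity is an assumption for illustration.

def is_off_peak(hour: int) -> bool:
    """True if the given hour (0-23) falls in an off-peak window."""
    return hour >= 21 or hour < 5 or 10 <= hour < 14

print(is_off_peak(23))  # True  (overnight window)
print(is_off_peak(12))  # True  (midday window)
print(is_off_peak(18))  # False (evening peak)
```

A smart charger or utility program would apply a check like this, or a time-of-use rate derived from it, to decide when to start or credit a charging session.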
“Multiple L2 chargers on one distribution transformer can reduce its life from an expected 30 to 40 years to 3 years.” —Deepak Divan
Even if EV owner behavior changes, off-peak charging may not fully solve the peak-load problem once EV ownership really ramps up. “Transformers are passively cooled devices,” specifically designed to be cooled at night, says Divan. “When you change the (power) consumption profile by adding several EVs using L2 chargers at night, that transformer is running hot.” The risk of transformer failure from uncoordinated overnight charging may be especially aggravated during times of summer heat waves, an issue that concerns Palo Alto’s utility managers.
There are technical solutions available to help spread EV charging peak loads, but utilities will have to make the investments in better transformers and smart metering systems, as well as get regulatory permission to change electricity-rate structures to encourage off-peak charging. Vehicle-to-grid (V2G), which allows an EV to serve as a storage device to smooth out grid loads, may be another solution, but for most utilities in the United States, this is a long-term option. Numerous issues need to be addressed, such as the updating of millions of household electrical panels and smart meters to accommodate V2G, the creation of agreed-upon national technical standards for the information exchange needed between EVs and local utilities, the development of V2G regulatory policies, and residential and commercial business models, including fair compensation for utilizing an EV’s stored energy.
As energy expert Chris Nelder noted at a National Academy EV workshop, “vehicle-to-grid is not really a thing, at least not yet. I don’t expect it to be for quite some time until we solve a lot of problems at various utility commissions, state by state, rate by rate.”
In the next article in the series, we will look at the complexities of creating an EV charging infrastructure.
Source: spectrum.ieee.org
Rise in cost of essentials will hit poorer households, already struggling with higher energy bills, hardest
UK food price inflation hit a new high of 12.4% in November as the price of basics such as eggs, dairy products and coffee shot up.
Fresh foods led the increase in prices, with inflation rising to 14.3% from 13.3% in October, and rises are expected to continue into next year, according to the latest data from the British Retail Consortium trade body, which represents most big retailers, and the market research firm NielsenIQ.
Source: www.theguardian.com
Thousands of staff, including 999 call handlers and paramedics, to take strike action over pay and staffing levels
Ambulance workers across England intend to strike before Christmas after voting in favour of industrial action over pay and staffing levels.
Unison, the UK’s biggest trade union, announced the results of its month-long NHS strike ballot and said thousands of 999 call handlers, ambulance technicians, paramedics and their colleagues working for ambulance services in the north-east, north-west, London, Yorkshire and the south-west are to take industrial action.
Source: www.theguardian.com
Tuesday Morning Corp. said late Tuesday it was readying a 1-for-30 reverse stock split, effective Wednesday. Tuesday Morning stock will continue to trade on the Nasdaq and will begin trading on a split-adjusted basis at market open on Thursday. "The reverse stock split is primarily intended to enable Tuesday Morning to regain compliance with the $1.00 per share minimum bid price required for continued listing on Nasdaq," the company said. As a result of the reverse stock split, every 30 shares of Tuesday Morning's stock will be combined into one share. Shares of Tuesday Morning dropped more than 12% in the extended session Tuesday after ending the regular trading day up 18%.
Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.
Source: www.marketwatch.com
Spencer Platt/Getty Images

One of the bond market’s most reliable indicators of impending U.S. recessions is pointed in a pretty pessimistic direction right now, but contains at least one optimistic message: The Federal Reserve will remain committed to its battle on inflation and, some analysts say, should ultimately win it.

The spread between 2-year BX:TMUBMUSD02Y and 10-year BX:TMUBMUSD10Y Treasury yields is stuck at one of its most negative levels since 1981-1982 after shrinking to as little as minus 78.5 basis points on Tuesday. Over the past week, it’s even approached minus 80 basis points. The more deeply negative the spread becomes, the more worrisome a signal it’s emitting about the severity of the next economic downturn.

Read: Bond-market recession gauge hits 41-year milestone as global growth fears mount

But there’s more than one way to read this measure: The spread also reflects the degree to which the bond market still has confidence that policy makers will do what’s needed to bring down inflation running near its highest levels of the past four decades. The policy-sensitive 2-year Treasury yield finished the New York session at 4.47% on Tuesday, and is up by 370.7 basis points since January, as traders factor in further Fed interest rate hikes. Meanwhile, the 10-year yield was at 3.75% — roughly 72 basis points below the 2-year yield, resulting in a deeply negative spread — and at a level that indicates traders aren’t factoring in a whole lot of additional premium based on the possibility of higher, long-term inflation. Higher and stickier yields at the front end of the curve are “a sign of Fed credibility,” with the central bank seen committed to keeping monetary policy restrictive for longer to rein in inflation, said Subadra Rajappa, head of U.S. rates strategy for Société Générale.
“Unfortunately, tighter policy will lead to demand destruction and lower growth, which is keeping long-end yields depressed.”

In theory, lower economic growth equates to lower inflation, which helps the Fed do its job of controlling prices. The million-dollar question in financial markets, though, is just how quickly inflation will come down to more normal levels closer to 2%. History shows that Fed rate hikes take about 1.5 to 2 years to reach their maximum impact on inflation, according to famed economist Milton Friedman, who was cited in an August blog post by Atlanta Fed researchers.

“The yield curve will likely remain inverted until there is a clear sign of a policy pivot from the Fed,” Rajappa wrote in an email to MarketWatch on Tuesday. Asked whether the deeply inverted curve indicates central bankers will ultimately be successful in curbing inflation, she said, “It is not a question of if, but when. While inflation should steadily decline over the upcoming year, strong employment and sticky services inflation might delay the outcome.”

Ordinarily, the Treasury yield curve slopes upward, not downward, when the bond market sees brighter growth prospects ahead. In addition, investors demand more compensation to hold a note or bond for a longer period of time, which also leads to an upward-sloping Treasury curve. That’s part of the reason why inversions grab so much attention. And at the moment, multiple parts of the bond market, not just the 2s/10s spread, are inverted.

For Ben Jeffery, a rates strategist at BMO Capital Markets, a deeply inverted curve “shows that the Fed has moved aggressively and will keep rates on hold in restrictive territory despite a quickly dimming economic outlook.” The 2s/10s spread hasn’t been this far below zero since the early years of Ronald Reagan’s presidency.
In October 1981, when the 2s/10s spread shrank to as little as minus 96.8 basis points, the annual headline inflation rate from the consumer-price index was above 10%, the fed-funds rate was around 19% under then-Federal Reserve Chairman Paul Volcker, and the U.S. economy was in the midst of one of its worst downturns since the Great Depression. Volcker’s bold moves paid off, though: The annual headline CPI rate dropped below 10% the following month and continued to fall more steeply in the months and years that followed. Inflation didn’t rear its head again until last year and this year, when the annual headline CPI rate went above 8% for seven straight months before dipping to 7.7% in October.

On Tuesday, Treasury yields were little changed to higher as traders assessed more hawkish rhetoric from Fed policy makers such as St. Louis Fed President James Bullard, who said on Monday that the central bank will likely need to keep its benchmark interest rate above 5% for most of next year and into 2024 to cool inflation. Right now, “a deeply inverted yield curve signals the Fed is somewhat overtightening, but the impact on inflation may take some time to come through,” said Ben Emons, a senior portfolio manager and the head of fixed income/macro strategy at NewEdge Wealth in New York.
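The spread arithmetic in the passage is easy to verify from the closing yields quoted above:

```python
# Check the quoted 2s/10s spread from the closing yields in the text.
# Yields are in percent; the spread is conventionally quoted in basis
# points (1 percentage point = 100 basis points).

two_year_yield = 4.47   # 2-year Treasury close, in percent
ten_year_yield = 3.75   # 10-year Treasury close, in percent

spread_bp = round((ten_year_yield - two_year_yield) * 100)
print(spread_bp)  # -72, i.e. an inverted (negative) 2s/10s spread
```

A negative value confirms the inversion the article describes: the short end yielding more than the long end.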
Source: www.marketwatch.com
Gold futures finished higher on Tuesday, with prices recouping most of what they lost a day earlier. The factor that “matters the most” is the Federal Reserve’s stance toward its monetary policy, said Naeem Aslam, chief market analyst at AvaTrade. A strong U.S. jobs number, due out Friday, could make the Fed think about its monetary policy, he said. The central bank “could adopt an ultra-hawkish monetary policy like before,” but so far “the hope…is that the Fed will slow the roll in terms of increasing the interest rate.” Gold for February delivery GCG23 rose $8.40, or 0.5%, to settle at $1,763.70 an ounce on Comex after losing 0.8% on Monday.
Source: www.marketwatch.com
Mounting evidence of dirty tricks against prospective MPs can’t be dismissed as leftwing sour grapes. We were promised a ‘broad church’
Britain will almost certainly have a Labour government in two years’ time: you have the Tories’ unprecedented self-immolation to thank for that. Debating, then, how Rishi Sunak’s successors will govern is a democratic imperative. To some of Keir Starmer’s more zealous supporters, scrutinising the opposition is an act of treachery that simply makes a Tory government more likely. Welcome to “Schrödinger’s left”: where the left of the party is simultaneously so irrelevant and toxic that it must be marginalised, but so powerful it can help determine the result of general elections.
In his pitch for the Labour leadership, Starmer promised that under his watch the party would be a “broad church”, and that he would restore trust in Labour through “unity”. To underline that this wasn’t just empty rhetoric, he said that the selection of Labour candidates “needs to be more democratic and we should end NEC impositions of candidates. Local party members should select their candidates for every election.” To paraphrase Karl Marx, all that is a Starmer promise melts into air: but this particular issue has political consequences that go far beyond internal Labour politics.
Source: www.theguardian.com
Boston Scientific Corp. said Tuesday it has agreed to acquire Apollo Endosurgery Inc. for $10 a share, or about $615 million in cash. The news sent Apollo's stock, which closed Monday at $6, up 61% in premarket trade. The company has a portfolio of devices used in endoluminal surgery procedures to close gastrointestinal defects, manage gastrointestinal complications and aid in weight loss for patients suffering from obesity, and is expected to generate net sales of about $76 million in 2022. "Endoluminal surgery is an emerging field and a core focus for our Endoscopy business," said Mike Jones, senior vice president and president, Endoscopy, Boston Scientific. The deal is expected to close in the first half of 2023. The deal is expected to be immaterial to Boston Scientific's per-share earnings in 2023, but to boost them after that. Boston Scientific shares were not yet active premarket, but have gained 3% in the year to date, while the S&P 500 has fallen 17%.
Source: www.marketwatch.com
Jon Ferry sells old bones used in the teaching of medicine. But the medical bone trade has a murky history of exploitation
In a small, light-filled Bushwick studio space, a brown box rests on a wooden coffee table. Inside is a human head. “Wanna start?” asks Jon Pichaya Ferry, pulling a box cutter out of the pocket of his black skinny jeans.
Inside is a lumpy form wrapped in thin aqua foam, which he tears off to reveal a skull’s mandible. Out comes the rest of the skull; he fits the two parts together and places it on the lid of a coffin in the corner of the room, next to a can of Red Bull.
Source: www.theguardian.com
Elon Musk and the hardcore cult of Diet Coke (www.washingtonpost.com, 28 Nov 2022)
Twitter owner Elon Musk shared a photo of his bedside table littered with cans of Diet Coke, reminding us of the power the beverage has.
A Peek Inside the FBI's Unprecedented January 6 Geofence Dragnet (www.wired.com, 28 Nov 2022)
Google provided investigators with location data for more than 5,000 devices as part of the federal investigation into the attack on the US Capitol.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND
Enjoy today’s videos!
Happy Thanksgiving, for those who celebrate it. Now spend 10 minutes watching a telepresence robot assemble a turkey sandwich.
Ayato Kanada, an assistant professor at Kyushu University, in Japan, wrote in to share “the world’s simplest omnidirectional mobile robot.”
We propose a palm-sized omnidirectional mobile robot with two torus wheels. A single torus wheel is made of an elastic elongated coil spring whose two ends are connected to each other, and it is driven by a piezoelectric actuator (stator) that can generate 2-degrees-of-freedom (axial and angular) motions. The stator converts its thrust force and torque into longitudinal and meridian motions of the torus wheel, respectively, making the torus work as an omnidirectional wheel on a plane.
This work, entitled “Virtually turning robotic manipulators into worn devices: opening new horizons for wearable assistive robotics,” proposes a novel hybrid system using a virtually worn robotic arm in augmented reality, and a real robotic manipulator servoed on such a virtual representation. We basically aim at creating the illusion of wearing a robotic system while its weight is fully supported. We believe that this approach could offer a solution to the critical challenge of weight and discomfort caused by robotic sensorimotor extensions—such as supernumerary robotic limbs (SRL), prostheses, or handheld tools—and open new horizons for the development of wearable robotics.
Engineers at Georgia Tech are the first to study the mechanics of springtails, which leap in the water to avoid predators. The researchers learned how the tiny hexapods control their jumps, self-right in midair, and land on their feet in the blink of an eye. The team used the findings to build penny-size jumping robots.
The European Space Agency (ESA) and the European Space Resources Innovation Centre (ESRIC) have asked European space industries and research institutions to develop innovative technologies for the exploration of resources on the moon in the framework of the ESA-ESRIC Space Resources Challenge. As part of the challenge, teams of engineers have developed vehicles capable of prospecting for resources in a test-bed simulating the moon’s shaded polar regions. From 5 to 9 September 2022, the final of the ESA-ESRIC Space Resources Challenge took place at the Rockhal in Esch-sur-Alzette. On this occasion, lunar rover prototypes competed on a 1,800-square-meter “lunar” terrain. The winning team will have the opportunity to have their technology implemented on the moon.
We present the tensegrity aerial vehicle, a design of collision-resilient rotor robots with icosahedron tensegrity structures. With collision resilience and reorientation ability, the tensegrity aerial vehicles can operate in cluttered environments without complex collision-avoidance strategies. These capabilities are validated by a test of an experimental tensegrity aerial vehicle operating with only onboard inertial sensors in a previously unknown forest.
The robotics research group Brubotics and the polymer-science and physical-chemistry group FYSC of the University of Brussels have together developed self-healing materials that can be scratched, punctured, or completely cut through and heal themselves back together, with the required heat, or even at room temperature.
Researchers at MIT’s Center for Bits and Atoms have made significant progress toward creating robots that could build nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots.
Researchers from North Carolina State University have recently developed a fast and efficient soft robotic swimmer whose motions resemble the human butterfly stroke. It achieves a high average swimming speed of 3.74 body lengths per second, close to five times as fast as the fastest comparable soft swimmers, with high power efficiency and a low energy cost.
To facilitate sensing and physical interaction in remote and/or constrained environments, high-extension, lightweight robot manipulators are easier to transport and can reach substantially further than traditional serial-chain manipulators. We propose a novel planar 3-degrees-of-freedom manipulator that achieves low weight and high extension through the use of a pair of spooling bistable tapes, commonly used in self-retracting tape measures, which are pinched together to form a reconfigurable revolute joint.
Robotics professor Henny Admoni answers the Internet’s burning questions about robots! How do you program a personality? Can robots pick up a single M&M? Why do we keep making humanoid robots? What is Elon Musk’s goal for the Tesla Optimus robot? Will robots take over my job writing video descriptions...I mean, um, all our jobs? Henny answers all these questions and much more.
This GRASP on Robotics talk is from Julie Adams at Oregon State University, on “Towards Adaptive Human-Robot Teams: Workload Estimation.”
The ability for robots, be it a single robot, multiple robots, or a robot swarm, to adapt to the humans with which they are teamed requires algorithms that allow robots to detect human performance in real time. The multidimensional workload algorithm incorporates physiological metrics to estimate overall workload and its components (cognitive, speech, auditory, visual, and physical). The algorithm is sensitive to changes in a human’s individual workload components and overall workload across domains, human-robot teaming relationships (supervisory, peer-based), and individual differences. The algorithm has also been demonstrated to detect shifts in workload in real time in order to adapt the robot’s interaction with the human and autonomously change task responsibilities when the human’s workload is over- or underloaded. Recently, the algorithm was used to analyze post hoc the resulting workload for a single human deploying a heterogeneous robot swarm in an urban environment. Current efforts are focusing on predicting the human’s future workload, recognizing the human’s current tasks, and estimating workload for previously unseen tasks.
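As a toy illustration of the fusion and adaptation steps described in the talk abstract, here is a minimal sketch. The five workload components match the abstract, but the equal weights, the thresholds, and the adaptation rules are entirely hypothetical, not details of the actual multidimensional workload algorithm:

```python
# Hypothetical sketch of fusing per-component workload estimates into an
# overall score and triggering task reallocation. Weights and thresholds
# are placeholders, not values from the algorithm described above.

COMPONENTS = ("cognitive", "speech", "auditory", "visual", "physical")

def overall_workload(scores, weights=None):
    """scores: dict mapping each component to a normalized estimate in [0, 1]."""
    if weights is None:
        weights = {c: 1.0 / len(COMPONENTS) for c in COMPONENTS}
    return sum(weights[c] * scores[c] for c in COMPONENTS)

def adapt(overall, low=0.3, high=0.8):
    """Decide whether the robot should take on or hand back tasks."""
    if overall > high:
        return "shift tasks to robot"   # human overloaded
    if overall < low:
        return "shift tasks to human"   # human underloaded
    return "no change"

scores = {"cognitive": 0.95, "speech": 0.8, "auditory": 0.85,
          "visual": 0.9, "physical": 0.7}
print(adapt(overall_workload(scores)))
```

The real algorithm estimates the component scores from physiological metrics in real time; this sketch only shows the shape of the downstream decision.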
China again holds firm on ‘zero covid,’ despite the worsening toll Tue, 29 Nov 2022 17:00:18 EST The decision to maintain a policy of border controls, lockdowns and travel restrictions came after two deaths linked to the measures reignited public anger. (source: www.washingtonpost.com)
This is a guest post in recognition of the 75th anniversary of the invention of the transistor. It is adapted from an essay in the July 2022 IEEE Electron Device Society Newsletter. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
On the 75th anniversary of the invention of the transistor, a device to which I have devoted my entire career, I’d like to answer two questions: Does the world need better transistors? And if so, what will they be like?
I would argue that, yes, we are going to need new transistors, and I think we have some hints today of what they will be like. Whether we’ll have the will and the economic ability to make them is the question.
I believe the transistor is and will remain key to grappling with the impacts of global warming. With its potential for societal, economic, and personal upheaval, climate change calls for tools that give us humans orders-of-magnitude more capability.
Semiconductors can raise the abilities of humanity like no other technology. Almost by definition, all technologies increase human abilities. But for most of them, natural-resource and energy constraints make orders-of-magnitude improvements questionable. Transistor-enabled technology is a unique exception, for the following reasons.
As transistors improve, they enable new abilities such as computing and high-speed communication, the Internet, smartphones, memory and storage, robotics, artificial intelligence, and other things no one has thought of yet.
These abilities have wide applications, and they transform all technologies, industries, and sciences. Semiconductor technology is also not nearly as limited in growth by its material and energy usage as other technologies. ICs use relatively small amounts of material, and they keep being made smaller; the less material they use, the faster, more energy efficient, and more capable they become.
Theoretically, the energy required for information processing can still be reduced to less than one-thousandth of what is required today. Although we do not yet know exactly how to approach such theoretical efficiency, we know that increasing energy efficiency a thousandfold would not violate physical laws. In contrast, the energy efficiencies of most other technologies, such as motors and lighting, are already at 30 to 80 percent of their theoretical limits.
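The thousandfold claim can be sanity-checked against the Landauer limit, the thermodynamic minimum energy to erase one bit at a given temperature. The present-day per-switch energy below is my own order-of-magnitude placeholder, not a figure from the essay:

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
landauer = k_B * T * math.log(2)   # about 2.9e-21 J per bit

# Rough present-day CMOS switching energy (illustrative assumption,
# order of magnitude only).
today = 1e-16   # J per switching event

headroom = today / landauer
print(f"Landauer limit: {landauer:.2e} J per bit")
print(f"Headroom factor: {headroom:.0f}x")
```

Under these assumptions the headroom is tens of thousands of times, so a thousandfold improvement indeed stays well clear of physical law.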
Transistors: past, present, and future
How we’ll continue to improve transistor technology is relatively clear in the short term, but it gets murkier the farther out you go from today. In the near term, you can glimpse the transistor’s future by looking at its recent past.
The basic planar (2D) MOSFET structure remained unchanged from 1960 until around 2010, when it became impossible to further increase transistor density and decrease the device’s power consumption. My lab at the University of California, Berkeley, saw that point coming more than a decade earlier. We reported the invention of the FinFET, the planar transistor’s successor, in 1999. The FinFET, the first 3D MOSFET, changed the flat and wide transistor structure to a tall and narrow one. The benefit is better performance in a smaller footprint, much like the benefit of multistory buildings over single-story ones in a crowded city.
The FinFET is also what’s called a thin-body MOSFET, a concept that continues to guide the development of new devices. It arose from the insight that current will not leak through a transistor within several nanometers of the silicon surface because the surface potential there is well controlled by the gate voltage. FinFETs take this thin-body concept to heart. The device’s body is the vertical silicon fin, which is covered by oxide insulator and gate metal, leaving no silicon outside the range of strong gate control. FinFETs reduced leakage current by orders of magnitude and lowered transistor operating voltage. The design also pointed toward the path for further improvement: reducing the body thickness even more.
The fin of the FinFET has become thinner and taller with each new technology node. But this progress has now become too difficult to maintain. So industry is adopting a new 3D thin-body CMOS structure, called gate-all-around (GAA). Here, a stack of semiconductor ribbons makes up the thin body.
Each evolution of the MOSFET structure has been aimed at producing better control over charge in the silicon by the gate [pink]. Dielectric [yellow] prevents charge from moving from the gate into the silicon body [blue].
The 3D thin-body trend will continue from these 3D transistors to 3D-stacked transistors, 3D monolithic circuits, and multichip packaging. In some cases, this 3D trend has already reached great heights. For instance, the regularity of the charge-trap memory-transistor array allowed NAND flash memory to be the first IC to transition from 2D circuits to 3D circuits. Since the first report of 3D NAND by Toshiba in 2007, the number of stacked layers has grown from 4 to beyond 200.
Monolithic 3D logic ICs will likely start modestly, with stacking the two transistors of a CMOS inverter to reduce all logic gates’ footprints [see “3D-Stacked CMOS Takes Moore’s Law to New Heights”]. But the number of stacks may grow. Other paths to 3D ICs may employ the transfer or deposition of additional layers of semiconductor films, such as silicon, silicon germanium, or indium gallium arsenide onto a silicon wafer.
The thin-body trend might meet its ultimate endpoint in 2D semiconductors, whose thickness is measured in atoms. Molybdenum disulfide molecules, for example, are both naturally thin and relatively large, forming a 2D semiconductor that may be no more than three atoms thick yet have very good semiconductor properties. In 2016, engineers in California and Texas used a film of the 2D-semiconductor molecule molybdenum disulfide and a carbon nanotube to demonstrate a MOSFET with a critical dimension: a gate length just 1 nanometer across. Even with a gate as short as 1 nm, the transistor leakage current was only 10 nanoamperes per millimeter, comparable with today’s best production transistors.
“The progress of transistor technology has not been even or smooth.”
One can imagine that in the distant future, the entire transistor may be prefabricated as a single molecule. These prefabricated building blocks might be brought to their precise locations in an IC through a process called directed-self-assembly (DSA). To understand DSA, it may be helpful to recall that a COVID virus uses its spikes to find and chemically dock itself onto an exact spot at the surface of particular human cells. In DSA, the docking spots, the “spikes,” and the transistor cargo are all carefully designed and manufactured. The initial docking spots may be created with lithography on a substrate, but additional docking spots may be brought in as cargo in subsequent steps. Some of the cargo may be removed by heat or other means if they are needed only during the fabrication process but not in the final product.
Besides making transistors smaller, we’ll have to keep reducing their power consumption. Here we could see an order-of-magnitude reduction through the use of what are called negative-capacitance field-effect transistors (NCFET). These require the insertion of a nanometer-thin layer of ferroelectric material, such as hafnium zirconium oxide, in the MOSFET’s gate stack. Because the ferroelectric contains its own internal electric field, it takes less energy to switch the device on or off. An additional advantage of the thin ferroelectric is the possible use of the ferroelectric’s capacity to store a bit as the state of its electric field, thereby integrating memory and computing in the same device.
The author [left] received the U.S. National Medal of Technology and Innovation from President Barack Obama [right] in 2016.
To some degree the devices I’ve described arose out of existing trends. But future transistors may have very different materials, structures, and operating mechanisms from those of today’s transistor. For example, the nanoelectromechanical switch is a return to the mechanical relays of decades past rather than an extension of the transistor. Rather than relying on the physics of semiconductors, it uses only metals, dielectrics, and the force between closely spaced conductors with different voltages applied to them.
All of these examples were demonstrated experimentally years ago. However, bringing them to production will require much more time and effort than previous breakthroughs in semiconductor technology.
Getting to the future
Will we be able to achieve these feats? Some lessons from the past indicate that we could.
The first lesson is that the progress of transistor technology has not been even or smooth. Around 1980, the rising power consumption per chip reached a painful level. The adoption of CMOS, replacing NMOS and bipolar technologies—and later, the gradual reduction of operation voltage from 5 volts to 1—gave the industry 30 years of more or less straightforward progress. But again, power became an issue. Between 2000 and 2010, the heat generated per square centimeter of IC was projected by thoughtful researchers to soon reach that of a nuclear-reactor core. The adoption of 3D thin-body FinFETs and multicore processor architectures averted the crisis and ushered in another period of relatively smooth progress.
The history of transistor technology may be described as climbing one mountain after another. Only when we got to the top of one were we able see the vista beyond and map a route to climb the next taller and steeper mountain.
The second lesson is that the core strength of the semiconductor industry—nanofabrication—is formidable. History proves that, given sufficient time and economic incentives, the industry has been able to turn any idea into reality, as long as that idea does not violate scientific laws.
But will the industry have sufficient time and economic incentives to continue climbing taller and steeper mountains and keep raising humanity’s abilities?
It’s a fair question. Even as the fab industry’s resources grow, the mountains of technology development grow even faster. A time may come when no one fab company can reach the top of the mountain to see the path ahead. What happens then?
The revenue of all semiconductor fabs (both independent and those, like Intel, that are integrated companies) is about one-third of the semiconductor industry revenue. But fabs make up just 2 percent of the combined revenues of the IT, telecommunications, and consumer-electronics industries that semiconductor technology enables. Yet the fab industry bears most of the growing burden of discovering, producing, and marketing new transistors and nanofabrication technologies. That needs to change.
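A quick illustrative calculation of the leverage those ratios imply; the absolute revenue figure is an arbitrary placeholder, and only the one-third and 2 percent ratios come from the text:

```python
# Arbitrary units: only the ratios below come from the essay's figures.
downstream = 100.0            # combined IT, telecom, and consumer-electronics revenue
fab = 0.02 * downstream       # fabs: about 2 percent of downstream revenue
semiconductors = 3 * fab      # fabs are about one-third of semiconductor revenue
leverage = downstream / fab   # downstream revenue enabled per unit of fab revenue

print(f"semiconductor industry: {semiconductors:.1f}")
print(f"leverage: {leverage:.0f}x")   # roughly 50x
```

In other words, each unit of fab revenue underpins roughly fifty units of downstream revenue, which is the mismatch the essay argues public funding should address.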
For the industry to survive, the relatively meager resources of the fab industry must be devoted first to fab building and shareholder needs rather than to scientific exploration. While the fab industry is lengthening its research time horizon, it needs others to take on the burden too. Humanity’s long-term problem-solving abilities deserve targeted public support. The industry needs the help of very-long-term exploratory research, publicly funded, in a Bell Labs–like setting or by university researchers with career-long timelines and wider and deeper knowledge in physics, chemistry, biology, and algorithms than corporate research currently allows. This way, humanity will continue to find new transistors and gain the abilities it will need to face the challenges in the centuries ahead.
How to practice ‘gentle parenting’ — without losing discipline Tue, 29 Nov 2022 07:00:09 EST Gentle parenting is one of the latest parenting trends. Does this mean parents can't say no to their children? (source: www.washingtonpost.com)
All things considered, we humans are kind of big, which is very limiting to how we can comfortably interact with the world. The practical effect of this is that we tend to prioritize things that we can see and touch and otherwise directly experience, even if those things are only a small part of the world in which we live. A recent study conservatively estimates that there are 2.5 million ants for every one human on Earth. And that’s just ants. There are probably something like 7 million different species of terrestrial insects, and humans have only even noticed like 10 percent of them. The result of this disconnect is that when (for example) insect populations around the world start to crater, it takes us much longer to first notice, care, and act.
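That per-capita ratio implies a staggering absolute count. A one-line sanity check, assuming roughly 8 billion humans (my assumption, not a figure from the study):

```python
ants_per_human = 2.5e6            # ratio cited in the study
humans = 8e9                      # rough world population (assumption)
total_ants = ants_per_human * humans
print(f"{total_ants:.1e}")        # 2.0e+16, i.e. about 20 quadrillion ants
```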
To give the small scale the attention that it deserves, we need a way of interacting with it. In a paper recently published in Scientific Reports, roboticists from Ritsumeikan University in Japan demonstrate a haptic teleoperation system that connects a human hand on one end with microfingers on the other, letting the user feel what it’s like to give a pill bug a tummy rub.
At top, a microfinger showing the pneumatic balloon actuator (PBA) and liquid metal strain gauge. At bottom left, when the PBA is deflated, the microfinger is straight. At bottom right, inflating the PBA causes the finger to bend downwards.
These microfingers are just 12 millimeters long, 3 mm wide, and 490 microns (μm) thick. Inside of each microfinger is a pneumatic balloon actuator, which is just a hollow channel that can be pressurized with air. Since the channel is on the top of the microfinger, when the channel is inflated, it bulges upward, causing the microfinger to bend down. When pressure is reduced, the microfinger returns to its original position. Separate channels in the microfinger are filled with liquid metal, and as the microfinger bends, the channels elongate, thinning out the metal. By measuring the resistance of the metal, you can tell how much the finger is being bent. This combination of actuation and force sensing means that a human-size haptic system can be used as a force feedback interface: As you move your fingers, the microfingers will move, and forces can be transmitted back to you, allowing you to feel what the microfingers feel.
The microfingers (left) can be connected to a haptic feedback and control system for use by a human.
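The readout principle, resistance rising as the liquid-metal channel stretches, can be sketched as follows. Assuming an incompressible conductor of constant volume, resistance scales with the square of channel length; both the quadratic model and the numbers here are my own illustration, not values from the paper:

```python
# Sketch of converting a liquid-metal strain-gauge reading into strain.
# For a constant-volume conductor, R = rho * L / A with A = V / L,
# so R / R0 = (L / L0) ** 2. Values are hypothetical.

def strain_from_resistance(R, R0):
    """Return engineering strain (L - L0) / L0 from measured resistance."""
    return (R / R0) ** 0.5 - 1.0

R0 = 10.0   # unstrained channel resistance, ohms (hypothetical)
R = 10.4    # measured resistance while the finger bends, ohms
print(f"strain = {strain_from_resistance(R, R0):.4f}")
```

A calibration mapping strain to fingertip force (via the microfinger's stiffness) is what would then let the system report the micronewton-scale leg forces discussed below.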
The idea has deep roots in science fiction. As one early story imagined it: “The thought suddenly struck me: I can make micro hands for my little hands. I can make the same gloves for them as I did for my living hands, use the same system to connect them to the handles ten times smaller than my micro arms, and then ... I will have real micro arms, they will chop my movements two hundred times. With these hands I will burst into such a smallness of life that they have only seen, but where no one else has disposed of their own hands. And I got to work.”
With their very real and not science fiction system, the researchers were able to successfully determine that pill bugs can exert about 10 micro-Newtons of force through their legs, which is about the same as what has been estimated using other techniques. This is just a proof of concept study, but I’m excited about the potential here, because there is still so much of the world that humans haven’t yet been able to really touch. And besides just insect-scale tickling, there’s a broader practical context here around the development of insect-scale robots. Insects have had insect-scale sensing and mobility and whatnot pretty well figured out for a long time now, and if we’re going to make robots that can do insect-like things, we’re going to do it by learning as much as we can directly from insects themselves.
“With our strain-sensing microfinger, we were able to directly measure the pushing motion and force of the legs and torso of a pill bug—something that has been impossible to achieve previously. We anticipate that our results will lead to further technological development for microfinger-insect interactions, leading to human-environment interactions at much smaller scales.” —Satoshi Konishi, Ritsumeikan University
I should also be clear that despite the headline, I don’t know if it’s actually possible to tickle a bug. A Google search for “are insects ticklish” turns up one single result, from someone asking this question on the "StonerThoughts" subreddit. There is some suggestion that tickling, or more specifically the kind of tickling that is surprising and can lead to laughter called gargalesis, has evolved in social mammals to promote bonding. The other kind of tickling is called knismesis, which is more of an unpleasant sensation that causes irritation or distress. You know, like the feeling of a bug crawling on you. It seems plausible (to me, anyway) that bugs may experience some kind of knismesis—but I think that someone needs to get in there and do some science, especially now that we have the tools to make it happen.
The Spooky Quest to Build a Google Maps for Graveyards Fri, 25 Nov 2022 12:00:00 +0000 Atlantic Geomatics is creating a map of the UK’s cemeteries to help people track down their ancestors’ final resting place. (source: www.wired.com)
Unannounced change in rules was made last week as health experts stress importance of combating disinformation
Twitter will no longer enforce its policy against Covid-19 misinformation, raising concerns among public health experts that the change could have serious consequences if it discourages vaccination and other efforts to combat the still-spreading virus.
Eagle-eyed users spotted the change on Monday night, noting that a one-sentence update had been made to Twitter’s online rules: “Effective November 23, 2022, Twitter is no longer enforcing the COVID-19 misleading information policy.”
(source: www.theguardian.com)
Is There a Method to Musk’s Madness on Twitter? 2022-11-29 Elon Musk's brash management style has upended the social media platform, but was bold action necessary to address serious problems? Andy Wu discusses the tech entrepreneur's takeover of Twitter. (source: hbswk.hbs.edu)
Layoffs Have Gutted Twitter’s Child Safety Team Mon, 28 Nov 2022 19:33:07 +0000 Just one person remains to enforce the company’s ban on child sexual abuse across Japan and the Asia Pacific region. (source: www.wired.com)
Why I Quit Elon Musk’s Twitter Sun, 27 Nov 2022 11:00:00 +0000 A platform that once represented the new frontier of digital democracy is being used by the world’s richest man to troll us all. (source: www.newyorker.com)
He began his career in 1980 as a management trainee at the National Dairy Development Board, in Anand, India. A year later he joined Milma, a state government marketing cooperative for the dairy industry, in Thiruvananthapuram, as a manager of planning and systems. After 15 years with Milma, he joined IBM in Tokyo as a manager of technology services.
In 2000 he helped found InApp, a company in Palo Alto, Calif., that provides software development services. He served as its CEO and executive chairman until he died.
Raja was the 2011–2012 chair of the IEEE Humanitarian Activities Committee. He wanted to find a way to mobilize engineers to apply their expertise to develop sustainable solutions that help their local community. To achieve the goal, in 2011 he founded IEEE SIGHT. Today there are more than 150 SIGHT groups in 50 countries that are working on projects such as sustainable irrigation and photovoltaic systems.
Raja also served as one of the directors of the nongovernmental organization Bedroc.in, which was established to continue the disaster rehabilitation work started by him and his team after the 2004 Indian Ocean tsunami.
Terry was a computer engineer at Hewlett-Packard in Fort Collins, Colo., for 18 years.
He joined HP in 1978 as a software developer, and he chaired the Portable Operating System Interface (POSIX) working group. POSIX is a family of standards specified by the IEEE Computer Society for maintaining compatibility among operating systems. While there, he also developed software for the Motorola 68000 microprocessor.
Terry left HP in 1997 to join Softway Solutions, also in Fort Collins, where he developed tools for Interix, a Unix subsystem of the Windows NT operating system. After Microsoft acquired Softway in 1999, he stayed on as a senior software development engineer at its Seattle location. There he worked on static analysis, a method of computer-program debugging that is done by examining the code without executing the program. He also helped to create SAL, a Microsoft source-code annotation language, which was developed to make code design easier to understand and analyze.
Terry retired in 2014. He loved science fiction, boating, cooking, and spending time with his family, according to his daughter, Kristin.
He earned a bachelor’s degree in electrical engineering in 1970 and a Ph.D. in computer science in 1978, both from the University of Washington in Seattle.
Signal processing engineer
Life senior member, 70; died 25 August
Sandham applied his signal processing expertise to a wide variety of disciplines including medical imaging, biomedical data analysis, and geophysics.
He began his career in 1974 as a physicist at the University of Glasgow. While working there, he pursued a Ph.D. in geophysics. He earned his degree in 1981 at the University of Birmingham in England. He then joined the British National Oil Corp. (now Britoil) as a geophysicist.
In 1986 he left to join the University of Strathclyde, in Glasgow, as a lecturer in the signal processing department. During his time at the university, he published more than 200 journal papers and five books that addressed blood glucose measurement, electrocardiography data analysis and compression, medical ultrasound, MRI segmentation, prosthetic limb fitting, and sleep apnea detection.
Sandham left the university in 2003 and founded Scotsig, a signal processing consulting and research business, also in Glasgow.
Sandham earned his bachelor’s degree in electrical engineering in 1974 from the University of Glasgow.
Stephen M. Brustoski
Life member, 69; died 6 January
For 40 years, Brustoski worked as a loss-prevention engineer for insurance company FM Global. He retired from the company, which was headquartered in Johnston, R.I., in 2014.
He was an elder at his church, CrossPoint Alliance, in Akron, Ohio, where he oversaw administrative work and led Bible studies and prayer meetings. He was an assistant scoutmaster for 12 years, and he enjoyed hiking and traveling the world with his family, according to his wife, Sharon.
Brustoski earned a bachelor’s degree in electrical engineering in 1973 from the University of Akron.
President and CEO of Essex Corp.
Life senior member, 96; died 7 May 2020
As president and CEO of Essex Corp., in Columbia, Md., Letaw handled the development and commercialization of optoelectronic and signal processing solutions for defense, intelligence, and commercial customers. He retired in 1995.
He had served in World War II as an aviation engineer for the U.S. Army. After he was discharged, he earned a bachelor’s degree in chemistry, then a master’s degree and Ph.D., all from the University of Florida in Gainesville, in 1949, 1951, and 1952.
After he graduated, he became a postdoctoral assistant at the University of Illinois at Urbana-Champaign. He left to become a researcher at Raytheon Technologies, an aerospace and defense manufacturer, in Wayland, Mass.
It’s been a couple of years, but the IEEE Spectrum Robot Gift Guide is back for 2022! We’ve got all kinds of new robots, and right now is an excellent time to buy one (or a dozen), since many of them are on sale this week. We’ve tried to focus on consumer robots that are actually available (or that you can at least order), but depending on when you’re reading this guide, the prices we have here may not be up to date, and we’re not taking shipping into account.
And if these robots aren’t enough for you, many of our picks from years past are still available: check out our guides from 2019, 2018, 2017, 2016, 2015, 2014, 2013, and 2012. And as always, if you have suggestions that you’d like to share, post a comment to help the rest of us find the perfect robot gift.
Lego Robotics Kits
Lego has decided to discontinue its classic Mindstorms robotics kits, but they’ll be supported for another couple of years and this is your last chance to buy one. If you like Lego’s approach to robotics education but don’t want to invest in a system at the end of its life, Lego also makes an education kit called Spike that shares many of the hardware and software features for students in grades 6 to 8.
Indi is a clever educational robot designed to teach problem solving and screenless coding to kids as young as 4, using a small wheeled robot with a color sensor and a system of colored strips that command the robot to do different behaviors. There’s also an app to access more options, and Sphero has more robots to choose from once your kid is ready for something more.
Petoi’s quadrupedal robot kits are an adorable (and relatively affordable) way to get started with legged robotics. Whether you go with Nybble the cat or Bittle the dog, you get to do some easy hardware assembly and then leverage a bunch of friendly software tools to get your little legged friend walking around and doing tricks.
Root educational robots have a long and noble history, and iRobot has built on that to create an inexpensive platform to help kids learn to code starting as young as age 4. There are two different versions of Root; the more expensive one includes an RGB sensor, a programmable eraser, and the ability to stick to vertical whiteboards and move around on them.
The latest generation of TurtleBot from Clearpath, iRobot, and Open Robotics is a powerful and versatile ROS (Robot Operating System) platform for research and product development. For aspiring roboticists in undergrad and possibly high school, the TurtleBot 4 is just about as good as it gets unless you want to spend an order of magnitude more. And the fact that TurtleBots are used so extensively means that if you need some help, the ROS community will (hopefully) have your back.
Newly updated just last year, iRobot's Create 3 is the perfect platform for folks who want to build their own robot, but not all of their own robot. The rugged mobile base is essentially a Roomba without the cleaning parts, and it's easy to add your own hardware on top. It runs ROS 2, but you can get started with Python.
Mini Pupper is one of the cutest ways of getting started with ROS. This legged robot is open source, and runs ROS on a Raspberry Pi, which makes it extra affordable if you have your own board lying around. Even if you don’t, though, the Mini Pupper kit is super affordable for what you get, and is a fun hardware project if you decide to save a little extra cash by assembling it yourself.
I’m not sure whether the world is ready for ROS 2 yet, but you can get there with Rae, which combines a pocket-size mobile robot with a pair of depth cameras and onboard computer shockingly cheaply. App support means that Rae can do cool stuff out of the box, but it’s easy to get more in-depth with it too. Rae will get delivered early next year, but it’s cool enough that we think a Kickstarter IOU is a perfectly acceptable gift.
iRobot’s brand-new top-of-the-line Roomba Combo j7+ combines fully autonomous vacuuming and wet mopping to get your floors clean and shiny—except for carpet, which it’s smart enough not to try to shine, because it cleverly lifts the wet mop up out of the way. It’s also cloud connected and empties itself. You’ll have to put water in it if you want it to mop, but that’s way better than mopping yourself.
Neato’s robots might not be quite as pervasive as the Roomba, but they’re excellent vacuums, and they use a planar lidar system for obstacle avoidance and map making. The nice thing about lidar (besides the fact that it works in total darkness) is that Neato robots have no cameras at all and are physically incapable of collecting imagery of you or your home.
How often do you find an affordable, useful, reliable, durable, fully autonomous home robot? Not often! But Tertill is all of these things: powered entirely by the sun, it slowly prowls around your garden, whacking weeds as they sprout while avoiding your mature plants. All you have to do is make sure it can’t escape, then just let it loose and forget about it for months at a time.
If you like the idea of having a semi-autonomous mobile robot with a direct link to Amazon wandering around your house trying to be useful, then Amazon’s Astro might not sound like a terrible idea. You’ll have to apply for one, and it sounds like it’s more like a beta program, but could be fun, I guess?
The Skydio 2+ is an incremental (but significant) update to the Skydio 2 drone, with its magically cutting-edge obstacle avoidance and extremely impressive tracking skills. There are many drones out there that are cheaper and more portable, and if flying is your thing, get one of those. But if filming is your thing, the Skydio 2+ is the drone you want to fly.
We had a blast flying DJI’s FPV drone. The VR system is exhilarating and the drone is easy to fly even for FPV beginners, but it’s powerful enough to grow along with your piloting skills. Just don’t get cocky, or you’ll crash it. Don’t ask me how I know this.
ElliQ is an embodied voice assistant that is a lot more practical than a smart speaker. It's designed for older adults who may spend a lot of time alone at home, and can help with a bunch of things, including health and wellness tasks and communicating with friends and family. ElliQ costs $250 up front, plus a subscription of between $30 and $40 per month.
Not all robots for kids are designed to teach them to code: Moxie is meant to “support social-emotional development in kids through play.” The carefully designed and curated interaction between Moxie and children helps them communicate and build social skills in a friendly and engaging way. Note that Moxie also requires a subscription fee of $40 per month.
What is Qoobo? It is “a tailed cushion that heals your heart,” according to the folks that make it. According to us, it’s a furry round pillow that responds to your touch by moving its tail, sort of like a single-purpose cat. It’s fuzzy tail therapy!
Before you decide on a real dog, consider the Unitree Go1 instead. Sure it’s expensive, but you know what? So are real dogs. And unlike with a real dog, you only have to walk the Go1 when you feel like it, and you can turn it off and stash it in a closet or under a bed whenever you like. For a fully featured dynamic legged robot, it’s staggeringly cheap, just keep in mind that shipping is $1,000.
Mice, windows, icons, and menus: these are the ingredients of computer interfaces designed to be easy to grasp, simplicity itself to use, and straightforward to describe. The mouse is a pointer. Windows divide up the screen. Icons symbolize application programs and data. Menus list choices of action.
But the development of today’s graphical user interface was anything but simple. It took some 30 years of effort by engineers and computer scientists in universities, government laboratories, and corporate research groups, piggybacking on each other’s work, trying new ideas, repeating each other’s mistakes.
This article was first published as “Of Mice and menus: designing the user-friendly interface.” It appeared in the September 1989 issue of IEEE Spectrum. A PDF version is available on IEEE Xplore. The photographs and diagrams appeared in the original print version.
Throughout the 1970s and early 1980s, many of the early concepts for windows, menus, icons, and mice were arduously researched at Xerox Corp.’s Palo Alto Research Center (PARC), Palo Alto, Calif. In 1973, PARC developed the prototype Alto, the first of two computers that would prove seminal in this area. More than 1200 Altos were built and tested. From the Alto’s concepts, starting in 1975, Xerox’s System Development Department then developed the Star and introduced it in 1981—the first such user-friendly machine sold to the public.
In 1984, the low-cost Macintosh from Apple Computer Inc., Cupertino, Calif., brought the friendly interface to thousands of personal computer users. During the next five years, the price of RAM chips fell enough to accommodate the huge memory demands of bit-mapped graphics, and the Mac was followed by dozens of similar interfaces for PCs and workstations of all kinds. By now, application programmers are becoming familiar with the idea of manipulating graphic objects.
The Mac’s success during the 1980s spurred Apple Computer to pursue legal action over ownership of many features of the graphical user interface. Suits now being litigated could assign those innovations not to the designers and their companies, but to those who first filed for legal protection on them.
The GUI started with Sketchpad
The grandfather of the graphical user interface was Sketchpad [see photograph]. Massachusetts Institute of Technology student Ivan E. Sutherland built it in 1962 as a Ph.D. thesis at MIT’s Lincoln Laboratory in Lexington, Mass. Sketchpad users could not only draw points, line segments, and circular arcs on a cathode ray tube (CRT) with a light pen—they could also assign constraints to, and relationships among, whatever they drew.
Arcs could have a specified diameter, lines could be horizontal or vertical, and figures could be built up from combinations of elements and shapes. Figures could be moved, copied, shrunk, expanded, and rotated, with their constraints (shown as onscreen icons) dynamically preserved. At a time when a CRT monitor was a novelty in itself, the idea that users could interactively create objects by drawing on a computer was revolutionary.
Moreover, to zoom in on objects, Sutherland wrote the first window-drawing program, which required him to come up with the first clipping algorithm. Clipping is a software routine that calculates which part of a graphic object is to be displayed and displays only that part on the screen. The program must calculate where a line is to be drawn, compare that position to the coordinates of the window in use, and prevent the display of any line segment whose coordinates fall outside the window.
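The test the clipping routine performs can be sketched with the now-classic Cohen–Sutherland outcode method—a minimal illustration, not Sketchpad’s actual code. Each endpoint gets a bitmask recording which sides of the window it falls outside; segments are then trivially accepted, trivially rejected, or cut back to a window edge.

```python
# Cohen–Sutherland-style line clipping against a rectangular window.
# All names and the coordinate conventions are illustrative.
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Bitmask of window edges the point (x, y) lies beyond."""
    code = INSIDE
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def clip_segment(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Return the visible portion of a segment, or None if fully outside."""
    c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    while True:
        if not (c0 | c1):        # both endpoints inside: accept as-is
            return (x0, y0, x1, y1)
        if c0 & c1:              # both beyond the same edge: reject
            return None
        c = c0 or c1             # pick an endpoint that is outside
        if c & TOP:
            x = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0); y = ymax
        elif c & BOTTOM:
            x = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0); y = ymin
        elif c & RIGHT:
            y = y0 + (y1 - y0) * (xmax - x0) / (x1 - x0); x = xmax
        else:                    # LEFT
            y = y0 + (y1 - y0) * (xmin - x0) / (x1 - x0); x = xmin
        if c == c0:              # move the outside endpoint onto the edge
            x0, y0 = x, y
            c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
        else:
            x1, y1 = x, y
            c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
```

A segment crossing the left edge of a (0, 0)–(10, 10) window, for instance, is cut back so only the in-window part would be drawn.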
Though films of Sketchpad in operation were widely shown in the computer research community, Sutherland says today that there was little immediate fallout from the project. Running on MIT’s TX-2 mainframe, it demanded too much computing power to be practical for individual use. Many other engineers, however, see Sketchpad’s design and algorithms as a primary influence on an entire generation of research into user interfaces.
The origin of the computer mouse
The light pens used to select areas of the screen by interactive computer systems of the 1950s and 1960s—including Sketchpad—had drawbacks. To do the pointing, the user’s arm had to be lifted up from the table, and after a while that got tiring. Picking up the pen required fumbling around on the table or, if it had a holder, taking the time after making a selection to put it back.
Sensing an object with a light pen was straightforward: the computer displayed spots of light on the screen and interrogated the pen as to whether it sensed a spot, so the program always knew just what was being displayed. Locating the position of the pen on the screen required more sophisticated techniques—like displaying a cross pattern of nine points on the screen, then moving the cross until it centered on the light pen.
In 1964, Douglas Engelbart, a research project leader at SRI International in Menlo Park, Calif., tested all the commercially available pointing devices, from the still-popular light pen to a joystick and a Graphicon (a curve-tracing device that used a pen mounted on the arm of a potentiometer). But he felt the selection failed to cover the full spectrum of possible pointing devices, and somehow he should fill in the blanks.
Then he remembered a 1940s college class he had taken that covered the use of a planimeter to calculate area. (A planimeter has two arms, with a wheel on each. The wheels can roll only along their axes; when one of them rolls, the other must slide.)
If a potentiometer were attached to each wheel to monitor its rotation, he thought, a planimeter could be used as a pointing device. Engelbart explained his roughly sketched idea to engineer William English, who with the help of the SRI machine shop built what they quickly dubbed “the mouse.”
This first mouse was big because it used single-turn potentiometers: one rotation of the wheels had to be scaled to move a cursor from one side of the screen to the other. But it was simple to interface with the computer: the processor just read frequent samples of the potentiometer positioning signals through analog-to-digital converters.
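That absolute scaling can be illustrated with a one-line mapping; the 10-bit converter range and 1024-pixel screen width here are assumptions for the sketch, not SRI’s actual figures.

```python
# Map a sampled single-turn potentiometer reading directly to a cursor
# coordinate: one full rotation spans the whole screen. ADC range and
# screen size are illustrative assumptions.
def pot_to_cursor(adc_value, adc_max=1023, screen_extent=1024):
    return adc_value * (screen_extent - 1) // adc_max
```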
The cursor moved by the mouse was easy to locate, since readings from the potentiometers determined its position on the screen—something the light pen could not provide directly. But programmers for later windowing systems found that the software needed to determine which object the mouse had selected was more complex than that for the light pen: they had to compare the mouse’s position with those of all the objects displayed onscreen.
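That search is the hit-testing step every windowing system still performs. A toy sketch, with illustrative names, searching front-to-back so the topmost (last-drawn) object wins:

```python
# Hit-testing: given only an (x, y) cursor position, find which displayed
# object the user selected. Rectangles stand in for arbitrary objects.
from dataclasses import dataclass

@dataclass
class Rect:
    name: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, px, py):
        return (self.x <= px < self.x + self.w and
                self.y <= py < self.y + self.h)

def pick(objects, px, py):
    """Return the topmost object under the cursor (last drawn wins)."""
    for obj in reversed(objects):
        if obj.contains(px, py):
            return obj
    return None
```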
The computer mouse gets redesigned—and redesigned again
Engelbart’s group at SRI ran controlled experiments with mice and other pointing devices, and the mouse won hands down. People adapted to it quickly, it was easy to grab, and it stayed where they put it. Still, Engelbart wanted to tinker with it. After experimenting, his group had concluded that the proper ratio of cursor movement to mouse movement was about 2:1, but he wanted to try varying that ratio—decreasing it at slow speeds and raising it at fast speeds—to improve user control of fine movements and speed up larger movements. Some modern mouse-control software incorporates this idea, including that of the Macintosh.
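Engelbart’s speed-dependent ratio survives today as pointer acceleration. A minimal sketch: the 2:1 base ratio comes from the text, but the fast-motion gain and speed threshold are illustrative assumptions, and real drivers use smoother curves.

```python
# Pointer acceleration: scale mouse motion by a gain that depends on how
# fast the mouse is moving. Threshold and fast gain are assumptions.
def cursor_delta(dx, dy, base_gain=2.0, fast_gain=4.0, threshold=10.0):
    speed = (dx * dx + dy * dy) ** 0.5   # mouse counts in this sample
    gain = base_gain if speed < threshold else fast_gain
    return dx * gain, dy * gain
```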
The mouse, still experimental at this stage, did not change until 1971. Several members of Engelbart’s group had moved to the newly established PARC, where many other researchers had seen the SRI mouse and the test report. They decided there was no need to repeat the tests; any experimental systems they designed would use mice.
Said English, “This was my second chance to build a mouse; it was obvious that it should be a lot smaller, and that it should be digital.” Chuck Thacker, then a member of the research staff, advised PARC to hire inventor Jack Hawley to build it.
Hawley decided the mouse should use shaft encoders, which measure position by a series of pulses, instead of potentiometers (both were covered in Engelbart’s 1970 patent), to eliminate the expensive analog-to-digital converters. The basic principle, of one wheel rolling while the other slid, was licensed from SRI.
The ball mouse was the “easiest patent I ever got. It took me five minutes to think of, half an hour to describe to the attorney, and I was done.” —Ron Rider
In 1972, the mouse changed again. Ron Rider, now vice president of systems architecture at PARC but then a new arrival, said he was using the wheel mouse while an engineer made excuses for its asymmetric operation (one wheel dragging while one turned). “I suggested that they turn a trackball upside down, make it small, and use it as a mouse instead,” Rider told IEEE Spectrum. This device came to be known as the ball mouse. “Easiest patent I ever got,” Rider said. “It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”
Defining terms
Bit map
The pixel pattern that makes up the graphic display on a computer screen.
Click
The motion of pressing a mouse button to initiate an action by software; some actions require double-clicking.
Graphical user interface (GUI)
The combination of windowing displays, menus, icons, and a mouse that is increasingly used on personal computers and workstations.
Icon
An onscreen drawing that represents programs or data.
Menu
A list of command options currently available to the computer user; some stay onscreen, while pop-up or pull-down menus are requested by the user.
Mouse
A device whose motion across a desktop or other surface causes an on-screen cursor to move commensurately; today’s mice move on a ball and have one, two, or three buttons.
Raster display
A cathode ray tube on which images are displayed as patterns of dots, scanned onto the screen sequentially in a predetermined pattern of lines.
Vector display
A cathode ray tube whose gun scans lines, or vectors, onto the screen phosphor.
Window
An area of a computer display, usually one of several, in which a particular program is executing.
In the PARC ball mouse design, the weight of the mouse is transferred to the ball by a swivel device and to one or two casters at the end of the mouse farthest from the wire “tail.” A prototype was built by Xerox’s Electronics Division in El Segundo, Calif., then redesigned by Hawley. The rolling ball turned two perpendicular shafts, with a drum on the end of each that was coated with alternating stripes of conductive and nonconductive material. As the drum turned, the stripes transmitted electrical impulses through metal wipers.
When Apple Computer decided in 1979 to design a mouse for its Lisa computer, the design mutated yet again. Instead of a metal ball held against the substrate by a swivel, Apple used a rubber ball whose traction depended on the friction of the rubber and the weight of the ball itself. Simple pads on the bottom of the case carried the weight, and optical scanners detected the motion of the internal wheels. The device had loose tolerances and few moving parts, so that it cost perhaps a quarter as much to build as previous ball mice.
How the computer mouse gained and lost buttons
The first, wooden, SRI mouse had only one button, to test the concept. The plastic batch of SRI mice had three side-by-side buttons—all there was room for, Engelbart said. The first PARC mouse had a column of three buttons—again, because that best fit the mechanical design. Today, the Apple mouse has one button, while the rest have two or three. The issue is no longer space—a standard 6-by-10-cm mouse could now have dozens of buttons—but human factors, and the experts have strong opinions.
Said English, now director of internationalization at Sun Microsystems Inc., Mountain View, Calif.: “Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.” He sees two buttons as the minimum because two functions are basic to selecting an object: pointing to its start, then extending the motion to the end of the object.
William Verplank, a human factors specialist in the group that tested the graphical interface at Xerox from 1978 into the early 1980s, concurred. He told Spectrum that with three buttons, Alto users forgot which button did what. The group’s tests showed that one button was also confusing, because it required actions such as double-clicking to select and then open a file.
“We have agonizing videos of naive users struggling” with these problems, Verplank said. They concluded that for most users, two buttons (as used on the Star) are optimal, if a button means the same thing in every application. English experimented with one-button mice at PARC before concluding they were a bad idea.
“Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.” —William English
But many interface designers dislike multiple buttons, saying that double-clicking a single button to select an item is easier than remembering which button points and which extends. Larry Tesler, formerly a computer scientist at PARC, brought the one-button mouse to Apple, where he is now vice president of advanced technology. The company’s rationale is that to attract novices to its computers one button was as simple as it could get.
More than two million one-button Apple mice are now in use. The Xerox and Microsoft two-button mice are less common than either Apple’s ubiquitous one-button model or the three-button mice found on technical workstations. Dozens of companies manufacture mice today; most are slightly smaller than a pack of cigarettes, with minor variations in shape.
How windows first came to the computer screen
In 1962, Sketchpad could split its screen horizontally into two independent sections. One section could, for example, give a close-up view of the object in the other section. Researchers call Sketchpad the first example of tiled windows, which are laid out side by side. They differ from overlapping windows, which can be stacked on top of each other, or overlaid, obscuring all or part of the lower layers.
Windows were an obvious means of adding functionality to a small screen. In 1969, Engelbart equipped NLS (as the On-Line System he invented at SRI during the 1960s was known, to distinguish it from the Off-Line System known as FLS) with windows. They split the screen into multiple parts horizontally or vertically, and introduced cross-window editing with a mouse.
By 1972, led by researcher Alan Kay, the Smalltalk programming language group at Xerox PARC had implemented their version of windows. They were working with far different technology from Sutherland or Engelbart: by deciding that their images had to be displayed as dots on the screen, they led a move from vector to raster displays, to make it simple to map the assigned memory location of each of those spots. This was the bit map invented at PARC, and made viable during the 1980s by continual performance improvements in processor logic and memory speed.
Experimenting with bit-map manipulation, Smalltalk researcher Dan Ingalls developed the bit-block transfer procedure, known as BitBlt. The BitBlt software enabled application programs to mix and manipulate rectangular arrays of pixel values in on-screen or off-screen memory, or between the two, combining the pixel values and storing the result in the appropriate bit-map location.
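The operation Ingalls described can be sketched in a few lines. This toy version works on lists of lists rather than packed bit maps, and the names and operations are illustrative, not PARC’s actual API:

```python
# A toy BitBlt: copy a w-by-h block of pixels from a source bitmap into a
# destination bitmap, combining values with an operation (copy or XOR).
def bitblt(src, sx, sy, dst, dx, dy, w, h, op="copy"):
    for row in range(h):
        for col in range(w):
            s = src[sy + row][sx + col]
            d = dst[dy + row][dx + col]
            dst[dy + row][dx + col] = s if op == "copy" else s ^ d
    return dst
```

Real implementations move whole machine words at a time and support more combining rules, but the rectangular source-to-destination transfer is the same idea.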
BitBlt made it much easier to write programs to scroll a window (move an image through it), resize (enlarge or contract) it, and drag windows (move them from one location to another on screen). It led Kay to create overlapping windows. They were soon implemented by the Smalltalk group, but made clipping harder.
Some researchers question whether overlapping windows offer more benefits than tiled on the grounds that screens with overlapping windows become so messy the user gets lost.
In a tiling system, explained researcher Peter Deutsch, who worked with the Smalltalk group, the clipping borders are simply horizontal or vertical lines from one screen border to another, and software just tracks the location of those lines. But overlapping windows may appear anywhere on the screen, randomly obscuring bits and pieces of other windows, so that quite irregular regions must be clipped. Thus application software must constantly track which portions of their windows remain visible.
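The bookkeeping Deutsch describes can be illustrated by brute force: a window’s visible region is whatever is not covered by the windows stacked above it. Real systems keep lists of rectangles rather than testing every pixel; this per-cell version, with assumed names, just shows the idea.

```python
# Visible-region computation for overlapping windows, per cell.
# Each window is an (x, y, w, h) rectangle.
def visible_cells(window, windows_above):
    """Yield (x, y) cells of `window` not obscured by any window above it."""
    wx, wy, ww, wh = window
    for y in range(wy, wy + wh):
        for x in range(wx, wx + ww):
            if not any(ax <= x < ax + aw and ay <= y < ay + ah
                       for ax, ay, aw, ah in windows_above):
                yield (x, y)
```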
Some researchers still question whether overlapping windows offer more benefits than tiled, at least above a certain screen size, on the grounds that screens with overlapping windows become so messy the user gets lost. Others argue that overlapping windows more closely match users’ work patterns, since no one arranges the papers on their physical desktop in neat horizontal and vertical rows. Among software engineers, however, overlapping windows seem to have won for the user interface world.
So has the cut-and-paste editing model that Larry Tesler developed, first for the Gypsy text editor he wrote at PARC and later for Apple. Charles Irby—who worked on Xerox’s windows and is now vice president of development at Metaphor Computer Systems Inc., Mountain View, Calif.—noted, however, that cut-and-paste worked better for pure text-editing than for moving graphic objects from one application to another.
The origin of the computer menu bar
Menus—functions continuously listed onscreen that could be called into action with key combinations—were commonly used in defense computing by the 1960s. But it was only with the advent of BitBlt and windows that menus could be made to appear as needed and to disappear after use. Combined with a pointing device to indicate a user’s selection, they are now an integral part of the user-friendly interface: users no longer need to refer to manuals or memorize available options.
Instead, the choices can be called up at a moment’s notice whenever needed. And menu design has evolved. Some new systems use nested hierarchies of menus; others offer different menu versions—one with the most commonly used commands for novices, another with all available commands for the experienced user.
Among the first to test menus on demand was PARC researcher William Newman, in a program called Markup. Hard on his heels, the Smalltalk group built in pop-up menus that appeared on screen at the cursor site when the user pressed one of the mouse buttons.
Implementation was on the whole straightforward, recalled Deutsch. The one exception was determining whether the menu or the application should keep track of the information temporarily obscured by the menu. In the Smalltalk 76 version, the popup menu saved and restored the screen bits it overwrote. But in today’s multitasking systems, that would not work, because an application may change those bits without the menu’s knowledge. Such systems add another layer to the operating system: a display manager that tracks what is written where.
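The Smalltalk-76 strategy—save the bits you are about to overwrite, restore them when the menu goes away—can be sketched as follows. The framebuffer is a dictionary of pixels here, and all names are illustrative:

```python
# Pop-up menu that saves and restores the screen bits it overwrites.
def show_menu(framebuffer, menu_rect, menu_pixel=9):
    x, y, w, h = menu_rect
    saved = {(i, j): framebuffer.get((i, j), 0)
             for j in range(y, y + h) for i in range(x, x + w)}
    for key in saved:                 # draw the menu over the saved area
        framebuffer[key] = menu_pixel
    return saved

def hide_menu(framebuffer, saved):
    framebuffer.update(saved)         # put the original bits back
```

As the text notes, this only works if nothing else redraws the obscured area while the menu is up, which is exactly why multitasking systems moved the bookkeeping into a display manager.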
The production Xerox Star, in 1981, featured a further advance: a menu bar, essentially a row of words indicating available menus that could be popped up for each window. Human factors engineer Verplank recalled that the bar was at first located at the bottom of its window. But the Star team found users were more likely to associate a bar with the window below it, so it was moved to the top of its window.
Apple simplified things in its Lisa and Macintosh with a single bar placed at the top of the screen. This menu bar relates only to the window in use: the menus could be “pulled down” from the bar, to appear below it. Designer William D. Atkinson received a patent (assigned to Apple Computer) in August 1984 for this innovation.
One new addition that most user interface pioneers consider an advantage is the tear-off menu, which the user can move to a convenient spot on the screen and “pin” there, always visible for ready access.
Many windowing interfaces now offer command-key or keyboard alternatives for many commands as well. This return to the earliest of user interfaces—key combinations—neatly supplements menus, providing both ease of use for novices and for the less experienced, and speed for those who can type faster than they can point to a menu and click on a selection.
How the computer “icon” got its name
Sketchpad had on-screen graphic objects that represented constraints (for example, a rule that lines be the same length), and the Flex machine built in 1967 at the University of Utah by students Alan Kay and Ed Cheadle had squares that represented programs and data (like today’s computer “folders”). Early work on icons was also done by Bell Northern Research, Ottawa, Canada, stemming from efforts to replace the recently legislated bilingual signs with graphic symbols.
But the concept of the computer “icon” was not formalized until 1975. David Canfield Smith, a computer science graduate student at Stanford University in California, began work on his Ph.D. thesis in 1973. His advisor was PARC’s Kay, who suggested that he look at using the graphics power of the experimental Alto not just to display text, but rather to help people program.
David Canfield Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents.
Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents: a Russian icon of a saint is holy and is to be venerated. Smith’s computer icons contained all the properties of the programs and data represented, and therefore could be linked or acted on as if they were the real thing.
After receiving his Ph.D. in 1975, Smith joined Xerox in 1976 to work on Star development. The first thing he did, he said, was to recast his concept of icons in office terms. “I looked around my office and saw papers, folders, file cabinets, a telephone, and bookshelves, and it was an easy translation to icons,” he said.
Xerox researchers developed, tested, and revised icons for the Star interface for three years before the first version was complete. At first they attempted to make the icons look like a detailed photographic rendering of the object, recalled Irby, who worked on testing and refining the Xerox windows. Trading off label space, legibility, and the number of icons that fit on the screen, they decided to constrain icons to a 1-inch (2.5-centimeter) square of 64 by 64 pixels, or 512 eight-bit bytes.
Then, Verplank recalls, they discovered that because of a background pattern based on two-pixel dots, the right-hand side of the icons appeared jagged. So they increased the width of the icons to 65 pixels, despite an outcry from programmers who liked the neat 16-bit breakdown. But the increase stuck, Verplank said, because they had already decided to store 72 bits per side to allow for white space around each icon.
After settling on a size for the icons, the Star developers tested four sets developed by two graphic designers and two software engineers. They discovered that, for example, resizing may cause problems. They shrank the icon for a person—a head and shoulders—in order to use several of them to represent a group, only to hear one test subject say the screen resolution made the reduced icon look like a cross above a tombstone. Computer graphics artist Norm Cox, now of Cox & Hall, Dallas, Texas, was finally hired to redesign the icons.
Icon designers today still wrestle with the need to make icons adaptable to the many different system configurations offered by computer makers. Artist Karen Elliott, who has designed icons for Microsoft, Apple, Hewlett-Packard Co., and others, noted that on different systems an icon may be displayed in different colors, several resolutions, and a variety of gray shades, and it may also be inverted (light and dark areas reversed).
In the past few years, another concern has been added to icon designers’ tasks: internationalization. Icons designed in the United States often lack space for translations into languages other than English. Elliott therefore tries to leave space for both the longer words and the vertical orientation of some languages.
The main rule is to make icons simple, clean, and easily recognizable. Discarded objects are placed in a trash can on the Macintosh. On the NeXT Computer System, from NeXT Inc., Palo Alto, Calif.—the company formed by Apple cofounder Steven Jobs after he left Apple—they are dumped into a Black Hole. Elliott sees NeXT’s black hole as one of the best icons ever designed: “It is distinct; its roundness stands out from the other, square icons, and this is important on a crowded display. It fits my image of information being sucked away, and it makes it clear that dumping something is serious.”
English disagrees vehemently. The black hole “is fundamentally wrong,” he said. “You can dig paper out of a wastebasket, but you can’t dig it out of a black hole.” Another critic called the black hole familiar only to “computer nerds who read mostly science fiction and comics,” not to general users.
With the introduction of the Xerox Star in June 1981, the graphical user interface, as it is known today, arrived on the market. Though not a commercial triumph, the Star generated great interest among computer users, as the Alto before it had within the universe of computer designers.
Even before the Star was introduced, Jobs, then still at Apple, had visited Xerox PARC in November 1979 and asked the Smalltalk researchers dozens of questions about the Alto’s internal design. He later recruited Larry Tesler from Xerox to design the user interface of the Apple Lisa.
With the Lisa and then the Macintosh, introduced in January 1983 and January 1984 respectively, the graphical user interface reached the low-cost, high-volume computer market.
At almost $10,000, the Lisa was deemed too expensive for the office market. But aided by prizewinning advertising and its lower price, the Macintosh took the world by storm. Early Macs had only 128K bytes of RAM, which made them slow to respond because it was too little memory for heavy graphic manipulation. Also, the time needed for programmers to learn its Toolbox of graphics routines delayed application packages until well into 1985. But the Mac’s ease of use was indisputable, and it generated interest that spilled over into the MS-DOS world of IBM PCs and clones, as well as Unix-based workstations.
Who owns the graphical user interface?
The widespread acceptance of such interfaces, however, has led to bitter lawsuits to establish exactly who owns what. So far, none of several litigious companies has definitively established that it owns the software that implements windows, icons, or early versions of menus. But the suits continue.
Virtually all the companies that make and sell either wheel or ball mice paid license fees to SRI or to Xerox for their patents. Engelbart recalled that SRI patent attorneys inspected all the early work on the interface, but understood only hardware. After looking at developments like the implementation of windows, they told him that none of it was patentable.
At Xerox, the Star development team proposed 12 patents having to do with the user interface. The company’s patent committee rejected all but two on hardware—one on BitBlt, the other on the Star architecture. At the time, Charles Irby said, it was a good decision. Patenting required full disclosure, and no precedents then existed for winning software patent suits.
The most recent and most publicized suit was filed in March 1988, by Apple, against both Microsoft and Hewlett-Packard Co., Palo Alto, Calif. Apple alleges that HP’s New Wave interface, requiring version 2.03 of Microsoft’s Windows program, embodies the copyrighted “audio visual computer display” of the Macintosh without permission; that the displays of Windows 2.03 are illegal copies of the Mac’s audiovisual works; and that Windows 2.03 also exceeds the rights granted in a November 1985 agreement in which Microsoft acknowledged that the displays in Windows 1.0 were derivatives of those in Apple’s Lisa and Mac.
In March 1989, U.S. District Judge William W. Schwarzer ruled Microsoft had exceeded the bounds of its license in creating Windows 2.03. Then in July 1989 Schwarzer ruled that all but 11 of the 260 items that Apple cited in its suit were, in fact, acceptable under the 1985 agreement. The larger issue—whether Apple’s copyrights are valid, and whether Microsoft and HP infringed on them—will not now be examined until 1990.
Among those 11 are overlapping windows and movable icons. According to Pamela Samuelson, a noted software intellectual property expert and visiting professor at Emory University Law School, Atlanta, Ga., many experts would regard both as functional features of an interface that cannot be copyrighted, rather than “expressions” of an idea protectable by copyright.
But lawyers for Apple—and for other companies that have filed lawsuits to protect the “look and feel” of their screen displays—maintain that if such protection is not granted, companies will lose the economic incentive to market technological innovations. How is Apple to protect its investment in developing the Lisa and Macintosh, they argue, if it cannot license its innovations to companies that want to take advantage of them?
If the Apple-Microsoft case does go to trial on the copyright issues, Samuelson said, the court may have to consider whether Apple can assert copyright protection for overlapping windows, an interface feature on which patents have also been granted. In April 1989, for example, Quarterdeck Office Systems Inc., Santa Monica, Calif., received a patent for a multiple windowing system in its Desq system software, introduced in 1984.
Adding fuel to the legal fire, Xerox said in May 1989 it would ask for license fees from companies that use the graphical user interface. But it is unclear whether Xerox has an adequate claim to either copyright or patent protection for the early graphical interface work done at PARC. Xerox did obtain design patents on later icons, noted human factors engineer Verplank. Meanwhile, both Metaphor and Sun Microsystems have negotiated licenses with Xerox for their own interfaces.
To Probe Further
The September 1989 IEEE Computer contains an article, “The Xerox ‘Star’: A Retrospective,” by Jeff Johnson et al., covering development of the Star. “Designing the Star User Interface,” by David C. Smith et al., appeared in the April 1982 issue of Byte.
The Sept. 12, 1989, PC Magazine contains six articles on graphical user interfaces for personal computers and workstations. The July 1989 Byte includes “A Guide to [Graphical User Interfaces],” by Frank Hayes and Nick Baran, which describes 12 current interfaces for workstations and personal computers. “The Interface of Tomorrow, Today,” by Howard Rheingold, in the July 10, 1989, InfoWorld does the same. “The interface that launched a thousand imitations,” by Richard Rawles, in the March 21, 1989, MacWeek covers the Macintosh interface.
The human factors of user interface design are discussed in The Psychology of Everyday Things, by Donald A. Norman (Basic Books Inc., New York, 1988). The January 1989 IEEE Software contains several articles on methods, techniques, and tools for designing and implementing graphical interfaces. The Way Things Work, by David Macaulay (Houghton Mifflin Co., Boston, 1988), contains a detailed drawing of a ball mouse.
William Atkinson received patent no. 4,464,652 for the pulldown menu system on Aug. 8, 1984, and assigned it to Apple. Gary Pope received patent no. 4,823,108, for an improved system for displaying images in “windows” on a computer screen, on April 18, 1989, and assigned it to Quarterdeck Office Systems.
The wheel mouse patent, no. 3,541,541, “X-Y position indicator for a display system,” was issued to Douglas Engelbart on Nov. 17, 1970, and assigned to SRI International. The ball mouse patent, no. 3,835,464, was issued to Ronald Rider on Sept. 10, 1974, and assigned to Xerox.
The vacuum-tube triode wasn’t quite 20 years old when physicists began trying to create its successor, and the stakes were huge. Not only had the triode made long-distance telephony and movie sound possible, it was driving the entire enterprise of commercial radio, an industry worth more than a billion dollars in 1929. But vacuum tubes were power-hungry and fragile. If a more rugged, reliable, and efficient alternative to the triode could be found, the rewards would be immense.
The goal was a three-terminal device made out of semiconductors that would accept a low-current signal into an input terminal and use it to control the flow of a larger current flowing between two other terminals, thereby amplifying the original signal. The underlying principle of such a device would be something called the field effect—the ability of electric fields to modulate the electrical conductivity of semiconductor materials. The field effect was already well known in those days, thanks to diodes and related research on semiconductors.
In the cutaway photo of a point-contact transistor, two thin conductors are visible; these connect to the points that make contact with a tiny slab of germanium. One of these points is the emitter and the other is the collector. A third contact, the base, is attached to the reverse side of the germanium. [Photo: AT&T Archives and History Center]
But building such a device had proved an insurmountable challenge to some of the world’s top physicists for more than two decades. Patents for transistor-like devices had been filed starting in 1925, but the first recorded instance of a working transistor was the legendary point-contact device built at AT&T Bell Telephone Laboratories in the fall of 1947.
Though the point-contact transistor was the most important invention of the 20th century, there exists, surprisingly, no clear, complete, and authoritative account of how the thing actually worked. Modern, more robust junction and planar transistors rely on the physics in the bulk of a semiconductor, rather than the surface effects exploited in the first transistor. And relatively little attention has been paid to this gap in scholarship.
It was an ungainly looking assemblage of germanium, plastic, and gold foil, all topped by a squiggly spring. Its inventors were a soft-spoken Midwestern theoretician, John Bardeen, and a voluble and “somewhat volatile” experimentalist, Walter Brattain. Both were working under William Shockley, a relationship that would later prove contentious. In November 1947, Bardeen and Brattain were stymied by a simple problem. In the germanium semiconductor they were using, a surface layer of electrons seemed to be blocking an applied electric field, preventing it from penetrating the semiconductor and modulating the flow of current. No modulation, no signal amplification.
Sometime late in 1947 they hit on a solution. It featured two pieces of barely separated gold foil gently pushed by that squiggly spring into the surface of a small slab of germanium.
Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate. Indeed, the current edition of that bible of undergraduate EEs, The Art of Electronics by Horowitz and Hill, makes no mention of the point-contact transistor at all, glossing over its existence by erroneously stating that the junction transistor was a “Nobel Prize-winning invention in 1947.” But the transistor that was invented in 1947 was the point-contact; the junction transistor was invented by Shockley in 1948.
So it seems appropriate somehow that the most comprehensive explanation of the point-contact transistor is contained within John Bardeen’s lecture for that Nobel Prize, in 1956. Even so, reading it gives you the sense that a few fine details probably eluded even the inventors themselves. “A lot of people were confused by the point-contact transistor,” says Thomas Misa, former director of the Charles Babbage Institute for the History of Science and Technology, at the University of Minnesota.
A year after Bardeen’s lecture, R. D. Middlebrook, a professor of electrical engineering at Caltech who would go on to do pioneering work in power electronics, wrote: “Because of the three-dimensional nature of the device, theoretical analysis is difficult and the internal operation is, in fact, not yet completely understood.”
Nevertheless, and with the benefit of 75 years of semiconductor theory, here we go. The point-contact transistor was built around a thumb-size slab of n-type germanium, which has an excess of negatively charged electrons. This slab was treated to produce a very thin surface layer that was p-type, meaning it had an excess of positive charges. These positive charges are known as holes. They are actually localized deficiencies of electrons that move among the atoms of the semiconductor very much as a real particle would. An electrically grounded electrode was attached to the bottom of this slab, creating the base of the transistor. The two strips of gold foil touching the surface formed two more electrodes, known as the emitter and the collector.
That’s the setup. In operation, a small positive voltage—just a fraction of a volt—is applied to the emitter, while a much larger negative voltage—4 to 40 volts—is applied to the collector, all with reference to the grounded base. The interface between the p-type layer and the n-type slab creates a junction just like the one found in a diode: Essentially, the junction is a barrier that allows current to flow easily in only one direction, toward lower voltage. So current can flow from the positive emitter across the barrier, while no current can flow across that barrier into the collector.
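The one-way behavior of such a junction can be sketched with the standard ideal-diode (Shockley) equation. This is a generic textbook relation, not a model of the 1947 device; the saturation current and thermal voltage used here are representative assumed values.

```python
import math

def diode_current(v, i_s=1e-12, v_t=0.02585):
    """Current through an ideal p-n junction at applied voltage v (volts).

    i_s is an assumed saturation current; v_t is the thermal voltage
    at room temperature. Both are illustrative, not measured values.
    """
    return i_s * (math.exp(v / v_t) - 1.0)

forward = diode_current(0.3)    # forward bias: the barrier is easily crossed
reverse = diode_current(-0.3)   # reverse bias: current is blocked (about -i_s)
print(f"forward: {forward:.3e} A, reverse: {reverse:.3e} A")
```

Even at these small voltages, the forward current exceeds the blocked reverse current by several orders of magnitude, which is the asymmetry the passage describes.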
The Western Electric Type-2 point-contact transistor was the first transistor to be manufactured in large quantities, in 1951, at Western Electric’s plant in Allentown, Pa. By 1960, when this photo was taken, the plant had switched to producing junction transistors.AT&T ARCHIVES AND HISTORY CENTER
Now, let’s look at what happens down among the atoms. First, we’ll disconnect the collector and see what happens around the emitter without it. The emitter injects positive charges—holes—into the p-type layer, and they begin moving toward the base. But they don’t make a beeline toward it. The thin layer forces them to spread out laterally for some distance before passing through the barrier into the n-type slab. Think about slowly pouring a small amount of fine powder onto the surface of water. The powder eventually sinks, but first it spreads out in a rough circle.
Now we connect the collector. Even though it can’t draw current by itself through the barrier of the p-n junction, its large negative voltage and pointed shape do result in a concentrated electric field that penetrates the germanium. Because the collector is so close to the emitter, and is also negatively charged, it begins sucking up many of the holes that are spreading out from the emitter. This charge flow results in a concentration of holes near the p-n barrier underneath the collector. This concentration effectively lowers the “height” of the barrier that would otherwise prevent current from flowing between the collector and the base. With the barrier lowered, current starts flowing from the base into the collector—much more current than what the emitter is putting into the transistor.
The amount of current depends on the height of the barrier. Small decreases or increases in the emitter’s voltage cause the barrier to fluctuate up and down, respectively. Thus very small changes in the emitter current control very large changes at the collector, so voilà! Amplification. (EEs will notice that the functions of base and emitter are reversed compared with those in later transistors, where the base, not the emitter, controls the response of the transistor.)
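The leverage of this barrier modulation can be shown with a toy calculation. This is a sketch of the general principle, not a model of the actual 1947 device: we simply assume the output current is exponentially suppressed by the barrier height and that the input voltage shifts that barrier directly. All constants (the prefactor, the operating point, the swing) are illustrative assumptions.

```python
import math

V_T = 0.02585   # thermal voltage at room temperature, in volts
I_0 = 1e-6      # assumed current prefactor, in amps

def current_over_barrier(barrier_volts):
    """Current exponentially suppressed by a barrier of the given height."""
    return I_0 * math.exp(-barrier_volts / V_T)

# Swing the barrier by +/- 25 mV around an assumed 0.2 V operating point.
i_low = current_over_barrier(0.225)   # barrier raised: less current
i_high = current_over_barrier(0.175)  # barrier lowered: more current
ratio = i_high / i_low
print(f"a 50 mV input swing changes the output current {ratio:.1f}x")
```

Because the dependence is exponential, a 50-millivolt input swing changes the output current roughly sevenfold in this sketch, which is the essence of how a tiny input signal can command a much larger output.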
Ungainly and fragile though it was, it was a semiconductor amplifier, and its progeny would change the world. And its inventors knew it. The fateful day was 16 December 1947, when Brattain hit on the idea of using a plastic triangle belted by a strip of gold foil, with that tiny slit separating the emitter and collector contacts. This configuration gave reliable power gain, and the duo knew then that they had succeeded. In his carpool home that night, Brattain told his companions he’d just done “the most important experiment that I’d ever do in my life” and swore them to secrecy. The taciturn Bardeen, too, couldn’t resist sharing the news. As his wife, Jane, prepared dinner that night, he reportedly said, simply, “We discovered something today.” With their children scampering around the kitchen, she responded, “That’s nice, dear.”
It was a transistor, at last, but it was pretty rickety. The inventors later hit on the idea of electrically forming the collector by passing large currents through it during the transistor’s manufacturing. This technique enabled them to get somewhat larger current flows that weren’t so tightly confined within the surface layer. The electrical forming was a bit hit-or-miss, though. “They would just throw out the ones that didn’t work,” Misa notes.
The Bell Labs group wasn’t alone in its successful pursuit of a transistor. In Aulnay-sous-Bois, a suburb northeast of Paris, two German physicists, Herbert Mataré and Heinrich Welker, were also trying to build a three-terminal semiconductor amplifier. Working for a French subsidiary of Westinghouse, they were following up on very intriguing observations Mataré had made while developing germanium and silicon rectifiers for the German military in 1944. The two succeeded in creating a reliable point-contact transistor in June 1948.
They were astounded, a week or so later, when Bell Labs finally revealed the news of its own transistor, at a press conference on 30 June 1948. Though they were developed completely independently, and in secret, the two devices were more or less identical.
Here the story of the transistor takes a weird turn, breathtaking in its brilliance and also disturbing in its details. Bardeen’s and Brattain’s boss, William Shockley, was furious that his name was not included with Bardeen’s and Brattain’s on the original patent application for the transistor. He was convinced that Bardeen and Brattain had merely spun his theories about using fields in semiconductors into their working device, and had failed to give him sufficient credit. Yet in 1945, Shockley had built a transistor based on those very theories, and it hadn’t worked.
In 1953, RCA engineer Gerald Herzog led a team that designed and built the first “all-transistor” television (although, yes, it had a cathode-ray tube). The team used point-contact transistors produced by RCA under a license from Bell Labs. [Photo: Transistor Museum/Jerry Herzog Oral History]
At the end of December, barely two weeks after the initial success of the point-contact transistor, Shockley traveled to Chicago for the annual meeting of the American Physical Society. On New Year’s Eve, holed up in his hotel room and fueled by a potent mix of jealousy and indignation, he began designing a transistor of his own. In three days he scribbled some 30 pages of notes. By the end of the month, he had the basic design for what would become known as the bipolar junction transistor, or BJT, which would eventually supersede the point-contact transistor and reign as the dominant transistor until the late 1970s.
With insights gleaned from the Bell Labs work, RCA began developing its own point-contact transistors in 1948. The group included the seven shown here—four of which were used in RCA’s experimental, 22-transistor television set built in 1953. These four were the TA153 [top row, second from left], the TA165 [top, far right], the TA156 [bottom row, middle], and the TA172 [bottom, right]. [Photo: Transistor Museum/Jonathan Hoppe Collection]
The BJT was based on Shockley’s conviction that charges could, and should, flow through the bulk semiconductors rather than through a thin layer on their surface. The device consisted of three semiconductor layers, like a sandwich: an emitter, a base in the middle, and a collector. They were alternately doped, so there were two versions: n-type/p-type/n-type, called “NPN,” and p-type/n-type/p-type, called “PNP.”
The BJT relies on essentially the same principles as the point-contact, but it uses two p-n junctions instead of one. When used as an amplifier, a positive voltage applied to the base allows a small current to flow between it and the emitter, which in turn controls a large current between the collector and emitter.
Consider an NPN device. The base is p-type, so it has excess holes. But it is very thin and lightly doped, so there are relatively few holes. A tiny fraction of the electrons flowing in combine with these holes and are removed from circulation, while the vast majority (more than 97 percent) of the electrons keep flowing through the thin base and into the collector, setting up a strong current flow.
But those few electrons that do combine with holes must be drained from the base in order to maintain the p-type nature of the base and the strong flow of current through it. That removal of the “trapped” electrons is accomplished by a relatively small flow of current through the base. That trickle of current enables the much stronger flow of current into the collector, and then out of the collector and into the collector circuit. So, in effect, the small base current is controlling the larger collector circuit.
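The numbers in the passage above can be turned into a back-of-the-envelope gain figure using the standard BJT current relations. The "more than 97 percent" figure comes from the text; everything else is the textbook definition of the common-base gain alpha and the common-emitter gain beta.

```python
def bjt_gains(alpha):
    """Return (beta, base_fraction) for a given common-base gain alpha.

    alpha is the fraction of emitter current that reaches the collector;
    beta = alpha / (1 - alpha) is the standard common-emitter current gain.
    """
    beta = alpha / (1.0 - alpha)
    base_fraction = 1.0 - alpha   # share of emitter current drained via the base
    return beta, base_fraction

# Using the article's figure: more than 97 percent of electrons reach the collector.
beta, base_frac = bjt_gains(0.97)
print(f"beta ~ {beta:.0f}: the collector current is ~{beta:.0f}x the base current")
```

With alpha of 0.97, the 3 percent of emitter current that exits through the base controls a collector current about 32 times larger, which is the "trickle controls the torrent" behavior the paragraph describes.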
Electric fields come into play, but they do not modulate the current flow, which the early theoreticians thought would have to happen for such a device to function. Here’s the gist: Both of the p-n junctions in a BJT are straddled by depletion regions, in which electrons and holes combine and there are relatively few mobile charge carriers. Voltage applied across the junctions sets up electric fields at each, which push charges across those regions. These fields enable electrons to flow all the way from the emitter, across the base, and into the collector.
In the BJT, “the applied electric fields affect the carrier density, but because that effect is exponential, it only takes a little bit to create a lot of diffusion current,” explains Ioannis “John” Kymissis, chair of the department of electrical engineering at Columbia University.
The very first transistors were a type known as point contact, because they relied on metal contacts touching the surface of a semiconductor. They ramped up output current—labeled “Collector current” in the top diagram—by using an applied voltage to overcome a barrier to charge flow. Small changes to the input, or “emitter,” current modulate this barrier, thus controlling the output current.
The bipolar junction transistor accomplishes amplification using much the same principles but with two semiconductor interfaces, or junctions, rather than one. As with the point-contact transistor, an applied voltage overcomes a barrier and enables current flow that is modulated by a smaller input current. In particular, the semiconductor junctions are straddled by depletion regions, across which the charge carriers diffuse under the influence of an electric field. [Illustration: Chris Philpot]
The BJT was more rugged and reliable than the point-contact transistor, and those features primed it for greatness. But it took a while for that to become obvious. The BJT was the technology used to make integrated circuits, from the first ones in the early 1960s all the way until the late 1970s, when metal-oxide-semiconductor field-effect transistors (MOSFETs) took over. In fact, it was these field-effect transistors, first the junction field-effect transistor and then MOSFETs, that finally realized the decades-old dream of a three-terminal semiconductor device whose operation was based on the field effect—Shockley’s original ambition.
Such a glorious future could scarcely be imagined in the early 1950s, when AT&T and others were struggling to come up with practical and efficient ways to manufacture the new BJTs. Shockley himself went on to literally put the silicon into Silicon Valley. He moved to Palo Alto and in 1956 founded a company that led the switch from germanium to silicon as the electronic semiconductor of choice. Employees from his company would go on to found Fairchild Semiconductor, and then Intel.
Later in his life, after losing his company because of his terrible management, he became a professor at Stanford and began promulgating ungrounded and unhinged theories about race, genetics, and intelligence. In 1951 Bardeen left Bell Labs to become a professor at the University of Illinois at Urbana-Champaign, where he won a second Nobel Prize for physics, for a theory of superconductivity. (He is the only person to have won two Nobel Prizes in physics.) Brattain stayed at Bell Labs until 1967, when he joined the faculty at Whitman College, in Walla Walla, Wash.
Shockley died a largely friendless pariah in 1989. But his transistor would change the world, though it was still not clear as late as 1953 that the BJT would be the future. In an interview that year, Donald G. Fink, who would go on to help establish the IEEE a decade later, mused, “Is it a pimpled adolescent, now awkward, but promising future vigor? Or has it arrived at maturity, full of languor, surrounded by disappointments?”
It was the former, and all of our lives are so much the better because of it.
This article appears in the December 2022 print issue as “The First Transistor and How It Worked.”
Seventy-five years is a long time. It’s so long that most of us don’t remember a time before the transistor, and long enough for many engineers to have devoted entire careers to its use and development. In honor of this most important of technological achievements, this issue’s package of articles explores the transistor’s historical journey and potential future.
In “The First Transistor and How it Worked,” Glenn Zorpette dives deep into how the point-contact transistor came to be. Then, in “The Ultimate Transistor Timeline,” Stephen Cass lays out the device’s evolution, from the flurry of successors to the point-contact transistor to the complex devices in today’s laboratories that might one day go commercial. The transistor would never have become so useful and so ubiquitous if the semiconductor industry had not succeeded in making it small and cheap. We try to give you a sense of that scale in “The State of the Transistor.”
So what’s next in transistor technology? In less than 10 years’ time, transistors could take to the third dimension, stacked atop each other, write Marko Radosavljevic and Jack Kavalieros in “Taking Moore’s Law to New Heights.” And we asked experts what the transistor will be like on the 100th anniversary of its invention in “The Transistor of 2047.”
Meanwhile, IEEE’s celebration of the transistor’s 75th anniversary continues. The Electron Devices Society has been at it all year, writes Joanna Goodrich in The Institute, and has events planned into 2023 that you can get involved in. So go out and celebrate the device that made the modern world possible.
Match ID: 100 Score: 10.00 source: spectrum.ieee.org age: 0 days qualifiers: 10.00 development
From a bunker, an acting mayor keeps her front-line Ukraine town alive Tue, 29 Nov 2022 07:08:04 EST Russia has shelled city hall so incessantly that Svitlana Mandrych had to move her office underground, where she answers pleas for help from desperate residents. Match ID: 101 Score: 10.00 source: www.washingtonpost.com age: 0 days qualifiers: 10.00 amazon
What’s at Stake in the University of California Graduate-Worker Strike Tue, 29 Nov 2022 11:00:00 +0000 The seventy per cent of Americans who support unions should understand that the future of organized labor won’t be in coal mines or steel mills but in places that might cut against the stereotypes. Match ID: 102 Score: 10.00 source: www.newyorker.com age: 0 days qualifiers: 10.00 california
Threatened with jail for live-streaming traffic stop, he sued Tue, 29 Nov 2022 06:00:01 EST The U.S. Court of Appeals for the Fourth Circuit is debating whether streaming is different from recording, and whether passengers in cars can record at all. Match ID: 103 Score: 10.00 source: www.washingtonpost.com age: 0 days qualifiers: 10.00 amazon
Democrat set to succeed Nancy Pelosi maintains ties to Aipac and others but could be challenged by critics in his own caucus
Hakeem Jeffries might be about to make history but some critics fear that on one issue, at least, he will be on the wrong side of it.
The progressive New York congressman widely expected to lead the Democrats in the US House of Representatives will be the first person of color to head either party in the chamber. Jeffries’ election as House minority leader in the new Congress in January would also see the baton pass to a new generation of Democratic leaders as the speaker, Nancy Pelosi, 82, steps aside.
Continue reading... Match ID: 106 Score: 10.00 source: www.theguardian.com age: 0 days qualifiers: 10.00 apple
Reporter still haunted by Itaewon crowd crush, a tragedy close to home Tue, 29 Nov 2022 00:15:43 EST A Post reporter who lives in the Seoul district covered the tragedy that night and for days after. A month later, she reflects on the difficulty of moving on. Match ID: 109 Score: 10.00 source: www.washingtonpost.com age: 0 days qualifiers: 10.00 amazon
Miss Manners: Am I too nice to waitstaff? Tue, 29 Nov 2022 00:00:00 EST Reader is confused by their mother saying they’re too nice to waitstaff. Match ID: 111 Score: 10.00 source: www.washingtonpost.com age: 0 days qualifiers: 10.00 amazon
Carolyn Hax: Mom blames herself for lack of grandkids Tue, 29 Nov 2022 00:00:00 EST After following in mother's footsteps with disordered eating, a reader decides not to risk having kids — and Mom now blames herself. Match ID: 112 Score: 10.00 source: www.washingtonpost.com age: 0 days qualifiers: 10.00 amazon
Hans Magnus Enzensberger, German poet and intellectual, dies at 93 Mon, 28 Nov 2022 20:49:27 EST His unorthodox poems and essays made him one of postwar Germany’s leading authors. He also found a global audience with his children’s book “The Number Devil.” Match ID: 117 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
Three weeks after election, Arizona remains in turmoil over results Mon, 28 Nov 2022 19:55:55 EST Cochise County flouted a Monday deadline to certify the results, while officials in Maricopa County faced threats for following through on that duty. Match ID: 118 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
What good did the 2022 election do for Biden 2024? Mon, 28 Nov 2022 16:50:29 EST Democratic leaders might be more confident in him now. Democratic-leaning voters are another matter. Match ID: 122 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
9 root vegetable recipes worth rooting for Mon, 28 Nov 2022 13:00:53 EST Recipes for glazing, mashing, pan-frying and roasting your root vegetables this season. Match ID: 127 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
Shanquella Robinson reportedly died while on holiday after viral video shows her being beaten, apparently by an American woman
The US is weighing an extradition request from Mexico after authorities in the country charged an American woman with murdering another US woman shown being beaten while they vacationed in a viral video.
Prosecutors in the Mexican state of Baja California Sur have not named the suspect in the death of North Carolina’s Shanquella Robinson, who reportedly died of a severe spinal cord or neck injury while on holiday in Mexico on 29 October.
Continue reading... Match ID: 128 Score: 10.00 source: www.theguardian.com age: 1 day qualifiers: 10.00 california
The climate-friendly way to furnish your home Mon, 28 Nov 2022 11:26:20 EST How to keep old furniture out of landfills and choose new, sustainable pieces that will do minimal harm to the environment. Match ID: 130 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
Daily Cartoon: Monday, November 28th Mon, 28 Nov 2022 15:37:43 +0000 “Help me eat this turkey-mashed-potato-cranberry-sauce-apple-pie sandwich so we can move on with our lives.” Match ID: 131 Score: 10.00 source: www.newyorker.com age: 1 day qualifiers: 10.00 apple
5 cookie baking tips for better batches every time Mon, 28 Nov 2022 10:00:59 EST Be a better cookie baker with these simple but clever tips from a bevy of new books. Match ID: 132 Score: 10.00 source: www.washingtonpost.com age: 1 day qualifiers: 10.00 amazon
My Last Will and Testament, on VHS Mon, 28 Nov 2022 11:00:00 +0000 Shouts & Murmurs by Jay Katsir: It’s better than a digital Video Will that goes out as an e-mail blast, or a message on a WhatsApp group called Father Has Perished. Match ID: 133 Score: 10.00 source: www.newyorker.com age: 1 day qualifiers: 10.00 whatsapp
Apple Tracks You More Than You Think Sat, 26 Nov 2022 14:00:00 +0000 Plus: WikiLeaks’ website is falling apart, tax websites are sending your data to Facebook, and cops take down a big phone-number-spoofing operation. Match ID: 136 Score: 8.57 source: www.wired.com age: 3 days qualifiers: 8.57 apple
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND
Enjoy today’s videos!
Researchers at Carnegie Mellon University’s School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its height; traverse rocky, slippery, uneven, steep, and varied terrain; walk across gaps; scale rocks and curbs; and even operate in the dark.
This robot is designed as a preliminary platform for humanoid robot research. The platform will be further extended with soles as well as upper limbs. In this video, the current lower limb version of the platform shows its capability in traversing uneven terrains without an active or passive ankle joint. The underactuation nature of the robot system has been well addressed with our locomotion-control framework, which also provides a new perspective on the leg design of bipedal robots.
Inbiodroid is a startup “dedicated to the development of fully immersive telepresence technologies that create a deeper connection between people and their environment.” Hot off the ANA Avatar XPrize competition, they’re doing a Kickstarter to fund the next generation of telepresence robots.
A robot that can feel what a therapist feels when treating a patient, that can adjust the intensity of rehabilitation exercises at any time according to the patient's abilities and needs, and that can thus go on for hours without getting tired: It seems like fiction, and yet researchers from the Vrije Universiteit Brussel and Imec have now finished a prototype that unites all these skills in one robot.
Pickle robots unload trucks. This is a short overview of the Pickle Robot Unload System in action at the end of October 2022—autonomously picking floor-loaded freight to unload a trailer. As a robotic system built on AI and advanced sensors, the system gets better and faster all the time.
Learning agile skills can be challenging with reward shaping. Imitation learning provides an alternative solution by assuming access to decent expert references. However, such experts are not always available. We propose Wasserstein Adversarial Skill Imitation (WASABI), which acquires agile behaviors from partial and potentially physically incompatible demonstrations. In our work, Solo, a quadruped robot, learns highly dynamic skills (for example, backflips) from only handheld human demonstrations.
NASA and the European Space Agency are developing plans for one of the most ambitious campaigns ever attempted in space: bringing the first samples of Mars material safely back to Earth for detailed study. The diverse set of scientifically curated samples now being collected by NASA’s Mars Perseverance rover could help scientists answer the question of whether ancient life ever arose on the Red Planet.
The Canadian Space Agency plans to send a rover to the moon as early as 2026 to explore a polar region. The mission will demonstrate key technologies and accomplish meaningful science. Its objectives are to gather imagery, measurements, and data on the surface of the moon, as well as to have the rover survive an entire night on the moon. Lunar nights, which last about 14 Earth days, are extremely cold and dark, posing a significant technological challenge.
Covariant Robotic Induction automates previously manual induction processes. This video shows the Covariant Robotic Induction solution picking a wide range of item types from totes, scanning bar codes, and inducting items onto a unit sorter. Note the robot’s ability to effectively handle items that are traditionally difficult to pick, such as transparent polybagged apparel and small, oddly shaped health and beauty items, and place them precisely onto individual trays.
The solution will integrate Boston Dynamics’ Spot robot; the ExynPak, powered by ExynAI; and the Trimble X7 total station. It will enable fully autonomous missions inside complex and dynamic construction environments, which can result in consistent and precise reality capture for production and quality-control workflows.
Our most advanced programmable robot yet is back and better than ever. Sphero RVR+ includes an advanced gearbox to improve torque and payload capacity; enhanced sensors, including an improved color sensor; and an improved rechargeable and swappable battery.
Complexity, cost, and power requirements for the actuation of individual robots can play a large factor in limiting the size of robotic swarms. Here we present PCBot, a minimalist robot that can precisely move on an orbital shake table using a bi-stable solenoid actuator built directly into its PCB. This allows the actuator to be built as part of the automated PCB manufacturing process, greatly reducing the impact it has on manual assembly.
Drone-racing world champion Thomas Bitmatta designed an indoor drone-racing track for ETH Zurich’s autonomous high-speed racing drones, and in something like half an hour, the autonomous drones were able to master the track at superhuman speeds (with the aid of a motion-capture system).
Moravec’s paradox is the observation that many things that are difficult for robots to do come easily to humans, and vice versa. Stanford University professor Chelsea Finn has been tasked to explain this concept to 5 different people: a child, a teen, a college student, a grad student, and an expert.
AI advancements have been motivated and inspired by human intelligence for decades. How can we use AI to expand our knowledge and understanding of the world and ourselves? How can we leverage AI to enrich our lives? In his Tanner Lecture, Eric Horvitz, chief science officer at Microsoft, will explore these questions and more, tracing the arc of intelligence from its origins and evolution in humans to its manifestations and prospects in the tools we create and use.
It was a great idea for its time—a network of NASA communications satellites high in geostationary orbit, providing nearly continuous radio contact between controllers on the ground and some of the agency’s highest-profile missions: the space shuttles, the International Space Station, the Hubble Space Telescope, and dozens of others.
The satellites were called TDRS—short for Tracking and Data Relay Satellite—and the first was launched in 1983 on the maiden voyage of the space shuttle Challenger. Twelve more would follow, quietly providing a backbone for NASA’s orbital operations. But they’ve gotten old, they’re expensive, and in the 40 years since they began, they’ve been outpaced by commercial satellite networks.
So what comes next? That’s the 278-million-dollar question—but, importantly, it’s not a multibillion-dollar question.
“Now it’ll be just plug and play. They can concentrate on the mission, and they don’t have to worry about comms, because we provide that for them.” —Craig Miller, Viasat
NASA, following its mantra to get out of the business of routine space operations, has now awarded US $278.5 million in contracts to six companies: Amazon’s Project Kuiper, Inmarsat Government, SES Government Solutions, SpaceX, Telesat, and Viasat. The agency is asking them to offer services that are reliable, adaptable for all sorts of missions, easy for NASA to use, and—ideally—orders of magnitude less expensive than TDRS.
“It’s an ambitious wish list,” says Eli Naffah, communications services project manager at NASA’s Glenn Research Center, in Cleveland. “We’re looking to have industry tell us, based on their capabilities and their business interests, what they would like to provide to us as a service that they would provide to others broadly.”
Inmarsat now operates a number of geostationary satellites in their GX fleet. The projected GX7 satellite [left] is expected to launch in 2023. Inmarsat Government
Satellite communication is one area that has taken off as a business proposition, independent of NASA’s space efforts. Internet and television transmission, GPS, phone service—all of these have become giant enterprises, ubiquitous in people’s lives. Economy of scale and competition have brought prices down dramatically. (That’s very different from, say, space tourism, which attracts a lot of attention but for now is still something that only the very wealthy can afford.)
NASA benefits, in the case of communications, from being a relatively small player, especially if it can get out from under the costs of running something like the TDRS system. The commercial satellite companies take over those costs—which, they say, is fine, since they were spending the money anyway.
“We love having customers like NASA,” says Craig Miller, president for government systems at Viasat. “They’re a joy to work with, their mission is in alignment with a lot of our core values, but we make billions of dollars a year selling Internet to other sources.”
Each of the six companies under the new NASA contract takes a different approach. Inmarsat, SES, and Viasat, for instance, would use large relay satellites, like TDRS, each seeming to hover over a fixed spot on Earth’s equator because, at an altitude of 35,786 kilometers, one orbit takes exactly as long as one rotation of Earth. Amazon and SpaceX, by contrast, would use swarms of smaller satellites in low Earth orbit, only a few hundred kilometers in altitude. (SpaceX, at last count, had launched more than 2,200 of its Starlink satellites.) SES and Telesat would offer two-for-one packages, with service from both high and lower orbits. As for radio frequencies, the companies might use C band, Ka band, L band, optical—whatever their existing clients have needed. And so on.
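The geostationary altitude quoted above follows from Kepler's third law. As a quick sanity check (standard orbital-mechanics arithmetic using common reference constants, not anything taken from the article):

```python
import math

# Standard reference values (not from the article).
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY_S = 86164.1    # time for one rotation of Earth, seconds
EARTH_RADIUS_KM = 6378.1    # equatorial radius, km

# Kepler's third law: T^2 = 4*pi^2 * a^3 / mu  =>  a = (mu*T^2 / (4*pi^2))^(1/3)
a_m = (MU_EARTH * SIDEREAL_DAY_S**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = a_m / 1000 - EARTH_RADIUS_KM

print(round(altitude_km))  # ~35786, matching the altitude quoted above
```

An orbit whose period matches Earth's rotation comes out to a radius of about 42,164 km; subtracting Earth's radius gives the familiar 35,786 km altitude.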
Sixty SpaceX Starlink satellites wait for deployment from their launch rocket in low Earth orbit, in this photograph from 2019. SpaceX
It may sound like an alphabet soup of ways to solve one basic need—being in contact with its satellites—but engineers say that’s a minor trade-off for NASA if it can piggyback on others’ communications networks. “This allows NASA and our other government users to achieve their missions without the upfront capital expenditure and the full life-cycle cost” of running the TDRS system, said Britt Lewis, a senior vice president of Inmarsat Government, in an email to IEEE Spectrum.
One major advantage to the space agency would be the sheer volume of service available to it. In years past, the TDRS system could handle only so many transmissions at a time; if a particular mission needed to send a large volume of data, it had to book time in advance.
“Now it’ll be just plug and play,” says Miller at Viasat. “They can concentrate on the mission, and they don’t have to worry about comms, because we provide that for them.”
NASA says it expects each company will complete technology development and in-space demonstrations by 2025, with the most successful starting to take over operations for the agency by 2030. There will probably be no single winner: “We’re not really looking to have any one particular company be able to provide all the services on our list,” says NASA’s Naffah.
NASA's TDRS-M communications satellite launched in 2017. NASA
The TDRS satellites have proved durable; TDRS-3, launched by the space shuttle Discovery in 1988, is still usable as a spare if newer satellites break down. NASA says it will probably continue to use the system into the 2030s, but it planned no more launches after the last (of TDRS-13 a.k.a. TDRS-M) in 2017.
If everything works out, says Amazon in an email, “This model would allow organizations like NASA to rely on commercial operators for near-Earth communications while shifting their focus to more ambitious operations, like solving technical challenges for deep space exploration and science missions.”
At which point the sky's the limit. NASA focuses on the moon, Mars, and other exploration, while it buys routine services from the private sector.
“We can provide the same kind of broadband capabilities that you’re used to having on Earth,” says Viasat’s Miller. He smiles at this thought. “We can provide Netflix to the ISS.”
Match ID: 139 Score: 8.57 source: spectrum.ieee.org age: 200 days qualifiers: 3.57 trade, 2.14 musk, 1.43 development, 1.43 amazon
Are you looking for a new graphic design tool? Would you like to read a detailed review of Canva? It's one of the tools I love using. I'm also writing my first ebook in Canva, which I'll publish soon on my site, where you can download it for free. Let's start the review.
Canva is a free graphic design web application that allows you to create invitations, business cards, flyers, lesson plans, banners, and more using professionally designed templates. You can upload your own photos from your computer or from Google Drive, and add them to Canva's templates using a simple drag-and-drop interface. It's like having a basic version of Photoshop that doesn't require graphic design knowledge to use. It's best for non-designers.
Who is Canva best suited for?
Canva is a great tool for small business owners, online entrepreneurs, and marketers who don’t have the time and want to edit quickly.
To create sophisticated graphics, a tool such as Photoshop is ideal. To use it, you'll need to learn its hundreds of features and get familiar with the software, and it's best to have a good background in design, too.
You'll also need a high-end computer to run the latest version of Photoshop.
This is where Canva comes in: it lets you do all of that with a drag-and-drop interface. It's easier to use, and it's free. An affordable paid version is also available for $12.99 per month.
Free vs Pro vs Enterprise Pricing plan
The product is available in three plans: Free, Pro ($12.99/month per user or $119.99/year for up to 5 people), and Enterprise ($30 per user per month, minimum 25 people).
Free Plan Features
250,000+ free templates
100+ design types (social media posts, presentations, letters, and more)
100+ million premium and stock photos, videos, audio, and graphics
Pro Plan Features
610,000+ premium and free templates with new designs daily
Access to Background Remover and Magic Resize
Create a library of your brand or campaign's colors, logos, and fonts with up to 100 Brand Kits
Remove image backgrounds instantly with background remover
Resize designs infinitely with Magic Resize
Save designs as templates for your team to use
100GB of cloud storage
Schedule social media content to 8 platforms
Enterprise Plan Features
Everything in Pro, plus:
Establish your brand's visual identity with logos, colors and fonts across multiple Brand Kits
Control your team's access to apps, graphics, logos, colors and fonts with brand controls
Built-in workflows to get approval on your designs
Set which elements your team can edit and stay on brand with template locking
Log in with single-sign on (SSO) and have access to 24/7 Enterprise-level support.
How to Use Canva?
To get started on Canva, you will need to create an account by providing your email address, Google, Facebook or Apple credentials. You will then choose your account type between student, teacher, small business, large company, non-profit, or personal. Based on your choice of account type, templates will be recommended to you.
You can sign up for a free trial of Canva Pro, or you can start with the free version to get a sense of whether it’s the right graphic design tool for your needs.
When you sign up for an account, Canva will suggest different post types to choose from. Based on the type of account you set up you'll be able to see templates categorized by the following categories: social media posts, documents, presentations, marketing, events, ads, launch your business, build your online brand, etc.
Start by choosing a template for your post or searching for something more specific. Search by social network name to see a list of post types on each network.
Next, you can choose a template. Choose from hundreds of templates that are ready to go, with customizable photos, text, and other elements.
You can start your design by choosing from a variety of ready-made templates, searching for a template matching your needs, or working with a blank template.
Canva has a lot to choose from, so start with a specific search. If you want to create a business card, just search for it and you'll see a lot of templates to choose from.
Inside the Canva designer, the Elements tab gives you access to lines and shapes, graphics, photos, videos, audio, charts, photo frames, and photo grids. The search box on the Elements tab lets you search everything on Canva.
To begin with, Canva has a large library of elements to choose from. To find them, be specific in your search query. You may also want to search in the following tabs to see various elements separately:
The Photos tab lets you search for and choose from millions of professional stock photos for your templates.
You can replace the photos in our templates to create a new look. This can also make the template more suited to your industry.
You can also pull in photos from other stock photography sites like Pexels and Pixabay, or simply upload your own.
When you choose an image, Canva’s photo editing features let you adjust the photo’s settings (brightness, contrast, saturation, etc.), crop, or animate it.
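Under the hood, brightness and contrast sliders like these boil down to simple per-pixel arithmetic. Here is a toy pure-Python sketch of the general technique (an illustration only, not Canva's actual implementation):

```python
# Toy model of brightness/contrast sliders: scale each pixel value about
# mid-gray (contrast), add an offset (brightness), and clamp to 0-255.

def adjust_pixel(value, brightness=0, contrast=1.0):
    """Apply contrast about the mid-gray point, then a brightness offset,
    clamping the result to the valid 0-255 range."""
    adjusted = contrast * (value - 128) + 128 + brightness
    return max(0, min(255, round(adjusted)))

def adjust_image(pixels, brightness=0, contrast=1.0):
    """Apply the same adjustment to every pixel in a flat grayscale list."""
    return [adjust_pixel(p, brightness, contrast) for p in pixels]

row = [0, 64, 128, 192, 255]
print(adjust_image(row, brightness=20, contrast=1.2))
# -> [0, 71, 148, 225, 255]
```

Real editors apply this per color channel (and usually on the GPU), but the shape of the operation is the same.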
When you subscribe to Canva Pro, you get access to a number of premium features, including the Background Remover. This feature allows you to remove the background from any stock photo in the library or any image you upload.
The Text tab lets you add headings, normal text, and graphical text to your design.
When you click on text, you'll see options to adjust the font, font size, color, format, spacing, and text effects (like shadows).
Canva Pro subscribers can choose from a large library of fonts on the Brand Kit or the Styles tab. Enterprise-level controls ensure that visual content remains on-brand, no matter how many people are working on it.
Create an animated image or video by adding audio to capture users' attention in social news feeds.
If you want to use audio from another stock site or your own audio tracks, you can upload them in the Uploads tab or from the more option.
Want to create your own videos? Choose from thousands of stock video clips. You'll find videos that run up to 2 minutes.
You can upload your own videos as well as videos from other stock sites in the Uploads tab.
Once you have chosen a video, you can use the editing features in Canva to trim the video, flip it, and adjust its transparency.
On the Background tab, you’ll find free stock photos to serve as backgrounds on your designs. Change out the background on a template to give it a more personal touch.
The Styles tab lets you quickly change the look and feel of your template with just a click. And if you have a Canva Pro subscription, you can upload your brand’s custom colors and fonts to ensure designs stay on brand.
If you have a Canva Pro subscription, you’ll have a Logos tab. Here, you can upload variations of your brand logo to use throughout your designs.
With Canva, you can also create your own logos. Note that you cannot trademark a logo that contains stock content.
Publishing with Canva
With Canva, free users can download and share designs to multiple platforms including Instagram, Facebook, Twitter, LinkedIn, Pinterest, Slack and Tumblr.
Canva Pro subscribers can create multiple post formats from one design. For example, you can start by designing an Instagram post, and Canva's Magic Resizer can resize it for other networks, Stories, Reels, and other formats.
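A resize feature of this kind conceptually scales the design to cover the target canvas and center-crops whatever overflows. A minimal sketch of that idea, with illustrative target dimensions (these are examples, not Canva's actual presets):

```python
# Hypothetical target canvas sizes, for illustration only.
TARGETS = {
    "instagram_post": (1080, 1080),
    "instagram_story": (1080, 1920),
    "twitter_post": (1600, 900),
}

def cover_and_crop(src_w, src_h, dst_w, dst_h):
    """Return (scale, crop_x, crop_y): the uniform scale factor that makes
    the source fully cover the target, and the top-left offset of the
    centered crop in scaled coordinates."""
    scale = max(dst_w / src_w, dst_h / src_h)
    crop_x = (src_w * scale - dst_w) / 2
    crop_y = (src_h * scale - dst_h) / 2
    return scale, crop_x, crop_y

# Re-target a square 1080x1080 design to every format.
for name, (tw, th) in TARGETS.items():
    scale, cx, cy = cover_and_crop(1080, 1080, tw, th)
    print(f"{name}: scale {scale:.3f}, crop offset ({cx:.0f}, {cy:.0f})")
```

A production tool would also reflow text and reposition layout elements rather than just cropping, which is where most of the real work lies.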
Canva Pro subscribers can also use Canva’s Content Planner to post content on eight different accounts on Instagram, Facebook, Twitter, LinkedIn, Pinterest, Slack, and Tumblr.
Canva Pro allows you to work with your team on visual content. Designs can be created inside Canva, and then sent to your team members for approval. Everyone can make comments, edits, revisions, and keep track via the version history.
When it comes to printing your designs, Canva has you covered. With an extensive selection of printing options, they can turn your designs into anything from banners and wall art to mugs and t-shirts.
Canva Print is perfect for any business seeking to make a lasting impression. Create inspiring designs people will want to wear, keep, and share. Hand out custom business cards that leave a lasting impression on customers' minds.
The Canva app is available on the Apple App Store and Google Play. It has earned a 4.9 out of 5 star rating from over 946.3K Apple users and a 4.5 out of 5 star rating from over 6,996,708 Google users.
In addition to mobile apps, you can use Canva’s integration with other Internet services to add images and text from sources like Google Maps, Emojis, photos from Google Drive and Dropbox, YouTube videos, Flickr photos, Bitmojis, and other popular visual content elements.
Canva Pros and Cons
Pros
A user-friendly interface
Canva is a great tool for people who want to create professional graphics but don’t have graphic design skills.
Hundreds of templates, so you'll never have to start from scratch.
Wide variety of templates to fit multiple uses
Branding kits to keep your team consistent with the brand colors and fonts
Creating visual content on the go
You can find royalty free images, audio, and video without having to subscribe to another service.
Cons
Some professional templates are available for Pro users only
Advanced photo editing features like blurring or erasing a specific area are missing.
Some elements that fall outside of a design are tricky to retrieve.
Features (like Canva presentations) could use some improvement.
If you are a regular user of Adobe products, you might find Canva's features limited.
Not well suited to those who prefer working with vectors, especially for logos.
Expensive enterprise pricing
In general, Canva is an excellent tool for those who need simple images for projects. If you are an experienced graphic designer, you will find Canva's platform lacking in customization and advanced features – particularly vectors. But if you have little design experience, you will find Canva easier to use than advanced graphic design tools like Adobe Photoshop or Illustrator for most projects. If you have any queries, let me know in the comments section.
Match ID: 140 Score: 8.57 source: www.crunchhype.com age: 282 days qualifiers: 3.57 trade, 3.57 google, 1.43 apple
The pastoralists had been detained over the death of a police officer during protests against government plans to evict them from ancestral land
Prosecutors in Tanzania have dropped murder charges against 24 Maasai pastoralists who were detained over the death of a police officer earlier this year.
The officer died in June during protests against government plans to evict them from their ancestral land in Loliondo, in Ngorongoro District, to make way for a conservation and a luxury hunting reserve.
Continue reading... Match ID: 142 Score: 7.14 source: www.theguardian.com age: 4 days qualifiers: 7.14 development
NASA’s Artemis I mission launched early in the predawn hours this morning, at 1:04 a.m. eastern time, carrying with it the hopes of a space program aiming now to land American astronauts back on the moon. The Orion spacecraft now on its way to the moon also carries with it a lot of CubeSat-size science. (As of press time, some satellites have even begun to tweet.)
And while the objective of Artemis I is to show that the launch system and spacecraft can make a trip to the moon and return safely to Earth, the mission is also a unique opportunity to send a whole spacecraft-load of science into deep space. In addition to the interior of the Orion capsule itself, there are enough nooks and crannies to handle a fair number of CubeSats, and NASA has packed as many experiments as it can into the mission. From radiation phantoms to solar sails to algae to a lunar surface payload, Artemis I has a lot going on.
Most of the variety of the science on Artemis I comes in the form of CubeSats, little satellites that are each the size of a large shoebox. The CubeSats are tucked snugly into berths inside the Orion stage adapter, which is the bit that connects the interim cryogenic propulsion stage to the ESA service module and Orion. Once the propulsion stage lifts Orion out of Earth orbit and pushes it toward the moon, the stage and adapter will separate from Orion, and the CubeSats will launch themselves.
Ten CubeSats rest inside the Orion stage adapter at NASA’s Kennedy Space Center. NASA KSC
While the CubeSats look identical when packed up, each one is totally unique in both hardware and software, with different destinations and mission objectives. There are 10 in total (three weren’t ready in time for launch, which is why there are a couple of empty slots in the image above).
Here is what each one is and does:
While the CubeSats head off to do their own thing, inside the Orion capsule itself will be the temporary home of a trio of mannequins. The first, a male-bodied version provided by NASA, is named Commander Moonikin Campos, after NASA electrical engineer Arturo Campos, who was the guy who wrote the procedures that allowed the Apollo 13 command module to steal power from the lunar module’s batteries, one of many actions that saved the Apollo 13 crew.
Moonikin Campos prepares for placement in the Orion capsule. NASA
Moonikin Campos will spend the mission in the Orion commander’s seat, wearing an Orion crew survival system suit. Essentially itself a spacecraft, the suit is able to sustain its occupant for up to six days if necessary. Moonikin Campos’s job will be to pretend to be an astronaut, and sensors inside him will measure radiation, acceleration, and vibration to help NASA prepare to launch human astronauts in the next Artemis mission.
Helga and Zohar in place on the flight deck of the Orion spacecraft. NASA/DLR
Accompanying Moonikin Campos are two female-bodied mannequins, named Helga and Zohar, developed by the German Aerospace Center (DLR) along with the Israel Space Agency. These are more accurately called “anthropomorphic phantoms,” and their job is to provide a detailed recording of the radiation environment inside the capsule over the course of the mission. The phantoms are female because women have more radiation-sensitive tissue than men. Both Helga and Zohar have over 6,000 tiny radiation detectors placed throughout their artificial bodies, but Zohar will be wearing an AstroRad radiation protection vest to measure how effective it is.
NASA’s Biology Experiment-1 is transferred to the Orion team. NASA/KSC
The final science experiment to fly onboard Orion is NASA’s Biology Experiment-1. The experiment is really just seeing what time in deep space does to some specific kinds of biology, so all that has to happen is for Orion to successfully haul some packages of sample tubes around the moon and back. Samples include:
Plant seeds to characterize how spaceflight affects nutrient stores
Photosynthetic algae to identify genes that contribute to its survival in deep space
Aspergillus fungus to investigate radioprotective effects of melanin and DNA damage response
Yeast used as a model organism to identify genes that enable adaptations to conditions in both low Earth orbit and deep space
There is some concern that because of the extensive delays with the Artemis launch, the CubeSats have been sitting so long that their batteries may have run down. Some of the CubeSats could be recharged, but for others, recharging was judged to be so risky that they were left alone. Even for CubeSats that don’t start right up, though, it’s possible that after deployment, their solar panels will be able to get them going. But at this point, there’s still a lot of uncertainty, and the CubeSats’ earthbound science teams are now pinning their hopes on everything going well after launch.
For the rest of the science payloads, success mostly means Orion returning to Earth safe and sound, which will also be a success for the Artemis I mission as a whole. And assuming it does so, there will be a lot more science to come.
Match ID: 143 Score: 7.14 source: spectrum.ieee.org age: 13 days qualifiers: 7.14 genes
Non-fungible tokens (NFTs) are the most popular digital assets today, capturing the attention of cryptocurrency investors, whales, and people from around the world. People find it amazing that some users spend thousands or millions of dollars on a single NFT-based image of a monkey or other token when anyone can simply take a screenshot for free. So here we share some frequently asked questions about NFTs.
1) What is an NFT?
NFT stands for non-fungible token: a cryptographic token on a blockchain with unique identification codes that distinguish it from other tokens. NFTs are unique and not interchangeable, which means no two NFTs are the same. An NFT can be a unique artwork, GIF, image, video, audio album, in-game item, collectible, and so on.
2) What is Blockchain?
A blockchain is a distributed digital ledger that allows for the secure storage of data. By recording any kind of information—such as bank account transactions, the ownership of Non-Fungible Tokens (NFTs), or Decentralized Finance (DeFi) smart contracts—in one place, and distributing it to many different computers, blockchains ensure that data can’t be manipulated without everyone in the system being aware.
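The "can't be manipulated" property comes from chaining cryptographic hashes: each block commits to the hash of the one before it. A toy sketch in Python (illustrative only; real blockchains add consensus protocols, digital signatures, and replication across many nodes):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents, including the previous block's hash.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: dict) -> None:
    # Link each new block to the hash of the previous one.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)

def verify(chain: list) -> bool:
    # Any edit to an earlier block breaks every later link.
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != block_hash({"data": block["data"], "prev_hash": prev}):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"owner": "alice", "token": "nft-1"})
append_block(chain, {"owner": "bob", "token": "nft-1"})
print(verify(chain))                   # True
chain[0]["data"]["owner"] = "mallory"  # tamper with history
print(verify(chain))                   # False
```

Because every participant can recompute the hashes, a tampered copy of the ledger is immediately detectable.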
3) What makes an NFT valuable?
The value of an NFT comes from its ability to be traded freely and securely on the blockchain, which is not possible with other current digital ownership solutions. The NFT points to its location on the blockchain, but doesn’t necessarily contain the digital property. A bitcoin is fungible: if you replace one bitcoin with another, you still have the same thing. A non-fungible item, such as a movie ticket, is impossible to replace with any other movie ticket, because each ticket is unique to a specific time and place.
4) How do NFTs work?
One of the unique characteristics of non-fungible tokens (NFTs) is that they can be tokenised to create a digital certificate of ownership that can be bought, sold and traded on the blockchain.
As with crypto-currency, records of who owns what are stored on a ledger that is maintained by thousands of computers around the world. These records can’t be forged because the whole system operates on an open-source network.
NFTs also contain smart contracts—small computer programs that run on the blockchain—that give the artist, for example, a cut of any future sale of the token.
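Real royalty logic runs on-chain in a contract language such as Solidity, but the payout split a royalty clause encodes can be modeled in a few lines of Python. This is purely illustrative; the 10 percent rate and all names are made up:

```python
# Toy model of the royalty split a smart contract encodes on resale.
# (Illustrative only; a real NFT contract executes on-chain.)
ROYALTY_RATE = 0.10  # assumed rate the artist set at minting time

def settle_sale(price: float, seller: str, artist: str) -> dict:
    # On every resale, the contract pays the artist a cut
    # before forwarding the remainder to the seller.
    royalty = price * ROYALTY_RATE
    return {artist: royalty, seller: price - royalty}

payouts = settle_sale(2000.0, seller="collector", artist="creator")
print(payouts)  # {'creator': 200.0, 'collector': 1800.0}
```

The point of putting this on-chain is that the split happens automatically on every future sale, without the artist having to trust the marketplace.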
5) What’s the connection between NFTs and cryptocurrency?
Non-fungible tokens (NFTs) aren't cryptocurrencies, but they do use blockchain technology. Many NFTs are based on Ethereum, where the blockchain serves as a ledger for all the transactions related to a given NFT and the properties it represents.
6) How do you make an NFT?
Anyone can create an NFT. All you need is a digital wallet, some Ethereum tokens, and a connection to an NFT marketplace where you’ll be able to upload and sell your creations.
7) How do you validate the authenticity of an NFT?
When you purchase an NFT, that purchase is recorded on the blockchain—the ledger of transactions—and that entry acts as your proof of ownership.
8) How is an NFT valued? What are the most expensive NFTs?
The value of an NFT varies a lot based on the digital asset up for grabs. People use NFTs to trade and sell digital art, so when creating an NFT, you should consider the popularity of your digital artwork along with historical statistics.
In 2021, a digital artist called Pak created an artwork called The Merge. It was sold on the Nifty Gateway NFT marketplace for $91.8 million.
9) Can NFTs be used as an investment?
Non-fungible tokens can be used as investment opportunities. One can purchase an NFT and resell it at a profit. Certain NFT marketplaces let sellers of NFTs keep a percentage of the profits from sales of the assets they create.
10) Will NFTs be the future of art and collectibles?
Many people want to buy NFTs because they let buyers support the arts and own something cool from their favorite musicians, brands, and celebrities. NFTs also give artists an opportunity to program in continual royalties if someone buys their work. Galleries see this as a way to reach new buyers interested in art.
11) How do you buy an NFT?
There are many places to buy digital assets, like OpenSea, and their policies vary. On Top Shot, for instance, you sign up for a waitlist that can be thousands of people long. When a digital asset goes on sale, you are occasionally chosen to purchase it.
12) Can I mint an NFT for free?
To mint an NFT, you must pay a gas fee to process the transaction on the Ethereum blockchain, but you can mint your NFT on a different blockchain called Polygon to avoid paying gas fees. This option is available on OpenSea, and it simply means that your NFT will only be tradable on Polygon's blockchain, not Ethereum's. Mintable also allows you to mint NFTs for free without paying any gas fees.
13) Do I own an NFT if I screenshot it?
No. Non-fungible tokens are minted on the blockchain using cryptocurrencies such as Ethereum, Solana, and Polygon. Once an NFT is minted, the transaction is recorded on the blockchain, and the contract or license is awarded to whoever holds that NFT in their wallet.
14) Why are people investing so much in NFTs?
Non-fungible tokens have won the hearts of people around the world, and they have given digital creators the recognition they deserve. One of the remarkable things about non-fungible tokens is that you can take a screenshot of one, but you don’t own it. When a non-fungible token is created, the transaction is stored on the blockchain, and the license or contract to hold that token is awarded to the person holding it in their digital wallet.
You can sell your work and creations by attaching a license to them on the blockchain, where ownership can be transferred. This lets you get exposure without losing full ownership of your work. Some of the most successful projects include CryptoPunks, the Bored Ape Yacht Club, The Sandbox, World of Women, and so on. These NFT projects have gained popularity globally and are owned by celebrities and other successful entrepreneurs. Owning one of these NFTs gives you an automatic ticket to exclusive business meetings and life-changing connections.
That’s a wrap. I hope you found this article enlightening. I’ve answered these questions to the best of my limited knowledge of NFTs. If you have any questions or suggestions, feel free to drop them in the comment section below. I also have a question for you: is bitcoin an NFT? Let me know in the comment section below.
Match ID: 144 Score: 7.14 source: www.crunchhype.com age: 296 days qualifiers: 3.57 trade, 3.57 google
Computer code developed by a company called Pushwoosh is in about 8,000 Apple and Google smartphone apps. The company pretends to be American when it is actually Russian.
According to company documents publicly filed in Russia and reviewed by Reuters, Pushwoosh is headquartered in the Siberian town of Novosibirsk, where it is registered as a software company that also carries out data processing. It employs around 40 people and reported revenue of 143,270,000 rubles ($2.4 mln) last year. Pushwoosh is registered with the Russian government to pay taxes in Russia...
Match ID: 146 Score: 6.43 source: www.schneier.com age: 13 days qualifiers: 3.57 google, 1.43 california, 1.43 apple
The marketing industry is turning to artificial intelligence (AI) as a way to save time and execute smarter, more personalized campaigns. 61% of marketers say AI software is the most important aspect of their data strategy.
If you’re late to the AI party, don’t worry. It’s easier than you think to start leveraging artificial intelligence tools in your marketing strategy. Here are 11 AI marketing tools every marketer should start using today.
1. Jasper
Jasper is a content writing and content generation tool that uses artificial intelligence to identify the best words and sentences for your writing style and medium in the most efficient, quick, and accessible way.
It's trusted by 50,000+ marketers for creating engaging marketing campaigns, ad copy, blog posts, and articles within minutes, work that would traditionally take hours or days. Special features:
SEO-optimized blog posts that rank high on Google and other search engines. This is a huge plus for online businesses that want to generate traffic to their websites through content marketing.
99.9% original content, with a guarantee that all generated content will be original, so businesses can focus on their online reputation rather than worrying about penalties from Google for duplicate content.
Long-Form Article Writing – Jasper.ai is also useful for long-form writing, allowing users to create articles of up to 10,000 words without any difficulty. This is ideal for businesses that want to produce in-depth content that will capture their audience’s attention.
Wait! I've got a pretty sweet deal for you. Sign up through the link below, and you'll get 10K free credits.
2. Personalize
Personalize is an AI-powered technology that helps you identify and produce highly targeted sales and marketing campaigns by tracking the products and services your contacts are most interested in at any given time. The platform uses an algorithm to identify each contact’s top three interests, which are updated in real-time based on recent site activity.
Identifies top three interests based on metrics like time on page, recency, and frequency of each contact
Works with every ESP and CRM
Easy to get up and running in days
Enterprise-grade technology at a low cost for SMBs
3. Seventh Sense
Seventh Sense provides behavioral analytics that helps you win attention in your customers’ overcrowded email inboxes. Choosing the best day and time to send an email is always a gamble. And while some days of the week generally get higher open rates than others, you’ll never be able to nail down a time that’s best for every customer. Seventh Sense eases your stress of having to figure out the perfect send-time and day for your email campaigns. The AI-based platform figures out the best timing and email frequency for each contact based on when they’re opening emails. The tool is primarily geared toward HubSpot and Marketo customers
AI determines the best send-time and email frequency for each contact
Connects with HubSpot and Marketo
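Seventh Sense's actual model is proprietary, but the core idea of per-contact send-time optimization can be sketched very simply: look at when each contact has historically opened email and pick the most common hour. A minimal Python sketch with hypothetical open data:

```python
from collections import Counter
from datetime import datetime

def best_send_hour(open_times: list) -> int:
    # Pick the hour of day at which this contact most often opens email.
    hours = Counter(t.hour for t in open_times)
    return hours.most_common(1)[0][0]

# Hypothetical open timestamps for one contact.
opens = [
    datetime(2022, 11, 1, 8, 12),
    datetime(2022, 11, 3, 8, 47),
    datetime(2022, 11, 7, 20, 5),
    datetime(2022, 11, 9, 8, 30),
]
print(best_send_hour(opens))  # 8
```

A production system would also weight recency, model frequency preferences, and handle contacts with no open history, but the per-contact framing is the key difference from picking one global "best day to send."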
4. Phrasee
Phrasee uses artificial intelligence to help you write more effective subject lines. With its AI-based Natural Language Generation system, Phrasee uses data-driven insights to generate millions of natural-sounding copy variants that match your brand voice. The model is end-to-end, meaning when you feed the results back to Phrasee, the prediction model rebuilds so it can continuously learn from your audience.
Instantly generates millions of human-sounding, brand-compliant copy variants
Creates tailored language models for every customer
Learns what your audience responds to and rebuilds the prediction model every time
5. HubSpot SEO
HubSpot Search Engine Optimization (SEO) is an integral tool for the Human Content team. It uses machine learning to determine how search engines understand and categorize your content. HubSpot SEO helps you improve your search engine rankings and outrank your competitors. Search engines reward websites that organize their content around core subjects, or topic clusters. HubSpot SEO helps you discover and rank for the topics that matter to your business and customers.
Helps you discover and rank topics that people are searching for
Provides suggestions for your topic clusters and related subjects
Integrates with all other HubSpot content tools to help you create a well-rounded content strategy
6. Evolv AI
When you’re limited to testing two variables against each other at a time, it can take months to get the results you’re looking for. Evolv AI lets you test all your ideas at once. It uses advanced algorithms to identify the top-performing concepts, combine them with each other, and repeat the process to achieve the best site experience.
Figures out which content provides the best performance
Lets you test multiple ideas in a single experiment instead of having to perform many individual tests over a long period
Lets you try all your ideas across multiple pages for full-funnel optimization
Offers visual and code editors
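Evolv AI's algorithms are proprietary, but the "combine the top-performing concepts" step can be illustrated with a naive sketch that scores each variant of each page element and then assembles the winners. This assumes the elements' effects are independent, which real optimization tools do not require; all data is made up:

```python
# Observed conversion rates per variant of each page element
# (hypothetical experiment results).
results = {
    "headline": {"A": 0.031, "B": 0.042},
    "cta":      {"red": 0.038, "green": 0.035},
    "layout":   {"one-col": 0.036, "two-col": 0.040},
}

# Combine the best-performing variant of each element into one
# candidate experience for the next round of testing.
best_combo = {element: max(variants, key=variants.get)
              for element, variants in results.items()}
print(best_combo)  # {'headline': 'B', 'cta': 'red', 'layout': 'two-col'}
```

In practice the combined experience is then re-tested, since variants can interact (a winning headline may not win next to a different layout), which is why the article describes the process as iterative.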
7. Acrolinx
Acrolinx is a content alignment platform that helps brands scale and improve the quality of their content. It’s geared toward enterprises (its major customers include big brands like Google, Adobe, and Amazon) to help them scale their writing efforts. Instead of spending time chasing down and fixing typos in multiple places throughout an article or blog post, you can use Acrolinx to do it all in one place. You start by setting your preferences for style, grammar, tone of voice, and company-specific word usage. Then, Acrolinx checks and scores your existing content to find what’s working and suggest areas for improvement. The platform provides real-time guidance and suggestions to make writing better and strengthen weak pages.
Reviews and scores existing content to ensure it meets your brand guidelines
Finds opportunities to improve your content and use automation to shorten your editorial process.
Integrates with more than 50 tools and platforms, including Google Docs, Microsoft Word, WordPress, and most web browsers.
8. MarketMuse
MarketMuse uses an algorithm to help marketers build content strategies. The tool shows you where to target keywords to rank in specific topic categories, and recommends keywords you should go after if you want to own particular topics. It also identifies gaps and opportunities for new content and prioritizes them by their probable impact on your rankings. The algorithm compares your content with thousands of articles related to the same topic to uncover what’s missing from your site.
The built-in editor shows how in-depth your topic is covered and what needs improvement
Finds gaps and opportunities for new content creation, prioritized by their probable impact and your chance of ranking
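The gap-finding step can be sketched as a set difference between the topics competing articles cover and the topics your page covers, prioritized by how many competitors cover each gap. MarketMuse's actual algorithm is far more involved; the topics here are made up:

```python
from collections import Counter

# Topics our page covers vs. topics covered by top-ranking
# competitor articles (hypothetical data).
our_topics = {"email marketing", "automation", "segmentation"}
competitor_topics = [
    {"email marketing", "automation", "deliverability"},
    {"email marketing", "a/b testing", "deliverability"},
]

# Count each topic we are missing, per competitor that covers it.
gap_counts = Counter()
for topics in competitor_topics:
    for topic in topics - our_topics:
        gap_counts[topic] += 1

# Prioritize gaps covered by the most competitors.
print(gap_counts.most_common())  # [('deliverability', 2), ('a/b testing', 1)]
```

Real tools extract these topic sets automatically from crawled pages and weight them by ranking position, but the underlying comparison is this kind of coverage difference.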
9. Copilot
Copilot is a suite of tools that help eCommerce businesses maintain real-time communication with customers around the clock at every stage of the funnel. Promote products, recover shopping carts and send updates or reminders directly through Messenger.
Integrate Facebook Messenger directly with your website, including chat history and recent interactions for a fluid customer service experience
Run drip messenger campaigns to keep customers engaged with your brand
Send abandoned carts, out-of-stock, restock, preorder, order status, and shipment notifications to contacts
Send branded images, promotional content, or coupon codes to those who opt in
Collect post-purchase feedback, reviews, and customer insight
Demonstrate social proof on your website with a widget, or push automatic Facebook posts sharing recent purchases
Display a promotional banner on your website to capture contacts instantly
10. Yotpo
Yotpo’s deep learning technology evaluates your customers’ product reviews to help you make better business decisions. It identifies key topics that customers mention related to your products—and their feelings toward them. The AI engine extracts relevant reviews from past buyers and presents them in smart displays to convert new shoppers. Yotpo also saves you time moderating reviews. The AI-powered moderation tool automatically assigns a score to each review and flags reviews with negative sentiment so you can focus on quality control instead of manually reviewing every post.
Makes it easy for shoppers to filter reviews and find the exact information they’re looking for
Analyzes customer feedback and sentiments to help you improve your products
Integrates with most leading eCommerce platforms, including BigCommerce, Magento, and Shopify.
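As a crude stand-in for Yotpo's deep learning moderation score, here is a minimal keyword-based sketch of the idea: assign each review a sentiment score and flag likely-negative ones for human attention. The word lists and reviews are made up:

```python
# Toy sentiment scorer: count positive vs. negative keywords.
# (A real moderation model would use a trained classifier.)
NEGATIVE = {"broken", "refund", "terrible", "late", "disappointed"}
POSITIVE = {"love", "great", "perfect", "fast", "recommend"}

def review_score(text: str) -> int:
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

reviews = [
    "Love it, fast shipping, would recommend",
    "Arrived broken and late, want a refund",
]

# Flag reviews with net-negative sentiment for moderation.
flagged = [r for r in reviews if review_score(r) < 0]
print(flagged)  # ['Arrived broken and late, want a refund']
```

The workflow is the point: rather than reading every review, a moderator only looks at the flagged subset.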
11. Albert AI
Albert is self-learning software that automates the creation of marketing campaigns for your brand. It analyzes vast amounts of data to run optimized campaigns autonomously: you feed in your own creative content and target markets, and Albert uses data from its database to determine the key characteristics of a serious buyer. It identifies potential customers that match those traits and runs trial campaigns on a small group of customers, refining the results itself, before launching at a larger scale.
Albert plugs into your existing marketing technology stack, so you still have access to your accounts, ads, search, social media, and more. Albert maps tracking and attribution to your source of truth so you can determine which channels are driving your business.
Breaks down large amounts of data to help you customize campaigns
Plugs into your marketing technology stack and can be used across diverse media outlets, including email, content, paid media, and mobile
There are many tools and companies out there that offer AI tools, but this is a small list of resources that we have found to be helpful. If you have any other suggestions, feel free to share them in the comments below this article. As marketing evolves at such a rapid pace, new marketing strategies will be invented that we haven't even dreamed of yet. But for now, this list should give you a good starting point on your way to implementing AI into your marketing mix.
Note: This article contains affiliate links, meaning we make a small commission if you buy any premium plan from our link.
Match ID: 147 Score: 6.43 source: www.crunchhype.com age: 139 days qualifiers: 3.57 google, 1.43 microsoft, 1.43 amazon
Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.
Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.
The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?
Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.
When you say you want a foundation model for computer vision, what do you mean by that?
Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.
What needs to happen for someone to build a foundation model for video?
Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.
Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.
It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.
Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.
“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI
I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.
I expect they’re both convinced now.
Ng: I think so, yes.
Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”
How do you define data-centric AI, and why do you consider it a movement?
Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.
The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.
You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?
Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.
When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?
Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.
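The small-data pattern Ng describes, a frozen pretrained feature extractor plus a small head trained on a handful of well-labeled examples, can be sketched in miniature. Everything below is a toy stand-in: the "backbone" is two hand-picked features rather than a pretrained network like RetinaNet, and the four "images" are tiny hypothetical intensity arrays:

```python
# Sketch of fine-tuning with small data: keep the feature
# extractor fixed and train only a tiny head on labeled examples.

def backbone(image):
    # Frozen "pretrained" feature extractor (stand-in):
    # max intensity and mean intensity of the image.
    return (max(image), sum(image) / len(image))

# A small, carefully chosen labeled set (1 = contains a bright defect).
data = [
    ([0.1, 0.9, 0.1, 0.1], 1),
    ([0.1, 0.1, 0.1, 0.1], 0),
    ([0.2, 0.1, 0.8, 0.1], 1),
    ([0.2, 0.2, 0.1, 0.2], 0),
]

# Train a perceptron head on the frozen features; the backbone
# weights are never updated.
w = [0.0, 0.0]
b = 0.0
for _ in range(100):
    for image, label in data:
        f = backbone(image)
        pred = 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
        err = label - pred
        w = [w[0] + 0.1 * err * f[0], w[1] + 0.1 * err * f[1]]
        b += 0.1 * err

preds = [1 if w[0] * backbone(x)[0] + w[1] * backbone(x)[1] + b > 0 else 0
         for x, _ in data]
print(preds)  # matches the labels: [1, 0, 1, 0]
```

The reason a few dozen examples can suffice is visible here: the backbone already produces features in which the classes are easy to separate, so the head has very few parameters left to learn.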
“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
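A minimal sketch of that kind of consistency tool: group examples that share a content fingerprint (real tools would use embeddings or model predictions rather than an exact key) and flag any group whose labels disagree, so an annotator's attention goes straight to the conflicts. All filenames and labels here are hypothetical:

```python
from collections import defaultdict

# Labeled examples: (filename, content fingerprint, label).
# Items sharing a fingerprint are near-duplicates and should agree.
labeled_data = [
    ("scratch_01.png", "fp_a", "defect"),
    ("scratch_02.png", "fp_a", "no_defect"),  # disagrees with scratch_01
    ("clean_01.png",   "fp_b", "no_defect"),
    ("clean_02.png",   "fp_b", "no_defect"),
]

# Group near-duplicates, then flag groups with conflicting labels.
groups = defaultdict(list)
for name, fingerprint, label in labeled_data:
    groups[fingerprint].append((name, label))

inconsistent = [items for items in groups.values()
                if len({label for _, label in items}) > 1]
print(inconsistent)
# [[('scratch_01.png', 'defect'), ('scratch_02.png', 'no_defect')]]
```

Relabeling only the flagged groups is the targeted improvement Ng describes, as opposed to re-reviewing all 10,000 images.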
Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?
Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.
One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.
When you talk about engineering the data, what do you mean exactly?
Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.
For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
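Ng's car-noise anecdote is an instance of slice-based error analysis: break the evaluation set down by a metadata tag and compare per-slice accuracy. A minimal sketch, with a made-up evaluation-log format:

```python
from collections import defaultdict

def accuracy_by_slice(examples):
    """Group evaluation examples by a metadata tag and compute
    per-slice accuracy. Each example is (tag, correct: bool),
    a hypothetical evaluation-log format."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for tag, is_correct in examples:
        totals[tag] += 1
        correct[tag] += is_correct
    return {tag: correct[tag] / totals[tag] for tag in totals}

# Toy log: the model does fine on quiet audio, poorly with car noise.
eval_log = [
    ("quiet", True), ("quiet", True), ("quiet", True), ("quiet", False),
    ("car_noise", False), ("car_noise", True), ("car_noise", False),
]
print(accuracy_by_slice(eval_log))
```

The worst-performing slice then tells you where collecting more data will actually pay off.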
What about using synthetic data, is that often a good solution?
Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
Do you mean that synthetic data would allow you to try the model on more data sets?
Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
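As a toy illustration of that targeted approach, the sketch below generates extra examples only for the class that error analysis flagged. The helper names are hypothetical and the jitter "generator" is a stand-in; real synthetic pit-mark images would come from a proper generative pipeline:

```python
import random

def jitter(img):
    """Toy stand-in for a synthetic-data generator: perturb pixels."""
    return [p + random.uniform(-0.05, 0.05) for p in img]

def targeted_augment(dataset, weak_class, n_new, augment):
    """Append n_new synthetic examples for only the class where
    error analysis showed poor performance (hypothetical helper)."""
    pool = [x for x, y in dataset if y == weak_class]
    new_examples = [(augment(random.choice(pool)), weak_class)
                    for _ in range(n_new)]
    return dataset + new_examples

random.seed(0)
# "Images" here are just short lists of pixel values.
data = [([0.1, 0.2], "scratch"), ([0.4, 0.5], "pit_mark")]
augmented = targeted_augment(data, "pit_mark", n_new=3, augment=jitter)
print(len(augmented))  # 2 original + 3 synthetic pit-mark examples
```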
“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?
Ng: When a customer approaches us, we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.
One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.
How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?
Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
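A drift flag of the sort Ng mentions can be as simple as comparing a recent batch of inputs against the training distribution. This is a deliberately minimal sketch (one feature, a mean-shift test); production monitoring tracks many more statistics:

```python
import statistics

def drift_flag(reference, recent, threshold=2.0):
    """Flag drift when the recent batch's mean feature value moves
    more than `threshold` reference standard deviations away.
    A deliberately simple stand-in for real drift monitoring."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    shift = abs(statistics.mean(recent) - ref_mean) / ref_std
    return shift > threshold

# Image-brightness values from training data vs. a new lighting setup.
baseline = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50]
same_conditions = [0.50, 0.49, 0.51]
new_lighting = [0.80, 0.82, 0.79]
print(drift_flag(baseline, same_conditions))  # False
print(drift_flag(baseline, new_lighting))     # True
```

When the flag trips, the customer relabels a fresh batch and retrains, which is exactly the 3 a.m. workflow Ng wants manufacturers to own themselves.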
In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?
So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.
Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.
Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?
Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.
Neuberger wins clearance to manage assets in China for Chinese residents Mon, 28 Nov 2022 12:39:44 -0500 Neuberger Berman said Monday it became the second global institution to receive final approval from the China Securities Regulatory Commission (CSRC) to launch a wholly owned, newly established mutual fund business in China. Neuberger Berman will now be allowed to manage local assets for local clients, which has not been allowed previously. BlackRock Inc. was the first to receive approval. Patrick Liu, CEO of Neuberger Berman Fund Management (China) (FMC), said the country's commitment to opening up to high-quality financial services "will bring significant opportunities for local investors." Michelle Wei will become chief investment officer - equities of the FMC.
The global business landscape is constantly evolving. Digital transformation—compounded by the challenges of globalization, supply-chain stability, demographic shifts, and climate change—is pressuring companies and government agencies to innovate and safely deploy sustainable technologies.
As digital transformation continues, the pervasive growth of technology increasingly intersects with industry, government, and societal interests. Companies and organizations need access to technologies that can enhance efficiencies, productivity, and competitive advantage.
Governments seek influence over emerging technologies to preserve economic interests, advance global trade, and protect their citizens. Consumers are demanding more transparency regarding organizational motives, practices, and processes.
For those and other reasons, new types of stakeholders are seeking a voice in the technology standardization process.
How organizations benefit from developing standards
The need is evidenced in the membership gains at the IEEE Standards Association. IEEE SA membership for organizations, also known as entity membership, has increased by more than 150 percent in the past six years. Academic institutions, government agencies, and other types of organizations now account for more than 30 percent of the member base.
Entity membership offers the ability to help shape technology development and ensure your organization’s interests are represented in the standards development process. Other benefits include balloting privileges, leadership eligibility, and networking opportunities.
IEEE SA welcomes different types of organizations because they bring varied perspectives and they voice concerns that need to be addressed during the standards development process. Engaging diverse viewpoints from companies of all sizes and types also helps to identify and address changing market needs.
From a geographic standpoint, IEEE SA welcomes participation from all regions of the world. Diverse perspectives and contributions to the development cycle enable innovation to be shared and realized by all stakeholders.
Programs on blockchain, IoT, and other emerging technology
An increasing number of new standards projects from emerging technology areas have created a more robust and diversified portfolio of work. The technologies include artificial intelligence and machine learning, blockchain and distributed ledger technologies, quantum computing, cloud computing, the Internet of Things, smart cities, smart factories and online gaming. There is also more participation from the health care, automotive, and financial services sectors.
IEEE SA has grown and evolved its programs to address market needs, but its purpose has not changed. The organization is focused on empowering innovators to raise the world’s standards for the benefit of humanity.
Those innovators might be individuals or organizations looking to make a difference in the world, but it can be accomplished only when we all work together.
A cybersecurity expert examines how the powerful game whatever system is put before them, leaving it to others to cover the cost.
Schneier, a professor at Harvard Kennedy School and author of such books as Data and Goliath and Click Here To Kill Everybody, regularly challenges his students to write down the first 100 digits of pi, a nearly impossible task—but not if they cheat, concerning which he admonishes, “Don’t get caught.” Not getting caught is the aim of the hackers who exploit the vulnerabilities of systems of all kinds. Consider right-wing venture capitalist Peter Thiel, who located a hack in the tax code: “Because he was one of the founders of PayPal, he was able to use a $2,000 investment to buy 1.7 million shares of the company at $0.001 per share, turning it into $5 billion—all forever tax free.” It was perfectly legal—and even if it weren’t, the wealthy usually go unpunished. The author, a fluid writer and tech communicator, reveals how the tax code lends itself to hacking, as when tech companies like Apple and Google avoid paying billions of dollars by transferring profits out of the U.S. to corporate-friendly nations such as Ireland, then offshoring the “disappeared” dollars to Bermuda, the Caymans, and other havens. Every system contains trap doors that can be breached to advantage. For example, Schneier cites “the Pudding Guy,” who hacked an airline miles program by buying low-cost pudding cups in a promotion that, for $3,150, netted him 1.2 million miles and “lifetime Gold frequent flier status.” Since it was all within the letter if not the spirit of the offer, “the company paid up.” The companies often do, because they’re gaming systems themselves. “Any rule can be hacked,” notes the author, be it a religious dietary restriction or a legislative procedure. With technology, “we can hack more, faster, better,” requiring diligent monitoring and a demand that everyone play by rules that have been hardened against tampering...
The machine-learning consortium MLCommons released the latest set of benchmark results last week, offering a glimpse at the capabilities of new chips and old as they tackled executing lightweight AI on the tiniest systems and training neural networks at both server and supercomputer scales. The benchmark tests saw the debut of new chips from Intel and Nvidia as well as speed boosts from software improvements and predictions that new software will play a role in speeding the new chips in the years after their debut.
Training AI has been a problem that’s driven billions of dollars in investment, and it seems to be paying off. “A few years ago we were talking about training these networks in days or weeks, now we’re talking about minutes,” says Dave Salvator, director of product marketing at Nvidia.
There are eight benchmarks in the MLPerf training suite, but here I’m showing results from just two—image classification and natural-language processing—because although they don’t give a complete picture, they’re illustrative of what’s happening. Not every company puts up benchmark results every time; in the past, systems from Baidu, Google, Graphcore, and Qualcomm have made marks, but none of these were on the most recent list. And there are companies whose goal is to train the very biggest neural networks, such as Cerebras and SambaNova, that have never participated.
Another note about the results I’m showing—they are incomplete. To keep the eye glazing to a minimum, I’ve listed only the fastest system of each configuration. There were four categories in the main “closed” contest: cloud (self-evident), on premises (systems you could buy and install in-house right now), preview (systems you can buy soon but not now), and R&D (interesting but odd, so I excluded them). I then listed the fastest training result for each category for each configuration—the number of accelerators in a computer. If you want to see the complete list, it’s at the MLCommons website.
A casual glance shows that machine-learning training is still very much Nvidia’s house. It can bring a supercomputer-scale number of GPUs to the party to smash through training problems in mere seconds. Its A100 GPUs have dominated the MLPerf list for several iterations now, and the A100 powers Microsoft’s Azure cloud AI offerings as well as systems large and small from partners including Dell, HPE, and Fujitsu. But even among the A100 gang there’s real competition, particularly between Dell and HPE.
But perhaps more important was Azure’s standing. On image classification, the cloud systems were essentially a match for the best A100 on-premises computers. The results strengthen Microsoft’s case that renting resources in the cloud is as good as buying your own. And that case might be even stronger soon. This week Nvidia and Microsoft announced a multiyear collaboration that would see the inclusion of Nvidia’s upcoming GPU, the H100, in the Azure cloud.
This was the first peek at training abilities for the H100. And Nvidia’s Dave Salvator emphasized how much progress happens—largely due to software improvements—in the years after a new chip comes out. On a per-chip basis, the A100 delivers 2.5 times the average performance today versus its first run at the MLPerf benchmarks in 2020. Compared to the A100’s debut scores, the H100 delivered 6.7 times the speed. But compared to the A100 with today’s software, the gain is only 2.6-fold.
In a way, H100 seems a bit overpowered for the MLPerf benchmarks, tearing through most of them in minutes using a fraction of the A100 hardware needed to match it. And in truth, it is meant for bigger things. “H100 is our solution for the most advanced models where we get into the millions, even billions of hyperparameters,” says Salvator.
Salvator says a lot of the gain is from the H100’s “transformer engine.” Essentially, it’s the intelligent use of low-precision—efficient but less accurate—computations whenever possible. The scheme is particularly designed for neural networks called transformers, of which the natural-language-processing benchmark BERT is an example. Transformers are in the works for many other machine-learning tasks. “Transformer-based networks have been literally transformative to AI,” says Salvator. “It’s a horrible pun.”
Memory is a bottleneck for all sorts of AI, but it’s particularly limiting in BERT and other transformer models. Such neural networks rely on a quality called “attention.” You can think of it as how many words a language processor is aware of at once. It doesn’t scale up well, largely because it leads to a huge increase in writing to system memory. Earlier this year Hazy Research (the name for Chris Re’s lab at Stanford) deployed an algorithm to an Azure cloud system that shaved 10 percent of the training time off Microsoft’s best effort. For this round, Azure and Hazy Research worked together to demonstrate the algorithm—called Flash Attention.
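To make the memory point concrete, here is an illustrative NumPy sketch. Naive attention materializes a full n-by-n score matrix; chunking over queries computes the exact same output while holding only a chunk-by-n slice at a time. (This captures only the spirit of Flash Attention, which additionally tiles over keys and fuses the softmax to avoid writes to slow memory.)

```python
import numpy as np

def naive_attention(Q, K, V):
    """Standard attention: materializes the full n x n score matrix."""
    scores = Q @ K.T / np.sqrt(Q.shape[1])            # (n, n) in memory
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

def chunked_attention(Q, K, V, chunk=32):
    """Identical output, but only a (chunk x n) score slice is ever
    live at once; softmax is per query row, so rows are independent."""
    return np.vstack([naive_attention(Q[i:i + chunk], K, V)
                      for i in range(0, Q.shape[0], chunk)])

rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 128, 16))
print(np.allclose(naive_attention(Q, K, V), chunked_attention(Q, K, V)))
```

With sequence length n, peak score memory drops from O(n²) to O(chunk·n), which is why attention-specific algorithms pay off so quickly as context grows.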
Both the image-classification and natural-language-processing tables show Intel’s competitive position. The company showed results for the Habana Gaudi2, its second-generation AI accelerator, and the Sapphire Rapids Xeon CPU, which will be commercially available in the coming months. For the latter, the company was out to prove that you can do a lot of machine-learning training without a GPU.
A setup with 32 CPUs landed well behind a Microsoft Azure cloud-based system with only four GPUs on object recognition, but it still finished in less than an hour and a half, and for natural-language processing, it nearly matched that Azure system. In fact, none of the training took longer than 90 minutes, even on much more modest CPU-only computers.
“This is for customers for whom training is part of the workload, but it’s not the workload,” says Jordan Plawner, an Intel senior director and AI product manager. Intel is reasoning that if a customer is retraining only once a week, whether the work takes 30 minutes or 5 minutes is of too little importance for them to spend on a GPU accelerator they don’t need for the rest of the week.
Habana Gaudi2 is a different story. As the company’s dedicated machine-learning accelerator, the 7-nanometer chip goes up against Nvidia’s A100 (another 7-nm chip) and soon will face the 5-nm H100. In that light, it performed well on certain tests. On image classification, an eight-chip system landed only a couple of minutes behind an eight-chip H100. But the gap was much wider with the H100 at the natural-language-processing task, though it still narrowly bested an equal-size and Hazy-Research-enhanced A100 system.
“We’re not done with Gaudi 2,” says Habana’s Eitan Medina. Like others, Habana is hoping to speed learning by strategically using low-precision computations on certain layers of neural networks. The chip has 8-bit floating-point capabilities, but so far the smallest precision the company has engaged on the chip for MLPerf training purposes is bfloat16.
MLCommons released results for training high-performance computers—supercomputers and other big systems—at the same time as those for training servers. The HPC benchmarks are not as established and have fewer participants, but they still give a snapshot of how machine learning is done in the supercomputing space and what the goals are. There are three benchmarks: CosmoFlow estimates physical quantities from cosmological image data; DeepCAM spots hurricanes and atmospheric rivers in climate simulation data; and OpenCatalyst predicts the energy levels of molecular configurations.
There are two ways to measure systems on these benchmarks. One is to run a number of instances of the same neural network on the supercomputer, and the other is to just throw a bunch of resources at a single instance of the problem and see how long it takes. The table below is the latter and just for CosmoFlow, because it’s much simpler to read. (Again, feel free to view the whole schemozzle at MLCommons.)
The CosmoFlow results show four supercomputers powered by as many different types of CPU architectures and two types of GPU. Three of the four were accelerated by Nvidia GPUs, but Fugaku, the second most powerful computer in the world, used only its own custom-built processor, the Fujitsu A64FX.
The MLPerf HPC benchmarks came out only the week before Supercomputing 2022, in Dallas, one of the two conferences at which new Top500 rankings of supercomputers are announced.
A separate benchmark for supercomputing AI has also been developed. Instead of training particular neural networks, it solves “a system of linear equations using novel, mixed-precision algorithms that exploit modern hardware.” Although results from the two benchmarks don’t line up, there is overlap between the HPL-MxP list and the CosmoFlow results, including Nvidia’s Selene, Riken’s Fugaku, and Germany’s JUWELS.
Tiny ML systems
The latest addition to the MLPerf effort is a suite of benchmarks designed to test the speed and energy efficiency of microcontrollers and other small chips that execute neural networks for tasks like spotting keywords and other low-power, always-on jobs. MLPerf Tiny, as it’s called, is too new for real trends to have emerged in the data. But the results released so far show a couple of standouts. The table here shows the fastest “visual wakewords” results for each type of processor, and shows that Syntiant and GreenWaves Technologies have an edge over the competition.
Elon Musk, step aside. You may be the richest rich man in the space business, but you’re not first. Musk’s SpaceX corporation is a powerful force, with its weekly launches and visions of colonizing Mars. But if you want a broader view of how wealthy entrepreneurs have shaped space exploration, you might want to look at George Ellery Hale, James Lick, William McDonald or—remember this name—John D. Hooker.
All this comes up now because SpaceX, joining forces with the billionaire Jared Isaacman, has made what sounds at first like a novel proposal to NASA: It would like to see if one of the company’s Dragon spacecraft can be sent to service the fabled, invaluable (and aging) Hubble Space Telescope, last repaired in 2009.
Private companies going to the rescue of one of NASA’s crown jewels? NASA’s mantra in recent years has been to let private enterprise handle the day-to-day of space operations—communications satellites, getting astronauts to the space station, and so forth—while pure science, the stuff that makes history but not necessarily money, remains the province of government. Might that model change?
“We’re working on crazy ideas all the time,” said Thomas Zurbuchen, NASA’s space science chief. “Frankly, that’s what we’re supposed to do.”
It’s only a six-month feasibility study for now; no money will change hands between business and NASA. But Isaacman, who made his fortune in payment-management software before turning to space, suggested that if a Hubble mission happens, it may lead to other things. “Alongside NASA, exploration is one of many objectives for the commercial space industry,” he said on a media teleconference. “And probably one of the greatest exploration assets of all time is the Hubble Space Telescope.”
So it’s possible that at some point in the future, there may be a SpaceX Dragon, perhaps with Isaacman as a crew member, setting out to grapple the Hubble, boost it into a higher orbit, maybe even replace some worn-out components to lengthen its life.
Aerospace companies say privately mounted repair sounds like a good idea. So good that they’ve proposed it already.
The Chandra X-ray telescope, as photographed by space-shuttle astronauts after they deployed it in July 1999. It is attached to a booster that moved it into an orbit 10,000 by 100,000 kilometers from Earth.NASA
Northrop Grumman, one of the United States’ largest aerospace contractors, has quietly suggested to NASA that it might service one of the Hubble’s sister telescopes, the Chandra X-ray Observatory. Chandra was launched into Earth orbit by the space shuttle Columbia in 1999 (Hubble was launched from the shuttle Discovery in 1990), and the two often complement each other, observing the same celestial phenomena at different wavelengths.
As in the case of the SpaceX/Hubble proposal, Northrop Grumman’s Chandra study is at an early stage. But there are a few major differences. For one, Chandra was assembled by TRW, a company that has since been bought by Northrop Grumman. And another company subsidiary, SpaceLogistics, has been sending what it calls Mission Extension Vehicles (MEVs) to service aging Intelsat communications satellites since 2020. Two of these robotic craft have launched so far. The MEVs act like space tugs, docking with their target satellites to provide them with attitude control and propulsion if their own systems are failing or running out of fuel. SpaceLogistics says it is developing a next-generation rescue craft, which it calls a Mission Robotic Vehicle, equipped with an articulated arm to add, relocate, or possibly repair components on orbit.
“We want to see if we can apply this to space-science missions,” says Jon Arenberg, Northrop Grumman’s chief mission architect for science and robotic exploration, who worked on Chandra and, later, the James Webb Space Telescope. He says a major issue for servicing is the exacting specifications needed for NASA’s major observatories; Chandra, for example, records the extremely short wavelengths of X-ray radiation (0.01–10 nanometers).
“We need to preserve the scientific integrity of the spacecraft,” he says. “That’s an absolute.”
But so far, the company says, a mission seems possible. NASA managers have listened receptively. And Northrop Grumman says a servicing mission could be flown for a fraction of the cost of a new telescope.
New telescopes need not be government projects. In fact, NASA’s chief economist, Alexander MacDonald, argues that almost all of America’s greatest observatories were privately funded until Cold War politics made government the major player in space exploration. That’s why this story began with names from the 19th and 20th centuries—Hale, Lick, and McDonald—to which we should add Charles Yerkes and, more recently, William Keck. These were arguably the Elon Musks of their times—entrepreneurs who made millions in oil, iron, or real estate before funding the United States’ largest telescopes. (Hale’s father manufactured elevators—highly profitable in the rebuilding after the Great Chicago Fire of 1871.) The most ambitious observatories, MacDonald calculated for his book The Long Space Age, were about as expensive back then as some of NASA’s modern planetary probes. None of them had very much to do with government.
To be sure, government will remain a major player in space for a long time. “NASA pays the cost, predominantly, of the development of new commercial crew vehicles, SpaceX’s Dragon being one,” MacDonald says. “And now that those capabilities exist, private individuals can also pay to utilize those capabilities.” Isaacman doesn’t have to build a spacecraft; he can hire one that SpaceX originally built for NASA.
“I think that creates a much more diverse and potentially interesting space-exploration future than we have been considering for some time,” MacDonald says.
So put these pieces together: Private enterprise has been a driver of space science since the 1800s. Private companies are already conducting on-orbit satellite rescues. NASA hasn’t said no to the idea of private missions to service its orbiting observatories.
And why does John D. Hooker’s name matter? In 1906, he agreed to put up US $45,000 (about $1.4 million today) to make the mirror for a 100-inch reflecting telescope at Mount Wilson, Calif. One astronomer made the Hooker Telescope famous by using it to determine that the universe, full of galaxies, was expanding.
The astronomer’s name was Edwin Hubble. We’ve come full circle.
In 2014, Ukrainian soldiers fighting in Crimea knew that the sight of Russian drones would soon be followed by a heavy barrage of Russian artillery. During that war, the Russian military integrated drones into tactical missions, using them to hunt for Ukrainian forces, whom they then pounded with artillery and cannon fire. Russian drones weren’t as advanced as those of their Western counterparts, but the Russian military’s integration of drones into its battlefield tactics was second to none.
Eight years later, the Russians are again invading Ukraine. And since the earlier incursion, the Russian military has spent approximately US $9 billion to domestically produce an armada of some 500 drones (a.k.a. unmanned aerial vehicles, or UAVs). But, astonishingly, three weeks into this invasion, the Russians have not had anywhere near their previous level of success with their drones. There are even signs that in the drone war, the Ukrainians have an edge over the Russians.
How could the drone capabilities of these two militaries have experienced such differing fortunes over the same period? The answer lies in a combination of trade embargoes, tech development, and the rising importance of countermeasures.
Since 2014’s invasion of Crimea, Russia’s drone-development efforts have lagged—during a time of dynamic evolution and development across the UAV industry.
First, some background. Military drones come in a wide variety of sizes, purposes, and capabilities, but they can be grouped into a few categories. On one end of the spectrum are relatively tiny flying bombs, small enough to be carried in a rucksack. On the other end are high-altitude drones, with wingspans up to 25 meters and capable of staying aloft for 30 or 40 hours, of being operated from consoles thousands of kilometers from the battlefield, and of firing air-to-surface missiles with deadly precision. In between are a range of intermediate-size drones used primarily for surveillance and reconnaissance.
Russia’s fleet of drones includes models in each of these categories. However, sanctions imposed after the 2014 invasion of Crimea blocked the Russian military from procuring some key technologies necessary to stay on the cutting edge of drone development, particularly in optics, lightweight composites, and electronics. With relatively limited capabilities of its own in these areas, Russia’s drone development efforts became somewhat sluggish during a time of dynamic evolution and development elsewhere.
Current stalwarts in the Russian arsenal include the Zala Kyb, which is a “loitering munition” that can dive into a target and explode. The most common Russian drones are midsize ones used for surveillance and reconnaissance. These include the Eleron-3SV and the Orlan-10 drones, both of which have been used extensively in Syria and Ukraine. In fact, just last week, an Orlan-10 operator was awarded a military medal for locating a site from which Ukrainian soldiers were ambushing Russian tanks, and also a Ukrainian basing area outside Kyiv containing ten artillery pieces, which were subsequently destroyed. Russia’s only large, missile-firing drone is the Kronshtadt Orion, which is similar to the American MQ-1 Predator and can be used for precision strikes as well as reconnaissance. An Orion was credited with an air strike on a command center in Ukraine in early March 2022.
Meanwhile, since the 2014 Crimea war, when they had no drones at all, the Ukrainians have methodically assembled a modest but highly capable set of drones. The backbone of the fleet, with some 300 units fielded, consists of the A1-SM Fury and the Leleka-100 reconnaissance drones, both designed and manufactured in Ukraine. The A1-SM Fury entered service in April 2020, and the Leleka-100 in May 2021.
The heavy hitter for Ukraine in this war, though, is the Bayraktar TB2 drone, a combat aerial flyer with a wingspan of 12 meters and an armament of four laser-guided bombs. As of the beginning of March, and after losing two TB2s to Russian-backed separatist forces in Lugansk, Ukraine had a complement of 30 of the drones, which were designed and developed in Turkey. These drones are specifically aimed at destroying tanks and as of 24 March had been credited with destroying 26 vehicles, 10 surface-to-air missile systems, and 3 command posts. Various reports have put the cost of a TB2 at anywhere from $1 million to $10 million. It’s much cheaper than the tens of millions fetched for better-known combat drones, such as the MQ-9 Reaper, the backbone of the U.S. Air Force’s fleet of combat drones.
The Ukrainian arsenal also includes the Tu-141 reconnaissance drones, which are large, high-altitude Soviet-era drones that have had little success in the war. At the small end of the Ukraine drone complement are 100 Switchblade drones, which were donated by the United States as part of the $800 million weapons package announced on 16 March. The Switchblades are loitering munitions similar in size and functionality to the Russian Zala Kyb.
The upshot is that on offense, the Ukrainian and Russian militaries are closely matched in the drone war. The difference is on defense: Ukraine has the advantage when it comes to counter-drone technology. A decade ago, counter-drone technology mostly meant using radar to detect drones and surface-to-air missiles to shoot them down. That approach quickly proved far too costly and ineffective. Drone technology advanced at a brisk pace over the past decade, so counter-drone technology had to move rapidly to keep up. In Russia, it didn't. Here, again, the Russian military was hampered by technology embargoes and a domestic industrial base that has been somewhat stagnant and lacking in critical capabilities. By contrast, the combined industrial base of the countries supporting Ukraine in this war is massive and has invested heavily in counter-drone technology.
Russia has deployed electronic warfare systems to counter enemy drones and has likely been using the Borisoglebsk 2 MT-LB and R-330Zh Zhitel systems, which use a combination of jamming and spoofing. These systems fill the air with radio-frequency energy, raising the noise floor to such a level that the drone cannot distinguish control signals from the remote pilot. Another standard counterdrone technique is sending false signals to the drone, the most common being fake ("spoofed") GPS signals, which disorient the flyer. Jamming and spoofing systems are easy to target because they emit radio-frequency waves at fairly high intensities. In fact, open-source images show that Ukrainian forces have already destroyed three of these Russian counterdrone systems.
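The mechanism behind barrage jamming can be sketched with a simple link-budget calculation: the jammer's power adds to the receiver's noise floor until the signal-to-noise ratio falls below what the receiver needs to recover control commands. All power levels and the threshold below are illustrative assumptions, not measurements of any real system.

```python
import math

def dbm_to_watts(dbm: float) -> float:
    """Convert a power level in dBm to watts."""
    return 10 ** ((dbm - 30) / 10)

def watts_to_dbm(w: float) -> float:
    """Convert a power level in watts to dBm."""
    return 10 * math.log10(w) + 30

def link_snr_db(signal_dbm: float, noise_floor_dbm: float, jammer_dbm: float) -> float:
    """SNR of the control link when the jammer's power adds to the noise floor."""
    effective_noise = dbm_to_watts(noise_floor_dbm) + dbm_to_watts(jammer_dbm)
    return signal_dbm - watts_to_dbm(effective_noise)

DEMOD_THRESHOLD_DB = 10  # assumed minimum SNR to recover control commands

quiet = link_snr_db(-70, -100, jammer_dbm=-150)   # jammer far away: ~30 dB, link holds
jammed = link_snr_db(-70, -100, jammer_dbm=-60)   # nearby jammer: ~-10 dB, link lost
assert quiet > DEMOD_THRESHOLD_DB and jammed < DEMOD_THRESHOLD_DB
```

The point of the sketch: the jammer does not need to overpower the drone itself, only to raise the effective noise at the receiver by a few tens of decibels.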
Additionally, some of the newer drones being used by the Ukrainians include features to withstand such electronic attacks. For example, when one of these drones detects a jamming signal, it switches to frequencies that are not being jammed; if it is still unable to reestablish a connection, the drone operates autonomously with a series of preset maneuvers until a connection can be reestablished.
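The fallback behavior described above can be sketched as a tiny decision routine. The channel names and the exact logic here are illustrative assumptions, not details of any fielded system.

```python
# Sketch of jam-resilient control logic: try each unjammed channel in
# order; if every channel is jammed, fall back to autonomous preset
# maneuvers until a connection can be reestablished.

def choose_action(jammed_channels: set, channels: list) -> str:
    """Pick the first clear channel, or fall back to autonomous flight."""
    for ch in channels:
        if ch not in jammed_channels:
            return f"switch:{ch}"
    return "autonomous:preset_maneuvers"

assert choose_action({"2.4GHz"}, ["2.4GHz", "5.8GHz"]) == "switch:5.8GHz"
assert choose_action({"2.4GHz", "5.8GHz"}, ["2.4GHz", "5.8GHz"]) == "autonomous:preset_maneuvers"
```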
Meanwhile, Ukraine has access to the wide array of NATO counterdrone technologies. The exact systems that have been provided to the Ukrainians is not publicly known, but it’s possible to make an educated guess from among the many systems available. One of the more powerful ones, from Lockheed Martin, repurposes a solid-state, phased-array radar system developed to spot incoming munitions, to detect and identify a drone. The system then tracks the drone and uses high-energy lasers to shoot it down. Raytheon’s counterdrone portfolio includes similar capabilities along with drone-killing drones and systems capable of beaming high-power microwaves that disrupt the drone’s electronics.
While most major Western defense contractors have some sort of counterdrone system, there has also been significant innovation in the commercial sector, given the mass proliferation of commercial drones. While many of these technologies are aimed at smaller drones, some of the technologies, including acoustic sensing and radio-frequency localization, are effective against larger drones as well. Also, a dozen small companies have developed jamming and spoofing systems specifically aimed at countering modern drones.
Although we don’t know specifically which counterdrone systems are being deployed by the Ukrainians, the images of the destroyed drones tell a compelling story. In the drone war, many of the flyers on both sides have been captured or destroyed on the ground, but more than half were disabled while in flight. The destroyed Ukrainian drones often show tremendous damage, including burn marks and other signs that they were shot down by a Russian surface-to-air missile. A logical conclusion is that the Russians’ electronic counterdrone systems were not effective. Meanwhile, the downed Russian drones are typically much more intact, showing relatively minor damage consistent with a precision strike from a laser or electromagnetic pulse. This is exactly what you would expect if the drones had been dispatched by one of the newer Western counterdrone systems.
In the first three weeks of this conflict, Russian drones have failed to achieve the level of success that they did in 2014. The Ukrainians, on the other hand, have logged multiple victories with drone and counterdrone forces assembled in just 8 years. The Russian drones, primarily domestically sourced, have been foiled repeatedly by NATO counterdrone technology. Meanwhile, the Ukrainian drones, such as the TB2s procured from NATO-member Turkey, have had multiple successes against the Russian counterdrone systems.
ProWritingAid vs. Grammarly: When it comes to English grammar checkers, there are two big players that everyone knows: Grammarly and ProWritingAid. If you're wondering which one to choose, this detailed comparison will help you pick the best one for you. Let's start.
What is Grammarly?
Grammarly is a tool that checks for grammatical errors, spelling, and punctuation. It gives you comprehensive feedback on your writing. You can use it to proofread and edit articles, blog posts, emails, and more.
Grammarly detects all types of mistakes, including sentence-structure issues and misused words. It also gives you suggestions on style, punctuation, spelling, and grammar, all in real time. The free version covers the basics, like identifying grammar and spelling mistakes, whereas the Premium version offers a lot more functionality: it detects plagiarism in your content, suggests better word choices, and helps improve fluency.
Features of Grammarly
Spelling and Word Suggestions: Grammarly detects basic to advanced grammatical errors, explains why something is an error, and suggests how you can fix it.
Create a Personal Dictionary: The Grammarly app allows you to add words to your personal dictionary so that the same mistake isn't highlighted every time you run Grammarly.
Different English Styles: Checks spelling against American, British, Canadian, and Australian English.
Plagiarism: This feature helps you detect if a text has been plagiarized by comparing it with over eight billion web pages.
Wordiness: This tool will help you check your writing for long and hard-to-read sentences. It also shows you how to shorten sentences so that they are more concise.
Passive Voice: The program also notifies users when passive voice is used too frequently in a document.
Punctuations: This feature flags all incorrect and missing punctuation.
Repetition: The tool provides recommendations for replacing the repeated word.
Prepositions: Grammarly identifies misplaced and confused prepositions.
Plugins: It offers Microsoft Word, Microsoft Outlook, and Google Chrome plugins.
What is ProWritingAid?
ProWritingAid is a style and grammar checker for content creators and writers. It helps you fix word choice, punctuation errors, and common grammar mistakes, providing detailed reports to help you improve your writing.
ProWritingAid can be used as an add-on to WordPress, Gmail, and Google Docs. The software also offers helpful articles, videos, quizzes, and explanations to help improve your writing.
Features of ProWritingAid
Here are some key features of ProWritingAid:
Grammar checker and spell checker: This tool helps you to find all grammatical and spelling errors.
Find repeated words: The tool also allows you to search for repeated words and phrases in your content.
Context-sensitive style suggestions: The tool recognizes the style you intend and suggests whether your writing flows well.
Check the readability of your content: ProWritingAid helps you identify the strengths and weaknesses of your article by pointing out difficult sentences and paragraphs.
Sentence Length: It also indicates the length of your sentences.
Check grammatical errors: It also checks your work for grammatical errors and typos.
Overused words: As a writer, you might find yourself using the same word repeatedly. ProWritingAid's overused words checker helps you avoid this lazy writing mistake.
Consistency: Check your work for inconsistent usage of open and closed quotation marks.
Echoes: Check your writing for closely repeated words and phrases.
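As a toy illustration of the idea behind an overused-words or echo check (this is not ProWritingAid's actual algorithm), a few lines of Python can flag words that recur too often in a passage:

```python
# Count normalized words and report any that appear at least `threshold`
# times; real tools weight by word rarity and proximity, which this skips.
from collections import Counter

def overused_words(text: str, threshold: int = 3) -> dict:
    """Return words appearing at least `threshold` times, ignoring case."""
    words = [w.strip(".,!?;:\"'").lower() for w in text.split()]
    counts = Counter(w for w in words if w)
    return {w: n for w, n in counts.items() if n >= threshold}

sample = "Nice work. The nice team did a nice, nice job."
assert overused_words(sample) == {"nice": 4}
```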
Difference between Grammarly and Pro-Writing Aid
Grammarly and ProWritingAid are well-known grammar-checking software. However, if you're like most people who can't decide which to use, here are some different points that may be helpful in your decision.
Grammarly vs ProWritingAid
Grammarly is a writing-enhancement tool that offers suggestions for grammar, vocabulary, and syntax, whereas ProWritingAid offers world-class grammar and style checking, as well as advanced reports to help you strengthen your writing.
Grammarly provides Android and iOS apps, whereas ProWritingAid doesn't have a mobile app.
Grammarly offers important suggestions about mistakes you've made, whereas ProWritingAid shows more suggestions than Grammarly, though not all of its recommendations are accurate.
Grammarly has a friendlier UI/UX, whereas ProWritingAid's interface is not as polished.
Grammarly is an accurate grammar checker for non-fiction writing, whereas ProWritingAid is an accurate grammar checker for fiction writers.
Grammarly finds grammar and punctuation mistakes, whereas ProWritingAid identifies run-on sentences and fragments.
Grammarly provides 24/7 support via tickets and email. ProWritingAid's support team is available via email, though the response time is approximately 48 hours.
Grammarly offers many features in its free plan, whereas ProWritingAid offers only basic features in its free plan.
Grammarly does not offer much feedback on big-picture writing; ProWritingAid offers complete feedback on big-picture writing.
Grammarly is the better option for accuracy, whereas ProWritingAid is better at handling fragmented sentences and dialogue, which makes it quite useful for fiction writers.
ProWritingAid VS Grammarly: Pricing Difference
ProWritingAid comes in three pricing tiers: $20 for a monthly plan, $79 for a full year, and $339 for a lifetime license.
Grammarly offers a Premium subscription at $30/month billed monthly, $20/month billed quarterly, and $12/month billed annually.
Grammarly's Business plan costs $12.50 per member per month.
ProWritingAid vs Grammarly – Pros and Cons
Grammarly Pros:
It allows you to fix common mistakes like grammar and spelling.
Offers most features in the free plan
Allows you to edit a document without affecting the formatting.
Active and passive voice checker
Plagiarism checker (paid version)
Proofread your writing and correct all punctuation, grammar, and spelling errors.
Allows you to make changes to a document without altering its formatting.
Helps users improve vocabulary
Browser extensions and MS word add-ons
Available on all major devices and platforms
Grammarly will also offer suggestions to improve your style.
Enhance the readability of your sentence
Free mobile apps
Offers free version
Grammarly Cons:
Supports only English
Customer support only via email
Limited to 150,000 words
Subscription plans can be a bit pricey
Plagiarism checker is only available in a premium plan
Doesn’t offer a free trial
No refund policy
The free version is ideal for basic spelling and grammatical mistakes, but it does not correct advanced writing issues.
Some features are not available for Mac.
ProWritingAid Pros:
It offers more than 20 different reports to help you improve your writing.
Less expensive than other grammar checkers.
This tool helps you strengthen your writing style as it offers big-picture feedback.
ProWritingAid has a lifetime plan with no further payments required.
Compatible with Google Docs!
Prowritingaid works on both Windows and Mac.
They offer more integrations than most tools.
ProWritingAid Cons:
Editing can be a little more time-consuming when you add larger passages of text.
ProWritingAid currently offers no mobile app for Android or iOS devices.
Plagiarism checker is only available in premium plans.
Not all recommendations are accurate
Summarizing ProWritingAid vs. Grammarly: My Recommendation
As both writing assistants are great in their own way, you need to choose the one that suits you best.
For example, go for Grammarly if you are a non-fiction writer.
Go for ProWritingAid if you are a fiction writer.
ProWritingAid is better at catching errors found in long-form content. However, Grammarly is more suited to short blog posts and other similar tasks.
ProWritingAid helps you clean up your writing by checking for style, structure, and content while Grammarly focuses on grammar and punctuation.
Grammarly has a friendlier UI/UX, whereas ProWritingAid offers more complete feedback on big-picture writing.
Both ProWritingAid and Grammarly are excellent writing tools, without a doubt, but in my experience Grammarly is the winner here because of how it helps you review and edit your content. Grammarly highlights all the mistakes in your writing within seconds of your pasting the content into its editor or using its native integrations in other text editors.
Not only does it identify small grammatical and spelling errors, it also tells you when you've overlooked punctuation where it's needed. And beyond its plagiarism-checking capabilities, Grammarly helps you proofread your content. Even better, the software offers a free plan that gives you access to some of its features.
SEMrush and Ahrefs are among the most popular tools in the SEO industry. Both companies have been in business for years and have thousands of customers per month.
If you're a professional SEO or trying to do digital marketing on your own, at some point you'll likely consider using a tool to help with your efforts. Ahrefs and SEMrush are two names that will likely appear on your shortlist.
In this guide, I'm going to help you learn more about these SEO tools and how to choose the one that's best for your purposes.
What is SEMrush?
SEMrush is a popular SEO tool with a wide range of features; it's the leading competitor-research service for online marketers. SEMrush's Keyword Magic tool offers over 20 billion Google-approved keywords, which are constantly updated, making it the largest keyword database.
The program began in 2007 as SeoQuake, a small Firefox extension.
Most accurate keyword data: Accurate keyword search-volume data is crucial for SEO and PPC campaigns, allowing you to identify which keywords are most likely to bring in sales from ad clicks. SEMrush constantly updates its databases and provides the most accurate data.
Largest keyword database: SEMrush's Keyword Magic Tool now features 20 billion keywords, providing marketers and SEO professionals with the largest database of keywords.
All SEMrush users receive daily ranking data, mobile volume information, and the option to buy additional keywords by default, with no additional payment or add-ons needed.
Most accurate position tracking tool: This tool provides all subscribers with basic tracking capabilities, making it suitable for SEO professionals. Plus, the Position Tracking tool provides local-level data to everyone who uses the tool.
SEO Data Management: SEMrush makes managing your online data easy by allowing you to create visually appealing custom PDF reports, including Branded and White Label reports, report scheduling, and integration with GA, GMB, and GSC.
Toxic link monitoring and penalty recovery: With SEMrush, you can make a detailed analysis of toxic backlinks, toxic scores, toxic markers, and outreach to those sites.
Content Optimization and Creation Tools: SEMrush offers content optimization and creation tools that let you create SEO-friendly content. Features include the SEO Writing Assistant, On Page SEO Checker, SEO Content Template, Content Audit, Post Tracking, and Brand Monitoring.
What is Ahrefs?
Ahrefs is a leading SEO platform that offers a set of tools to grow your search traffic, research your competitors, and monitor your niche. The company was founded in 2010 and has become a popular choice among SEO tools. Ahrefs has a keyword index of over 10.3 billion keywords and offers accurate, extensive backlink data updated every 15 to 30 minutes, drawing on the world's most extensive backlink index.
Backlink alerts data and new keywords: Get an alert when your site is linked to or discussed in blogs, forums, comments, or when new keywords are added to a blog posting about you.
Intuitive interface: The intuitive design of the widget helps you see the overall health of your website and search engine ranking at a glance.
Site Explorer: The Site Explorer will give you an in-depth look at your site's search traffic.
Reports with charts and graphs
A question explorer that provides well-crafted topic suggestions
Direct Comparisons: Ahrefs vs SEMrush
Now that you know a little more about each tool, let's take a look at how they compare. I'll analyze each tool to see how they differ in interfaces, keyword research resources, rank tracking, and competitor analysis.
Ahrefs and SEMrush both offer comprehensive information and quick metrics regarding your website's SEO performance. However, Ahrefs takes a bit more of a hands-on approach to getting your account fully set up, whereas SEMrush's simpler dashboard can give you access to the data you need quickly.
In this section, we provide a brief overview of the elements found on each dashboard and highlight the ease with which you can complete tasks.
The Ahrefs dashboard is less cluttered than that of SEMrush, and its primary menu is at the very top of the page, with a search bar designed only for entering URLs.
Additional features of the Ahrefs platform include:
You can see analytics from the dashboard, from search engine rankings to domain ratings, referring domains, and backlinks.
Jumping from one tool to another is easy. You can use the Keyword Explorer to find a keyword to target and then directly track your ranking with one click.
The website offers a tooltip helper tool that allows you to hover your mouse over something that isn't clear and get an in-depth explanation.
When you log into the SEMrush Tool, you will find four main modules. These include information about your domains, organic keyword analysis, ad keyword, and site traffic.
You'll also find some other options like
A search bar allows you to enter a domain, keyword, or anything else you wish to explore.
A menu on the left side of the page provides quick links to relevant information, including marketing insights, projects, keyword analytics, and more.
The customer support resources located directly within the dashboard can be used to communicate with the support team or to learn about other resources such as webinars and blogs.
Detailed descriptions of every resource offered. This detail is beneficial for new marketers, who are just starting.
Both Ahrefs and SEMrush have user-friendly dashboards, but Ahrefs is less cluttered and easier to navigate. On the other hand, SEMrush offers dozens of extra tools, including access to customer support resources.
When deciding on which dashboard to use, consider what you value in the user interface, and test out both.
If you're looking to track your website's search engine ranking, rank tracking features can help. You can also use them to monitor your competitors.
Let's take a look at Ahrefs vs. SEMrush to see which tool does a better job.
The Ahrefs Rank Tracker is simpler to use. Just type in the domain name and keywords you want to analyze, and it spits out a report showing you the search engine results page (SERP) ranking for each keyword you enter.
Rank Tracker looks at the ranking performance of keywords and compares them with the top rankings for those keywords. Ahrefs also offers:
You'll see metrics that help you understand your visibility, traffic, average position, and keyword difficulty.
It gives you an idea of whether a keyword would be profitable to target or not.
SEMrush offers a tool called Position Tracking. This is a project tool: you must set it up as a new project. Below are a few of the most popular features of the SEMrush Position Tracking tool:
All subscribers receive regular data updates and mobile search rankings.
The platform provides opportunities to track several SERP features, including Local tracking.
Intuitive reports allow you to track statistics for the pages on your website, as well as the keywords used in those pages.
Identify pages that may be competing with each other using the Cannibalization report.
Ahrefs is a more user-friendly option. It takes seconds to enter a domain name and keywords. From there, you can quickly decide whether to proceed with that keyword or figure out how to rank better for other keywords.
SEMrush allows you to check your mobile rankings and ranking updates daily, which is something Ahrefs does not offer. SEMrush also offers social media rankings, a tool you won't find within the Ahrefs platform. Both are good; let me know in the comments which one you prefer.
Keyword research is closely related to rank tracking, but it's used for deciding which keywords you plan on using for future content rather than those you use now.
When it comes to SEO, keyword research is the most important thing to consider when comparing the two platforms.
The Ahrefs Keyword Explorer provides you with thousands of keyword ideas and filters search results based on the chosen search engine.
Ahrefs supports several features, including:
It can search multiple keywords in a single query and analyze them together. (SEMrush offers a similar feature in Keyword Overview.)
Ahrefs has a variety of keywords for different search engines, including Google, YouTube, Amazon, Bing, Yahoo, Yandex, and other search engines.
When you click on a keyword, you can see its search volume and keyword difficulty, but also other keywords related to it, which you didn't use.
SEMrush's Keyword Magic Tool has over 20 billion keywords for Google. You can type in any keyword you want, and a list of suggested keywords will appear.
The Keyword Magic Tool also lets you:
Show performance metrics by keyword
Filter search results by both broad and exact keyword matches.
Show data like search volume, trends, keyword difficulty, and CPC.
Show the first 100 Google search results for any keyword.
Identify SERP Features and Questions related to each keyword
SEMrush has released a new Keyword Gap Tool that uncovers potentially useful keyword opportunities for you, including both paid and organic keywords.
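The idea behind a keyword-gap analysis can be illustrated with simple set arithmetic. This is a toy sketch with made-up keywords, not SEMrush's implementation:

```python
# A keyword gap is the set of keywords a competitor ranks for that you
# don't; the overlap shows where you already compete head-to-head.
yours = {"seo tools", "rank tracker", "backlink checker"}
competitor = {"seo tools", "keyword research", "site audit", "rank tracker"}

gap = competitor - yours      # opportunities you are missing
shared = competitor & yours   # keywords you both target

assert gap == {"keyword research", "site audit"}
assert shared == {"seo tools", "rank tracker"}
```

Real tools layer search volume, difficulty, and ranking positions on top of this basic set comparison.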
Both of these tools offer keyword research features and allow users to break down complicated tasks into something that can be understood by beginners and advanced users alike.
If you're interested in keyword suggestions, SEMrush appears to have more keyword suggestions than Ahrefs does. It also continues to add new features, like the Keyword Gap tool and SERP Questions recommendations.
Both platforms offer competitor analysis tools, eliminating the need to come up with keywords off the top of your head. Each tool is useful for finding keywords that will be useful for your competition so you know they will be valuable to you.
Ahrefs' domain-comparison tool lets you compare up to five websites (your website and four competitors) side by side. It also shows you how your site ranks against the others on metrics such as backlinks, domain ratings, and more.
Use the Competing Domains section to see a list of your most direct competitors, and explore how many keyword matches your competitors have.
To find more information about your competitor, you can look at the Site Explorer and Content Explorer tools and type in their URL instead of yours.
SEMrush provides a variety of insights into your competitors' marketing tactics. The platform enables you to research your competitors effectively. It also offers several resources for competitor analysis including:
Traffic Analytics helps you identify where your audience comes from, how they engage with your site, what devices visitors use to view your site, and how your audiences overlap with other websites.
SEMrush's Organic Research examines your website's major competitors and shows their organic search rankings, keywords they are ranking for, and even if they are ranking for any (SERP) features and more.
The Market Explorer search field allows you to type in a domain and lists websites or articles similar to what you entered. Market Explorer also allows users to perform in-depth data analytics on these companies and markets.
SEMrush wins here because it has more tools dedicated to competitor analysis than Ahrefs. However, Ahrefs offers a lot of functionality in this area, too. It takes a combination of both tools to gain an advantage over your competition.
Ahrefs pricing:
Lite Monthly: $99/month
Standard Monthly: $179/month
Annually Lite: $990/year
Annually Standard: $1790/year
SEMrush pricing:
Pro Plan: $119.95/month
Business Plan: $449.95/month
Which SEO tool should you choose for digital marketing?
When it comes to keyword research, it can be hard to decide between the two.
Consider choosing Ahrefs if you:
Like friendly and clean interface
Searching for simple keyword suggestions
Want to get more keywords for different search engines like Amazon, Bing, Yahoo, Yandex, Baidu, and more
Consider SEMrush if you:
Want more marketing and SEO features
Need competitor analysis tool
Need to keep your backlinks profile clean
Looking for more keyword suggestions for Google
Both tools are great. Choose the one that meets your requirements, and if you have experience using either Ahrefs or SEMrush, let me know in the comment section which works well for you.
Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.
IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.
Now researchers at MIT have been able both to reduce the size of the qubits and to do so in a way that reduces the interference between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.
“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”
The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.
Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).
Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Photo: Nathan Fiske/MIT
In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.
As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
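A rough parallel-plate estimate shows why stacking a thin, low-loss dielectric between the plates shrinks the footprint so dramatically. The target capacitance, hBN thickness, and permittivity below are assumed order-of-magnitude values for illustration, not figures from the MIT work:

```python
# Parallel-plate capacitor: C = eps0 * eps_r * A / d, solved for the
# plate area A needed to hit a given capacitance with a thin dielectric.
eps0 = 8.854e-12     # F/m, vacuum permittivity
eps_r_hbn = 3.8      # approximate relative permittivity of hBN (assumed)
C_target = 100e-15   # F, a typical transmon shunt capacitance (assumed)
d = 30e-9            # m, a few tens of nanometers of stacked hBN (assumed)

area = C_target * d / (eps0 * eps_r_hbn)   # required plate area, m^2
side_um = area ** 0.5 * 1e6                # square-plate side, micrometers

# The plate shrinks from the ~100 um of a coplanar design to roughly
# 10 um on a side: about a 100x reduction in footprint area, consistent
# with the factor-of-100 density increase reported above.
assert side_um < 15
```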
In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.
On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.
While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.
“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”
This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.
“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.
Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
Match ID: 158 Score: 5.00 source: spectrum.ieee.org age: 295 days qualifiers: 3.57 trade, 1.43 development
Are you a great Chrome user? That’s nice to hear. But first, consider whether there are any essential Chrome extensions currently missing from your browsing life. Here we’re going to share with you the 10 best Chrome extensions that are perfect for everyone. Let’s start.
When you have too many passwords to remember, LastPass remembers them for you.
This Chrome extension is an easy way to save time and increase security. It’s a password manager that will log you into all of your accounts. You only need to remember one password: your LastPass master password.
Save usernames and passwords, and LastPass will log you in automatically.
Fill forms quickly with your saved addresses, credit card numbers, and more.
MozBar is an SEO toolbar extension that makes it easy for you to analyze your web pages' SEO while you surf. You can customize your search so that you see data for a particular region or for all regions. You get data such as website and domain authority and link profile. The status column tells you whether there are any no-followed links to the page. You can also compare link metrics. There is a pro version of MozBar, too.
Grammarly is a real-time grammar checking and spelling tool for online writing. It checks spelling, grammar, and punctuation as you type, and has a dictionary feature that suggests related words. If you write on a mobile phone, Grammarly also has a mobile keyboard app.
VidIQ is a SaaS product and Chrome Extension that makes it easier to manage and optimize your YouTube channels. It keeps you informed about your channel's performance with real-time analytics and powerful insights.
Learn more about insights and statistics beyond YouTube Analytics
Find great videos with the Trending tab.
You can check out any video’s YouTube rankings and see how your own video is doing on the charts.
Track a keyword's history to determine whether it is rising or falling in popularity over time.
Quickly find out which videos are performing the best on YouTube right now.
Let this tool suggest keywords for you to use in your title, description and tags.
ColorZilla is a browser extension that allows you to find out the exact color of any object in your web browser. This is especially useful when you want to match elements on your page to the color of an image.
Advanced Color Picker (similar to Photoshop's)
Ultimate CSS Gradient Generator
The "Webpage Color Analyzer" site helps you determine the palette of colors used in a particular website.
Palette Viewer with 7 pre-installed palettes
Eyedropper - sample the color of any pixel on the page
Color History of recently picked colors
Displays some info about the element, including the tag name, class, id and size.
Auto copy picked colors to clipboard
Get colors of dynamic hover elements
Pick colors from Flash objects
Pick colors at any zoom level
Honey is a Chrome extension that lets you save a product from a website and get notified when it is available at a lower price. It's one of the top Chrome extensions for finding coupon codes whenever you shop online.
Best for finding exclusive prices on Amazon.
A free reward program called Honey Gold.
Searches and filters for the best price fitting your needs.
7. GMass: Powerful Chrome Extension for Gmail Marketers
GMass (or Gmail Mass) lets users compose and send mass emails using Gmail. It's a great tool because you can use it as a replacement for a third-party email-sending platform. You will love GMass for boosting your emailing functionality on the platform.
8. Notion Web Clipper: Chrome Extension for Geeks
It's a Chrome extension for geeks that enables you to highlight and save what you see on the web.
It was designed by Notion, a Google Workspace alternative that helps teams craft better ideas and collaborate effectively.
Save anything online with just one click
Use it on any device
Organize your saved clips quickly
Tag, share and comment on the clips
If you are someone who works online, you need to surf the internet to get your business done. And often there is no time to read or analyze something. But it's important that you do it. Notion Web Clipper will help you with that.
9. WhatFont: Chrome Extension for identifying Any Site Fonts
WhatFont is a Chrome extension that allows web designers to easily identify and compare different fonts on a page. The first time you use it on any page, WhatFont copies the selected page, uses it to find out what fonts are present, and generates an image showing all those fonts in different sizes. Besides the obvious websites like Google or Amazon, you can also use it on sites where embedded fonts are used.
SimilarWeb is an SEO add-on for both Chrome and Firefox. It allows you to check website traffic and key metrics for any website, including engagement rate, traffic ranking, keyword ranking, and traffic source. This is a good tool if you are looking to find new and effective SEO strategies, as well as analyze trends across the web.
Discover keyword trends
Know fresh keywords
Get benefit from the real traffic insights
Analyze engagement metrics
Explore unique visitors data
Analyze your industry's category
Use month to date data
How to Install Chrome Extensions on Android
Everyone knows how to install extensions on a PC, but most people don't know how to install them on an Android phone, so I will show you how to install them on Android.
1. Download Kiwi browser from Play Store and then Open it.
2. Tap the three dots at the top right corner and select Extension.
3. Click on (+From Store) to access the Chrome Web Store, or simply search for the Chrome Web Store and open it.
4. Once you have found an extension, click Add to Chrome. A message will pop up asking you to confirm your choice. Hit OK to install the extension in the Kiwi browser.
5. To manage extensions on the browser, tap the three dots in the upper right corner. Then select Extensions to access a catalog of installed extensions that you can disable, update or remove with just a few clicks.
Your Chrome extensions should install on Android, but there's no guarantee all of them will work, because Chrome extensions are not optimized for Android devices.
We hope this list of the 10 best Chrome extensions that are perfect for everyone will help you pick the right ones. We selected these extensions by matching their features to the needs of different categories of people. Let me know in the comment section which extension you like the most.
Match ID: 159 Score: 5.00 source: www.crunchhype.com age: 302 days qualifiers: 3.57 google, 1.43 amazon
Email is the marketing tool that helps you create a seamless, connected, frictionless buyer journey. More importantly, email marketing allows you to build relationships with prospects, customers, and past customers. It's your chance to speak to them right in their inbox, at a time that suits them. Along with the right message, email can become one of your most powerful marketing channels.
2. What are the benefits of email marketing?
Email marketing is one of the best ways to create long-term relationships with your clients and increase your company's sales.
Benefits of email marketing for business:
Better brand recognition
Statistics of what works best
More traffic to your products/services/newsletter
Most businesses use email marketing and make a great deal of money with it.
3. What is the best day and time to send my marketing emails?
Again, the answer to this question varies from company to company. And again, testing is the way to find out what works best. Typically, weekends and mornings seem to be times when multiple emails are opened, but since your audience may have different habits, it's best to experiment and then use your data to decide.
4. Which metrics should I be looking at?
The two most important metrics for email marketing are open rate and click-through rate. If your emails aren't opened, subscribers will never see your full marketing message, and if they open them but don't click through to your site, your emails won't convert.
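The two metrics above are simple ratios. A minimal sketch, with made-up campaign numbers; note that both rates here are measured against delivered emails, while some tools instead divide clicks by opens (the "click-to-open rate"):

```python
# Open rate and click-through rate, both relative to delivered emails.
def open_rate(opens, delivered):
    return opens / delivered

def click_through_rate(clicks, delivered):
    return clicks / delivered

# Hypothetical campaign numbers, for illustration only.
delivered, opens, clicks = 10_000, 2_300, 460
print(f"open rate: {open_rate(opens, delivered):.1%}")      # 23.0%
print(f"CTR: {click_through_rate(clicks, delivered):.1%}")  # 4.6%
```

Tracking both over time shows where a campaign is leaking: a low open rate points at the subject line, while a low CTR points at the email body or the call to action.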
5. How do I write a decent subject line?
The best subject lines are short and to the point, accurately describing the content of the email, but also catchy and intriguing, so the reader wants to know more. Once again, this is the perfect place for A/B testing, to see what types of subject lines work best with your audience. Your call to action should be clear and simple. It should appear near the top of your email for those who don't finish reading the entire email, then be repeated at the end for those who read all the way through. It should state exactly what you want subscribers to do, for example, "Click here to download the premium theme for free."
6. Is email marketing still effective?
Email marketing is one of the most effective ways for a business to reach its customers directly. Think about it. You don't post something on your site hoping people will visit it. You don't even post something on a social media page and hope fans see it. You're sending something straight to each person's inbox, where they'll definitely see it! Even if they don't open it, they'll still see your subject line and business name every time you send an email, so you're still communicating directly with your audience.
7. How do I grow my email subscriber list? Should I buy an email list or build it myself?
Buying an email list is a waste of time and money. Those email addresses are unverified, and their owners are not interested in your brand. A mailing list is useless if your subscribers do not open your emails. There are better ways to grow your mailing list.
Offer a free ebook hosted on a landing page where visitors must enter their email address to download the file. You can also create a forum page on your website that asks visitors what questions they have about your business and collects their email addresses so you can follow up.
8. How do I prevent my audience from unsubscribing?
If an email's subject line is irrelevant to customers, they will ignore it. If that keeps happening, they will eventually unsubscribe from your emails. So send relevant emails that benefit the customer, and don't send emails that focus only on sales, offers, and discounts.
Share information about your business and offers so you can connect with customers. You can also update them on recent trends in your industry. The basic role of email is first and foremost to connect with customers, so get the most out of this tool.
9. What is the difference between a cold email and a spam email?
Cold emails are mostly sales emails sent with content aligned to the needs of the recipient. They are usually personalized and include a business perspective. However, a cold email is still an unsolicited email, and unsolicited emails can be marked as spam.
If this type of unsolicited email regularly lands in your recipients' inboxes, chances are your emails will soon be diverted to spam or junk folders. The most important way to prevent this is to respect your recipients' choice to opt out of receiving emails from you. Add links so they can easily unsubscribe, and familiarize yourself with the CAN-SPAM Act and its regulations.
10. Where can I find email template?
Almost all email campaign tools provide you with ready-made templates. Whether you use MailChimp or Pardot, you'll get several email templates ready to use.
However, if you want to create a template from scratch, you can do so. Most email campaign tools have an option to paste the HTML code of your own design.
11. What email marketing trend will help marketers succeed in 2022?
Is it a trend to listen to and get to know your customers? I think people realize how bad it feels when a brand or a company obsesses over itself without knowing its customers' personal needs. People who listen empathetically and then provide value based on what they learn will win.
You can approach email marketing in different ways. We have compiled a list of the most frequently asked questions to help you understand how to get started, what constraints you need to keep in mind, and what future developments you should expect. We don't have definitive answers for every situation, and there's always a chance you will encounter something new and different as you market your own business.
Match ID: 160 Score: 5.00 source: www.crunchhype.com age: 304 days qualifiers: 3.57 google, 1.43 development
Government watchdog says £3.5bn aid in 20 years to 2020 failed to achieve aim of stabilising Afghan government
The UK’s £3.5bn aid to Afghanistan between 2000 and 2020 was implicated in corruption and human rights abuses and failed to achieve its primary objective of stabilising the country’s government, an assessment by the UK government’s aid watchdog has found.
Describing the two-decade aid project as the UK’s single most ambitious programme of state building, the Independent Commission for Aid Impact (ICAI) says decisions to spend aid on counterinsurgency operations were flawed, adding that efforts to reduce gender inequality are likely to be wiped out by the Taliban.
Continue reading... Match ID: 161 Score: 4.29 source: www.theguardian.com age: 6 days qualifiers: 4.29 development
NASA Awards Launch Services Task Order for TROPICS CubeSats Mission Wed, 23 Nov 2022 08:35 EST NASA has selected Rocket Lab USA Inc. of Long Beach, California, to provide the launch service for the agency’s Time-Resolved Observations of Precipitation Structure and Storm Intensity with a Constellation of Smallsats (TROPICS) mission, as part of the agency's Venture-class Acquisition of Dedicated and Rideshare (VADR) launch services contract. Match ID: 162 Score: 4.29 source: www.nasa.gov age: 6 days qualifiers: 4.29 california
“Energy and information are two basic currencies of organic and social systems,” the economics Nobelist Herb Simon once observed. “A new technology that alters the terms on which one or the other of these is available to a system can work on it the most profound changes.”
Electric vehicles at scale alter the terms of both basic currencies concurrently. Reliable, secure supplies of minerals and software are core elements for EVs, which represent a “shift from a fuel-intensive to a material-intensive energy system,” according to a report by the International Energy Agency (IEA). For example, the mineral requirements for an EV’s batteries and electric motors are six times that of an internal-combustion-engine (ICE) vehicle, which can increase the average weight of an EV by 340 kilograms (750 pounds). For something like the Ford Lightning, the weight can be more than twice that amount.
EVs also create a shift from an electromechanical-intensive to an information-intensive vehicle. EVs offer a virtual clean slate from which to accelerate the design of safe, software-defined vehicles, with computing and supporting electronics being the prime enabler of a vehicle’s features, functions, and value. Software also allows for the decoupling of the internal mechanical connections needed in an ICE vehicle, permitting an EV to be controlled remotely or autonomously. An added benefit is that the loss of the ICE power train not only reduces the components a vehicle requires but also frees up space for increased passenger comfort and storage.
The effects of Simon’s profound changes are readily apparent, forcing a 120-year-old industry to fundamentally reinvent itself. EVs require automakers to design new manufacturing processes and build plants to make both EVs and their batteries. Ramping up the battery supply chain is the automakers’ current “most challenging topic,” according to VW chief financial officer Arno Antlitz.
Furthermore, Kristin Dziczek, a policy analyst with the Federal Reserve Bank of Chicago, adds that there are scores of new global EV competitors actively seeking to replace the legacy automakers. The “simplicity” of EVs in comparison with ICE vehicles allows these disruptors to compete virtually from scratch with legacy automakers, not only in the car market itself but for the material and labor inputs as well.
Batteries and the supply-chain challenge
Another critical question is whether all the planned battery-plant output will support expected EV production demands. For instance, the United States will require 8 million EV batteries annually by 2030 if its target to make EVs half of all new-vehicle sales is met, with that number rising each year after. As IEA executive director Fatih Birol observes, “Today, the data shows a looming mismatch between the world’s strengthened climate ambitions and the availability of critical minerals that are essential to realizing those ambitions.”
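The 8 million figure follows from simple arithmetic. The sketch below assumes roughly 16 million annual US new-vehicle sales as the baseline; that sales figure is an assumption for illustration, not a number from the article.

```python
# Back-of-envelope check: if EVs make up half of US new-vehicle sales,
# annual battery demand is half the annual sales volume.
us_annual_vehicle_sales = 16_000_000  # assumed, roughly pre-pandemic US level
ev_share = 0.5                        # the 2030 target cited above

batteries_needed = int(us_annual_vehicle_sales * ev_share)
print(batteries_needed)  # prints 8000000, matching the 8 million cited above
```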
This mismatch worries automakers.
GM, Ford, Tesla, and others have moved to secure batteries through 2025, but it could be tricky after that. Rivian Automotive chief executive RJ Scaringe was recently quoted in the Wall Street Journal as saying that “90 to 95 percent of the (battery) supply chain does not exist,” and that the current semiconductor chip shortage is “a small appetizer to what we are about to feel on battery cells over the next two decades.”
The competition for securing raw materials, along with the increased consumer demand, has caused EV prices to spike. Ford has raised the price of the Lightning $6,000 to $8,500, and CEO Jim Farley bluntly states that in regard to material shortages in the foreseeable future, “I don’t think we should be confident in any other outcomes than an increase in prices.”
Stiff Competition for Engineering Talent
One critical area of resource competition is over the limited supply of software and systems engineers with the mechatronics and robotics expertise needed for EVs. Major automakers have moved aggressively to bring more software and systems-engineering expertise on board, rather than have it reside at their suppliers, as they have traditionally done. Automakers feel that if they’re not in control of the software, they’re not in control of their product.
Even for the large auto suppliers, the transition to EVs will not be an easy road. For instance, automakers are demanding these suppliers absorb more cost cuts because automakers are finding EVs so expensive to build. Not only do automakers want to bring cutting-edge software expertise in-house, they want greater inside expertise in critical EV supply-chain components, especially batteries.
The underlying reason for the worry: Supplying sufficient raw materials to existing and planned battery plants as well as to the manufacturers of other renewable energy sources and military systems—who are competing for the same materials—has several complications to overcome. Among them is the need for more mines to provide the metals required, which have spiked in price as demand has increased. For example, while demand for lithium is growing rapidly, investment in mines has significantly lagged the investment that has been aimed toward EVs and battery plants. It can take five or more years to get a lithium mine up and going, but operations can start only after it has secured the required permits, a process that itself can take years.
Mining the raw materials, of course, assumes that there is sufficient refining capability to process them, which, outside of China, is limited. This is especially true in the United States, which, according to a Biden Administration special supply-chain investigative report, has “limited raw material production capacity and virtually no processing capacity.” Consequently, the report states, the United States “exports the limited raw materials produced today to foreign markets.” For example, output from the only nickel mine in the United States, the Eagle mine in Minnesota, is sent to Canada for smelting.
Another solution may be recycling both EV batteries as well as the waste and rejects from battery manufacturing, which can run between 5 and 10 percent of production. Effective recycling of EV batteries “has the potential to reduce primary demand compared to total demand in 2040, by approximately 25 percent for lithium, 35 percent for cobalt and nickel, and 55 percent for copper,” according to a report by the University of Sydney’s Institute for Sustainable Futures.
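The report's offsets can be restated as the share of 2040 demand that would still require primary, mined supply. A small sketch using only the percentages quoted above (cobalt and nickel share the same 35 percent figure); no absolute tonnage figures are assumed:

```python
# Recycling offsets quoted from the Institute for Sustainable Futures
# report, expressed as fractions of total 2040 demand avoided.
recycling_offset = {"lithium": 0.25, "cobalt": 0.35, "nickel": 0.35, "copper": 0.55}

# Remaining share of demand that must still come from mining.
primary_share = {metal: 1.0 - offset for metal, offset in recycling_offset.items()}

for metal, share in primary_share.items():
    print(f"{metal}: {share:.0%} of 2040 demand must still come from mining")
```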
A recent United Nations provision has banned the use of mercury in spacecraft propellant. Although no private company has actually used mercury propellant in a launched spacecraft, the possibility was alarming enough—and the dangers extreme enough—that the ban was enacted just a few years after one U.S.-based startup began toying with the idea. Had the company gone through with its intention to sell mercury propellant thrusters to some of the companies building massive satellite constellations over the coming decade, it would have resulted in Earth’s upper atmosphere being laced with mercury.
Mercury is a neurotoxin. It’s also bio-accumulative, which means it’s absorbed by the body at a faster rate than the body can remove it. The most common way to get mercury poisoning is through eating contaminated seafood. “It’s pretty nasty,” says Michael Bender, the international coordinator of the Zero Mercury Working Group (ZMWG). “Which is why this is one of the very few instances where the governments of the world came together pretty much unanimously and ratified a treaty.”
Bender is referring to the 2013 Minamata Convention on Mercury, a U.N. treaty named for a city in Japan whose residents suffered from mercury poisoning from a nearby chemical factory for decades. Because mercury pollutants easily find their way into the oceans and the atmosphere, it’s virtually impossible for one country to prevent mercury poisoning within its borders. “Mercury—it’s an intercontinental pollutant,” Bender says. “So it required a global treaty.”
Today, the only remaining permitted uses for mercury are in fluorescent lighting and dental amalgams, and even those are being phased out. Mercury is otherwise found as a by-product of other processes, such as the burning of coal. But then a company hit on the idea to use it as a spacecraft propellant.
In 2018, an employee at Apollo Fusion approached the Public Employees for Environmental Responsibility (PEER), a nonprofit that investigates environmental misconduct in the United States. The employee—who has remained anonymous—alleged that the Mountain View, Calif.–based space startup was planning to build and sell thrusters that used mercury propellant to multiple companies building low Earth orbit (LEO) satellite constellations.
Apollo Fusion wasn’t the first to consider using mercury as a propellant. NASA originally tested it in the 1960s and 1970s with two Space Electric Propulsion Tests (SERT), one of which was sent into orbit in 1970. Although the tests demonstrated mercury’s effectiveness as a propellant, the same concerns over the element’s toxicity that have seen it banned in many other industries halted its use by the space agency as well.
“I think it just sort of fell off a lot of folks’ radars,” says Kevin Bell, the staff counsel for PEER. “And then somebody just resurrected the research on it and said, ‘Hey, other than the environmental impact, this was a pretty good idea.’ It would give you a competitive advantage in what I imagine is a pretty tight, competitive market.”
That’s presumably why Apollo Fusion was keen on using it in their thrusters. Apollo Fusion as a startup emerged more or less simultaneously with the rise of massive LEO constellations that use hundreds or thousands of satellites in orbits below 2,000 kilometers to provide continual low-latency coverage. Finding a slightly cheaper, more efficient propellant for one large geostationary satellite doesn’t move the needle much. But doing the same for thousands of satellites that need to be replaced every several years? That’s a much more noticeable discount.
Were it not for mercury’s extreme toxicity, it would actually make an extremely attractive propellant. Apollo Fusion wanted to use a type of ion thruster called a Hall-effect thruster. Ion thrusters strip electrons from the atoms that make up a liquid or gaseous propellant, and then an electric field pushes the resultant ions away from the spacecraft, generating a modest thrust in the opposite direction. The physics of rocket engines means that the performance of these engines increases with the mass of the ion that you can accelerate.
Mercury is heavier than either xenon or krypton, the most commonly used propellants, meaning more thrust per expelled ion. It’s also liquid at room temperature, making it efficient to store and use. And it’s cheap—there’s not a lot of competition with anyone looking to buy mercury.
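A sketch of the physics behind that claim: a singly charged ion of mass m accelerated through a voltage V leaves with momentum p = sqrt(2qVm), so the thrust contributed per expelled ion grows with the square root of the ion mass. The 300-volt value below is an assumed, typical Hall-thruster accelerating voltage, not a figure from the article.

```python
import math

Q = 1.602e-19   # elementary charge, C
U = 1.661e-27   # atomic mass unit, kg
V = 300.0       # assumed accelerating voltage, volts

# Standard atomic weights of the three candidate propellants.
masses_u = {"krypton": 83.80, "xenon": 131.29, "mercury": 200.59}

# Momentum per singly charged ion after falling through voltage V:
# p = sqrt(2 * q * V * m)
p = {name: math.sqrt(2 * Q * V * m * U) for name, m in masses_u.items()}

for name in masses_u:
    print(f"{name}: {p[name] / p['krypton']:.2f}x the momentum per ion of krypton")
```

Since the ratio depends only on sqrt(m), the chosen voltage cancels out: under these assumptions a mercury ion delivers roughly 1.5 times the momentum of a krypton ion and about 1.2 times that of xenon.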
Bender says that ZMWG, alongside PEER, caught wind of Apollo Fusion marketing its mercury-based thrusters to at least three companies deploying LEO constellations—One Web, Planet Labs, and SpaceX. Planet Labs, an Earth-imaging company, has at least 200 CubeSats in low Earth orbit. One Web and SpaceX, both wireless-communication providers, have many more. One Web plans to have nearly 650 satellites in orbit by the end of 2022. SpaceX already has nearly 1,500 active satellites aloft in its Starlink constellation, with an eye toward deploying as many as 30,000 satellites before its constellation is complete. Other constellations, like Amazon’s Kuiper constellation, are also planning to deploy thousands of satellites.
In 2019, a group of researchers in Italy and the United States estimated how much of the mercury used in spacecraft propellant might find its way back into Earth’s atmosphere. They figured that a hypothetical LEO constellation of 2,000 satellites, each carrying 100 kilograms of propellant, would emit 20 tonnes of mercury every year over the course of a 10-year life span. Three quarters of that mercury, the researchers suggested, would eventually wind up in the oceans.
That amounts to 1 percent of global mercury emissions from a constellation only a fraction of the size of the one planned by SpaceX alone. And if multiple constellations adopted the technology, they would represent a significant percentage of global mercury emissions—especially, the researchers warned, as other uses of mercury are phased out as planned in the years ahead.
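The study's headline number can be reproduced directly from the figures quoted above:

```python
# Figures from the 2019 estimate: 2,000 satellites, 100 kg of mercury
# propellant each, exhausted over a 10-year constellation life span,
# with three quarters of emissions eventually reaching the oceans.
satellites = 2_000
propellant_kg = 100
lifespan_years = 10
ocean_fraction = 0.75

per_year_tonnes = satellites * propellant_kg / 1000 / lifespan_years
to_oceans_tonnes = per_year_tonnes * ocean_fraction

print(f"{per_year_tonnes:.0f} tonnes of mercury per year, "
      f"{to_oceans_tonnes:.0f} of them ending up in the oceans")
```

This recovers the 20 tonnes per year cited by the researchers, with 15 tonnes per year reaching the oceans.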
Fortunately, it’s unlikely that any mercury propellant thrusters will even get off the ground. Prior to the fourth meeting of the Minamata Convention, Canada, the European Union, and Norway highlighted the dangers of mercury propellant, alongside ZMWG. The provision to ban mercury usage in satellites was passed on 26 March 2022.
The question now is enforcement. “Obviously, there aren’t any U.N. peacekeepers going into space to shoot down” mercury-based satellites, says Bell. But the 137 countries, including the United States, who are party to the convention have pledged to adhere to its provisions—including the propellant ban.
The United States is notable in that list because as Bender explains, it did not ratify the Minamata Convention via the U.S. Senate but instead deposited with the U.N. an instrument of acceptance. In a 7 November 2013 statement (about one month after the original Minamata Convention was adopted), the U.S. State Department said the country would be able to fulfill its obligations “under existing legislative and regulatory authority.”
Bender says the difference is “weedy” but that this appears to mean that the U.S. government has agreed to adhere to the Minamata Convention’s provisions because it already has similar laws on the books. Except there is still no existing U.S. law or regulation banning mercury propellant. For Bender, that creates some uncertainty around compliance when the provision goes into force in 2025.
Still, with a U.S. company being the first startup to toy with mercury propellant, it might be ideal to have a stronger U.S. ratification of the Minamata Convention before another company hits on the same idea. “There will always be market incentives to cut corners and do something more dangerously,” Bell says.
Update 19 April 2022: In an email, a spokesperson for Astra stated that the company's propulsion system, the Astra Spacecraft Engine, does not use mercury. The spokesperson also stated that Astra has no plans to use mercury propellant and that the company does not have anything in orbit that uses mercury.
Updated 20 April 2022 to clarify that Apollo Fusion was building thrusters that used mercury, not that they had actually used them.
Match ID: 164 Score: 4.29 source: spectrum.ieee.org age: 224 days qualifiers: 2.14 musk, 1.43 amazon, 0.71 startup
First on the list is Copy.ai, an AI-based copywriting tool. A copywriting tool generates content you can post on your blog or video once you give it a few descriptions of the topic you want content on. Copy.ai can help you write Instagram captions, blog ideas, product descriptions, Facebook content, startup ideas, viral ideas, and much more. Just create an account on the website, select a tool, fill in the necessary description, and the AI will generate the content you asked for.
For tutorials, go to their official YouTube channel. An awesome tool that is going to be really handy in the future.
Hotpot.ai offers a collection of AI tools for designers and non-designers alike. It has an "AI Picture Restorer" that removes scratches and restores your old photos, making them look brand new.
The AI Picture Colorizer turns your black-and-white photos into color. There is also a background remover, a picture enlarger, and a lot more for designers; check it out and explore all the tools.
Deep Nostalgia became very popular on the internet when people started making reaction videos of their parents reacting to animated pictures of their grandparents. Deep Nostalgia is a very cool app that will animate any photo of a person.
What makes it really special is that you can upload an old family photo and see the people in it come to life, which is equal parts amazing and creepy if they have already passed away. It's a really impressive service from MyHeritage; I've created a lot of cool animations with my old photos as well as with photos of my grandparents.
Having a nice-looking profile picture is really important if you want a professional feel on your socials. Whether on LinkedIn or Twitter, a distinct, catchy profile picture can make all the difference. That's where PFPMaker comes in: it's a free online tool for creating professional profile pictures that fit you. It generates a lot of options, and you can also make small changes to any of the generated pictures if you want.
Speaking of brands, getting a good logo is one of the most frustrating parts of building one, and Brandmark.io makes it super easy. It will create a logo for your brand within a couple of clicks: go to the website, type in your brand name and slogan (if you have one), enter a few keywords that relate to your brand, pick a color style, and the AI will generate polished logos for you.
You can also make minor edits to the suggested logos to better fit your needs. Downloading the final PNG requires a hefty payment, but if you are just looking for logo ideas, this is a great place to start.
Some of the previous websites included picture-enlarger tools, but Deep-image.ai is a dedicated image enlarger that supports up to 4x enlargement for free. The UI is pretty good, and the tool is fast, with amazing results.
Bigjpg does the same job as Deep-image.ai, but this service offers a few more options: if your photo is an artwork, it scales the image differently than a normal photo; it supports up to 4x enlargement for free; and you can also set noise-reduction options. A very good tool.
Lumen5 is an online marketing video maker that makes it really easy to create branding or informational videos within a couple of clicks. They have really great templates and various aspect ratios for various social media platforms.
You can also edit each element of the video if you don't like the preset, and the best part is that they have a huge library of free stock photos and videos. You can also upload your own videos or any other media. It's definitely a good tool if you don't know how to work with complex software like After Effects but want to create a slick video for your brand.
If you are struggling to find a good name for your brand or YouTube channel, give Namelix a try. It's an AI-based name generator that will suggest names for your brand based on the keywords you give it, and it can generate a logo for your brand too. A pretty cool and genuinely useful tool. That's it; those are my favorite free AI-based tools that you can use right now.
Which one do you like the most? Let me know in the comments below.
Match ID: 165 Score: 4.29 source: www.crunchhype.com age: 309 days qualifiers: 3.57 google, 0.71 startup
Kleiman plans to give a presentation next year about the programmers as part of the IEEE Industry Hub Initiative’s Impact Speaker series. The initiative aims to introduce industry professionals and academics to IEEE and its offerings.
Planning for the event, which is scheduled to be held in Silicon Valley, is underway. Details are to be announced before the end of the year.
The Institute spoke with Kleiman, who teaches Internet technology and governance for lawyers at American University, in Washington, D.C., about her mission to publicize the programmers’ contributions. The interview has been condensed and edited for clarity.
Kathy Kleiman delves into the ENIAC programmers’ lives and the pioneering work they did in her book Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer.Kathy Kleiman
What inspired you to film the documentary?
Kathy Kleiman: The ENIAC was a secret project of the U.S. Army during World War II. It was the first general-purpose, programmable, all-electronic computer—the key to the development of our smartphones, laptops, and tablets today. The ENIAC was a highly experimental computer, with 18,000 vacuum tubes, and some of the leading technologists at the time didn’t think it would work, but it did.
Six months after the war ended, the Army decided to reveal the existence of ENIAC and heavily publicize it. To do so, in February 1946 the Army took a lot of beautiful, formal photos of the computer and the team of engineers that developed it. I found these pictures while researching women in computer science as an undergraduate at
Harvard. At the time, I knew of only two women in computer science: Ada Lovelace and then U.S. Navy Capt. Grace Hopper. [Lovelace was the first computer programmer; Hopper co-developed COBOL, one of the earliest standardized computer languages.] But I was sure there were more women programmers throughout history, so I went looking for them and found the images taken of the ENIAC.
The pictures fascinated me because they had both men and women in them. Some of the photos had just women in front of the computer, but they weren’t named in any of the photos’ captions. I tracked them down after I found their identities, and four of the six original ENIAC programmers responded. They were in their late 70s at the time, and over the course of many years they told me about their work during World War II and how they were recruited by the U.S. Army to be “human computers.”
Eckert and Mauchly promised the U.S. Army that the ENIAC could calculate artillery trajectories in seconds rather than the hours it took to do the calculations by hand. But after they built the 2.5-meter-tall by 24-meter-long computer, they couldn’t get it to work. Out of approximately 100 human computers working for the U.S. Army during World War II, six women were chosen to write a program for the computer to run differential calculus equations. It was hard because the program was complex, memory was very limited, and the direct programming interface that connected the programmers to the ENIAC was hard to use. But the women succeeded. The trajectory program was a great success. But Bartik, McNulty, Meltzer, Snyder, Spence, and Teitelbaum’s contributions to the technology were never recognized. Leading technologists and the public never knew of their work.
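The trajectory program solved ballistic differential equations by stepping them forward numerically. As a loose modern analogy (a sketch of my own, not the ENIAC's actual program; all parameter values here are made-up assumptions), a projectile with quadratic air drag can be integrated with a simple Euler scheme:

```python
import math

def trajectory_range(v0=500.0, angle_deg=45.0, drag=0.00005, dt=0.01):
    """Approximate horizontal range (meters) of a drag-affected projectile,
    found by stepping the equations of motion forward in small time slices."""
    g = 9.81
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= drag * speed * vx * dt          # drag decelerates the horizontal motion
        vy -= (g + drag * speed * vy) * dt    # gravity plus drag on the vertical motion
        x += vx * dt
        y += vy * dt
    return x

print(f"range with drag: {trajectory_range():.0f} m")
print(f"range in vacuum: {trajectory_range(drag=0.0):.0f} m")
```

Each pass through the loop is a handful of multiplications and additions; a hand computer repeated that arithmetic thousands of times per trajectory, which is why a machine that finished in seconds rather than hours mattered so much.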
I was inspired by their story and wanted to share it. I raised funds, researched and recorded 20 hours of broadcast-quality oral histories with the ENIAC programmers—which eventually became the documentary. It allows others to see the women telling their story.
“If we open the doors to history, I think it would make it a lot easier to recruit the wonderful people we are trying to urge to enter engineering, computer science, and related fields.”
Why was the accomplishment of the six women important?
Kleiman: The ENIAC is considered by many to have launched the information age.
We generally think of women leaving the factory and farm jobs they held during World War II and giving them back to the men, but after ENIAC was completed, the six women continued to work for the U.S. Army. They helped world-class mathematicians program the ENIAC to complete “hundred-year problems” [problems that would take 100 years to solve by hand]. They also helped teach the next generation of ENIAC programmers, and some went on to create the foundations of modern programming.
What influenced you to continue telling the ENIAC programmers’ story in your book?
Kleiman: After my documentary premiered at the film festival, young women from tech companies who were in the audience came up to me to share why they were excited to learn the programmers’ story. They were excited to learn that women were an integral part of the history of early computing programming, and were inspired by their stories. Young men also came up to me and shared stories of their grandmothers and great-aunts who programmed computers in the 1960s and ’70s and inspired them to explore careers in computer science.
I met more women and men like the ones in Seattle all over the world, so it seemed like a good idea to tell the full story along with its historical context and background information about the lives of the ENIAC programmers, specifically what happened to them after the computer was completed.
What did you find most rewarding about sharing their story?
Kleiman: It was wonderful and rewarding to get to know the ENIAC programmers. They were incredible, wonderful, warm, brilliant, and exceptional people. Talking to the people who created the programming was inspiring and helped me to see that I could work at the cutting edge too. I entered Internet law as one of the first attorneys in the field because of them.
What I enjoy most is that the women’s experiences inspire young people today just as they inspired me when I was an undergraduate.
Clockwise from top left: Jean Bartik, Kathleen Antonelli, Betty Holberton, Ruth Teitelbaum, Marlyn Meltzer, Frances Spence.Clockwise from top left: The Bartik Family; Bill Mauchly, Priscilla Holberton, Teitelbaum Family, Meltzer Family, Spence Family
Is it important to highlight the contributions made throughout history by women in STEM?
Kleiman: [Actor] Geena Davis founded the Geena Davis Institute on Gender in Media, which works collaboratively with the entertainment industry to dramatically increase the presence of female characters in media. It’s based on the philosophy of “you can’t be what you can’t see.”
That philosophy is both right and wrong. I think you can be what you can’t see, and certainly every pioneer who has ever broken a racial, ethnic, religious, or gender barrier has done so. However, it’s certainly much easier to enter a field if there are role models who look like you. To that end, many computer scientists today are trying to diversify the field. Yet I know from my work in Internet policy and my recent travels across the country for my book tour that many students still feel locked out because of old stereotypes in computing and engineering. By sharing strong stories of pioneers in the fields who are women and people of color, I hope we can open the doors to computing and engineering. I hope that sharing this history, and herstory, makes it much easier to recruit young people to join engineering, computer science, and related fields.
Are you planning on writing more books or producing another documentary?
Kleiman: I would like to continue the story of the ENIAC programmers and write about what happened to them after the war ended. I hope that my next book will delve into the 1950s and uncover more about the history of the Universal Automatic Computer, the first modern commercial computer series, and the diverse group of people who built and programmed it.
Match ID: 166 Score: 4.29 source: spectrum.ieee.org age: 8 days qualifiers: 1.43 seattle, 1.43 development, 1.43 amazon
In the latest push for nuclear power in space, the Pentagon’s Defense Innovation Unit (DIU) awarded a contract in May to Seattle-based Ultra Safe Nuclear to advance its nuclear power and propulsion concepts. The company is making a soccer ball–size radioisotope battery it calls EmberCore. The DIU’s goal is to launch the technology into space for demonstration in 2027.
Ultra Safe Nuclear’s system is intended to be lightweight, scalable, and usable as both a propulsion source and a power source. It will be specifically designed to give small-to-medium-size military spacecraft the ability to maneuver nimbly in the space between Earth orbit and the moon. The DIU effort is part of the U.S. military’s recently announced plans to develop a surveillance network in cislunar space.
Besides speedy space maneuvers, the DIU wants to power sensors and communication systems without having to worry about solar panels pointing in the right direction or batteries having enough charge to work at night, says Adam Schilffarth, director of strategy at Ultra Safe Nuclear. “Right now, if you are trying to take radar imagery in Ukraine through cloudy skies,” he says, “current platforms can only take a very short image because they draw so much power.”
Radioisotope power sources are well suited for small, uncrewed spacecraft, adds Christopher Morrison, who is leading EmberCore’s development. Such sources rely on the radioactive decay of an element that produces energy, as opposed to nuclear fission, which involves splitting atomic nuclei in a controlled chain reaction to release energy. Heat produced by radioactive decay is converted into electricity using thermoelectric devices.
Radioisotopes have provided heat and electricity for spacecraft since 1961. The Curiosity and Perseverance rovers on Mars, and deep-space missions including Cassini, New Horizons, and Voyager all use radioisotope batteries that rely on the decay of plutonium-238, which is nonfissile—unlike plutonium-239, which is used in weapons and power reactors.
For EmberCore, Ultra Safe Nuclear has instead turned to medical isotopes such as cobalt-60 that are easier and cheaper to produce. The materials start out inert, and have to be charged with neutrons to become radioactive. The company encapsulates the material in a proprietary ceramic for safety.
Cobalt-60 has a half-life of about five years (compared with plutonium-238’s roughly 88 years), which is enough for the cislunar missions that the DOD and NASA are looking at, Morrison says. He says that EmberCore should be able to provide 10 times as much power as a plutonium-238 system, providing over 1 million kilowatt-hours of energy using just a few pounds of fuel. “This is a technology that is in many ways commercially viable and potentially more scalable than plutonium-238,” he says.
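The half-life trade-off Morrison describes can be made concrete. A minimal sketch of exponential decay (my own illustration, not Ultra Safe Nuclear's figures; the half-life values are standard reference numbers):

```python
def remaining_fraction(t_years: float, half_life_years: float) -> float:
    """Fraction of a radioisotope's initial activity, and hence heat
    output, remaining after t_years of exponential decay."""
    return 0.5 ** (t_years / half_life_years)

# Cobalt-60 decays quickly (half-life ~5.3 years), which is why it packs
# more power per gram than plutonium-238 (~88 years) but fades sooner.
for isotope, half_life in [("Co-60", 5.27), ("Pu-238", 87.7)]:
    frac = remaining_fraction(5.0, half_life)
    print(f"{isotope}: {frac:.2f} of initial output left after 5 years")
```

A shorter half-life means higher specific power but a shorter useful life, which is why a roughly five-year cislunar mission is a natural fit for cobalt-60.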
One downside of the medical isotopes is that they can produce high-energy X-rays in addition to heat. So Ultra Safe Nuclear wraps the fuel with a radiation-absorbing metal shield. But in the future, the EmberCore system could be designed for scientists to use the X-rays for experiments. “They buy this heater and get an X-ray source for free,” says Schilffarth. “We’ve talked with scientists who right now have to haul pieces of lunar or Martian regolith up to their sensor because the X-ray source is so weak. Now we’re talking about a spotlight that could shine down to do science from a distance.”
Ultra Safe Nuclear’s contract is one of two awarded by the DIU—which aims to speed up the deployment of commercial technology through military use—to develop nuclear power and propulsion for spacecraft. The other contract was awarded to Avalanche Energy, which is making a lunchbox-size fusion device it calls an Orbitron. The device will use electrostatic fields to trap high-speed ions in slowly changing orbits around a negatively charged cathode. Collisions between the ions can result in fusion reactions that produce energetic particles.
Both companies will use nuclear energy to power high-efficiency electric propulsion systems. Electric propulsion technologies such as ion thrusters, which use electromagnetic fields to accelerate ions and generate thrust, are more efficient than chemical rockets, which burn fuel. Solar panels typically power the ion thrusters that satellites use today to change their position and orientation. Schilffarth says that the higher power from EmberCore should enable a velocity change of 10 kilometers per second in orbit, more than today’s electric propulsion systems can deliver.
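A 10 km/s figure is plausible under the ideal rocket equation. Here is a back-of-envelope check (the exhaust velocities and mass ratio below are my own illustrative assumptions, not Ultra Safe Nuclear's design values):

```python
import math

def delta_v(exhaust_velocity_m_s: float, mass_ratio: float) -> float:
    """Tsiolkovsky rocket equation: ideal velocity change from the
    exhaust velocity and the wet-to-dry mass ratio."""
    return exhaust_velocity_m_s * math.log(mass_ratio)

# An ion thruster exhausting at ~30 km/s versus a chemical engine at ~3 km/s,
# each with the same modest mass ratio (wet mass 1.4x dry mass).
print(f"electric: {delta_v(30_000, 1.4)/1000:.1f} km/s")
print(f"chemical: {delta_v(3_000, 1.4)/1000:.1f} km/s")
```

Because delta-v scales linearly with exhaust velocity at a fixed propellant fraction, a tenfold jump in exhaust velocity yields roughly a tenfold jump in achievable velocity change, provided the power source can feed the thruster.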
Ultra Safe Nuclear is also one of three companies developing nuclear fission thermal propulsion systems for NASA and the Department of Energy. Meanwhile, the Defense Advanced Research Projects Agency (DARPA) is seeking companies to develop a fission-based nuclear thermal rocket engine, with demonstrations expected in 2026.
This article appears in the August 2022 print issue as “Spacecraft to Run on Radioactive Decay.”
Match ID: 167 Score: 4.29 source: spectrum.ieee.org age: 173 days qualifiers: 1.43 seattle, 1.43 development, 1.43 apple
It has now been over a month since the U.S. Commerce Department issued new rules that clamped down on the export of certain advanced chips—which have military or AI applications—to Chinese customers.
China has yet to respond—but Beijing has multiple options in its arsenal. It’s unlikely, experts say, that the U.S. actions will be the last fighting word in an industry that is becoming more geopolitically sensitive by the day.
This is not the first time that the U.S. government has constrained the flow of chips to its perceived adversaries. Previously, the United States has blocked chip sales to individual Chinese customers. In response to the Russian invasion of Ukraine earlier this year, the United States (along with several other countries, including South Korea and Taiwan) placed Russia under a chip embargo.
But none of these prior U.S. chip bans were as broad as the new rules, issued on 7 October. “This announcement is perhaps the most expansive export control in decades,” says Sujai Shivakumar, an analyst at the Center for Strategic and International Studies, in Washington.
The rules prohibit the sale, to Chinese customers, of advanced chips with both high performance (at least 300 trillion operations per second, or 300 teraops) and fast interconnect speed (generally, at least 600 gigabytes per second). Nvidia’s A100, for comparison, is capable of over 600 teraops and matches the 600 GB/s interconnect speed. Nvidia’s more impressive H100 can reach nearly 4,000 teraops and 900 GB/s. Both chips, intended for data centers and AI training, cannot be sold to Chinese customers under the new rules.
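As described, a chip is caught only when it clears both bars at once; a part that misses either one escapes the rule. A toy sketch of that logic (my own framing for illustration, not the Commerce Department's legal text):

```python
PERF_THRESHOLD_TOPS = 300           # trillion operations per second
INTERCONNECT_THRESHOLD_GBPS = 600   # gigabytes per second

def is_restricted(perf_tops: float, interconnect_gbps: float) -> bool:
    """True only when a chip clears BOTH thresholds; missing either
    one keeps it outside the export control."""
    return (perf_tops >= PERF_THRESHOLD_TOPS
            and interconnect_gbps >= INTERCONNECT_THRESHOLD_GBPS)

print(is_restricted(600, 600))  # A100-class part: both bars cleared
print(is_restricted(600, 400))  # same compute, throttled interconnect
```

The AND in that condition is what leaves room for compliant variants that keep the compute but cut the interconnect speed below the line.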
Additionally, the rules restrict the sale of fabrication equipment if it will knowingly be used to make certain classes of advanced logic or memory chips. This includes logic chips produced at nodes of 16 nanometers or less (which the likes of Intel, Samsung, and TSMC have done since the early 2010s); NAND long-term memory integrated circuits with at least 128 layers (the state of the art today); or DRAM short-term memory integrated circuits produced at 18 nanometers or less (which Samsung began making in 2016).
The rules restrict not just U.S. companies, but citizens and permanent residents as well. U.S. employees at Chinese semiconductor firms have had to pack up. ASML, a Dutch maker of fabrication equipment, has told U.S. employees to stop servicing Chinese customers.
Speaking of Chinese customers, most—including offices, gamers, designers of smaller chips—probably won’t feel the controls. “Most chip trade and chip production in China is unimpacted,” says Christopher Miller, a historian who studies the semiconductor trade at Tufts University.
The controlled sorts of chips instead go into supercomputers and large data centers, and they’re desirable for training and running large machine-learning models. Most of all, the United States hopes to stop Beijing from using chips to enhance its military—and potentially preempt an invasion of Taiwan, where the vast majority of the world’s semiconductors and microprocessors are produced.
In order to seal off one potential bypass, the controls also apply to non-U.S. firms that rely on U.S.-made equipment or software. For instance, Taiwanese or South Korean chipmakers can’t sell Chinese customers advanced chips that are fabricated with U.S.-made technology.
It’s possible to apply to the U.S. government for an exemption from at least some of the restrictions. Taiwanese fab juggernaut TSMC and South Korean chipmaker SK Hynix, for instance, have already acquired temporary exemptions—for a year. “What happens after that is difficult to say,” says Patrick Schröder, a researcher at Chatham House in London. And the Commerce Department has already stated that such licenses will be the exception, not the rule (although Commerce Department undersecretary Alan Estevez suggested that around two-thirds of licenses get approved).
More export controls may be en route. Estevez indicated that the government is considering placing restrictions on technologies in other sensitive fields—specifically mentioning quantum information science and biotechnology, both of which have seen China-based researchers forge major progress in the past decade.
The Chinese government has so far retorted with harsh words and little action. “We don’t know whether their response will be an immediate reaction or whether they have a longer-term approach to dealing with this,” says Shivakumar. “It’s speculation at this point.”
Beijing could work with foreign companies whose revenue in the lucrative Chinese market is now under threat. “I’m really not aware of a particular company that thinks it’s coming out a winner in this,” says Shivakumar. This week, in the eastern city of Hefei, the Chinese government hosted a chipmakers’ conference whose attendees included U.S. firms AMD, Intel, and Qualcomm.
Nvidia has already responded by introducing a China-specific chip, the A800, which appears to be a modified A100 cut down to meet the requirements. Analysts say that Nvidia’s approach could be a model for other companies to keep up Chinese sales.
There may be other tools the Chinese government can exploit. While China may be dependent on foreign semiconductors, foreign electronics manufacturers are in turn dependent on China for rare-earth metals—and China supplies the supermajority of the world’s rare earths.
There is precedent for China curtailing its rare-earth supply for geopolitical leverage. In 2010, a Chinese fishing boat collided with two Japanese Coast Guard vessels, triggering an international incident when Japanese authorities arrested the boat’s captain. In response, the Chinese government cut off rare-earth exports to Japan for several months.
Certainly, much of the conversation has focused on the U.S. action and the Chinese reaction. But for third parties, the entire dispute delivers constant reminders of just how tense and volatile the chip supply can be. In the European Union, home to less than 10 percent of the world’s microchips market, the debate has bolstered interest in the prospective European Chips Act, a plan to heavily invest in fabrication in Europe. “For Europe in particular, it’s important not to get caught up in this U.S.-China trade issue,” Schröder says.
“The way in which the semiconductor industry has evolved over the past few decades has predicated on a relatively stable geopolitical order,” says Shivakumar. “Obviously, the ground realities have shifted.”
Match ID: 168 Score: 3.57 source: spectrum.ieee.org age: 8 days qualifiers: 3.57 trade
A Destabilizing Hack-and-Leak Operation Hits Moldova Sat, 19 Nov 2022 14:00:00 +0000 Plus: Google’s location snooping ends in a $391 million settlement, Russian code sneaks into US government apps, and the World Cup apps set off alarms. Match ID: 169 Score: 3.57 source: www.wired.com age: 10 days qualifiers: 3.57 google
Are you looking for a way to create content that is both effective and efficient? If so, then you should consider using an AI content generator. AI content generators are a great way to create content that is both engaging and relevant to your audience.
There are a number of different AI content generator tools available on the market, and it can be difficult to know which one is right for you. To help you make the best decision, we have compiled a list of the top 10 AI content generator tools that you should use in 2022.
1. Jasper — AI Content Writing and Generation Tool
Jasper is a content writing and generation tool that uses artificial intelligence to identify the best words and sentences for your writing style and medium quickly, efficiently, and accessibly.
It's trusted by 50,000+ marketers for creating engaging marketing campaigns, ad copy, blog posts, and articles within minutes, work that would traditionally take hours or days.
Special Features:
SEO-optimized blog posts that can rank high on Google and other search engines. This is a huge plus for online businesses that want to generate traffic to their website through content marketing.
99.9% original content: Jasper guarantees that all content it generates will be original, so businesses can focus on their online reputation rather than worrying about penalties from Google for duplicate content.
Long-Form Article Writing – Jasper.ai is also useful for long-form writing, allowing users to create articles of up to 10,000 words without any difficulty. This is ideal for businesses that want to produce in-depth content that will capture their audience’s attention.
Generates a wide variety of content types
Guarantees 100% unique, plagiarism-free content
2. Copy.ai — AI Marketing Copy and Short-Form Content Tool
Copy.ai is a content writing tool that enables users to create marketing copy, social media posts, Facebook ads, and many other formats using more than 90 templates, such as Bullet Points to Blog, General Ads, and Hook Text.
The service can be used for short-form or formal business purposes such as product descriptions, website copy, marketing copy, and sales reports.
Provides a large set of templates: you input your data, and the AI generates around 10 or more options, making it easy to choose.
Smooth and efficient user experience, with a Chrome extension that lets you transfer information from Copy.ai to a content management platform, Google Docs, etc., without having to switch tabs.
Generates content in 25 languages; your input and output languages can differ, which helps if you are not a native English speaker.
The best option for short-length content generation such as market copy, sales reports, blogs, etc.
Facebook community and email support for users to understand the AI better and to interact with other users.
Beginner-friendly user experience with various templates to help the process of content generation.
Free plan and no credit card required.
The free plan from Copy.ai is a welcome sight; however, it is really only suitable for testing the software.
Free Trial – 7 days with 24/7 email support and 100 runs per day.
Pro Plan: $49 and yearly, it will cost you $420 i.e. $35 per month.
Wait! I've got a pretty sweet deal for you. Sign up through the link below, and you'll get (7,000 Free Words Plus 40% OFF) if you upgrade to the paid plan within four days.
3. Frase — AI Content Research and SEO Optimization Tool
Just like Outranking, Frase is an AI that helps you research, create, and optimize your content to make it high quality within seconds. Frase focuses on SEO optimization, shaping content to the liking of search engines by optimizing its keywords.
Generate full-length, optimized content briefs in seconds and review the main keywords, headers, and concepts in your SEO competitors’ content in one intuitive research panel.
Write high-converting, SEO-optimized copy and make writer’s block a thing of the past with automated outlines, blog introductions, product descriptions, FAQs, and more.
An intuitive text editor that uses a topic model to score your content's optimization against your competitors'.
A dashboard that automatically identifies and categorizes your best content opportunities. Frase uses your Google Search Console data to serve up actionable insights about what you should work on next.
Unlike Outranking's, Frase's interface is very user-friendly and accessible.
Content writers who need to do research gain a lot of time to write and ideate instead of juggling from one website to another, since research data on a topic is easily accessible within Frase.
Optimizing content with keyword analysis and SEO optimization has been made easier with Frase's Content Optimization.
Reports on competitors' websites help in optimizing our own articles and websites.
Content briefs make research very easy and efficient.
The paid plans are a bit pricey because they include many tools for content optimization.
Frase provides two plans for all users and a customizable plan for an enterprise or business.
Solo Plan: $14.99/Month and $12/Month if billed yearly with 4 Document Credits for 1 user seat.
Basic Plan: $44.99/month and $39.99/month if billed yearly with 30 Document Credits for 1 user seat.
Team Plan: $114.99/month and $99.99/month if billed yearly for unlimited document credits for 3 users.
*SEO Add-ons and other premium features for $35/month irrespective of the plan.
4. Article Forge — Popular Blog Writing Software for Efficiency and Affordability
Article Forge is another content generator that operates quite differently from the others on this list. Unlike Jasper.ai, which requires you to provide a brief and some information on what you want it to write, this tool only asks for a keyword. From there, it’ll generate a complete article for you.
Article Forge integrates with several other software, including WordAi, RankerX, SEnuke TNG, and SEO Autopilot.
The software takes information from high-ranking websites and then creates more credible articles to rank well in search engines.
If you want to generate content regularly, Article Forge can help. You can set it up to automatically generate articles based on your specific keyword or topic. Or, if you need a lot of content quickly, you can use the bulk content feature to get many articles in a short period.
Excellent for engaging with readers on multiple CMS platforms
No spinner content. Create multiple unique articles
Extremely quick and efficient
One of the cheapest options online
You need to pay attention to the content since it’s not always on point
Only ideal for decent-quality articles – if you’re lucky
What’s excellent about Article Forge is that they provide a 30-day money-back guarantee. You can choose between a monthly or yearly subscription. Unfortunately, they offer only a free trial, with no free plan:
Basic Plan: $27/Month
This plan allows users to produce up to 25k words each month. This is excellent for smaller blogs or those who are just starting.
Standard Plan: $57/month
This plan allows users to produce up to 250k words each month.
Unlimited Plan: $117/month
If you’re looking for an unlimited amount of content, this is the plan for you. You can create as many articles as you want, and there’s no word limit.
It’s important to note that Article Forge guarantees that all content generated through the platform passes Copyscape.
5. Rytr — Free AI Content Generator for Small Businesses, Bloggers, and Students
Rytr.me is a free AI content generator perfect for small businesses, bloggers, and students. The software is easy to use and can generate SEO-friendly blog posts, articles, and school papers in minutes.
Rytr can be used for various purposes, from writing blog posts to creating school papers. You can also generate captions for social media, product descriptions, and meta descriptions.
Rytr supports writing for over 30 languages, so you can easily create content in your native language.
The AI helps you write content in over 30 tones to find the perfect tone for your brand or project.
Rytr has a built-in plagiarism checker that ensures all your content is original and plagiarism free.
Easy to use
Creates unique content
It supports over 30 languages
Multi-tone writing capabilities
It can be slow at times
Grammar and flow could use improvement
Rytr offers a free plan that comes with limited features. It covers up to 5,000 characters generated each month and has access to the built-in plagiarism checker. If you want to use all the features of the software, you can purchase one of the following plans:
Saver Plan: $9/month, $90/year
Generate 100k characters per month
Access 40+ use-cases
Write in 30+ languages
Access 20+ tones
Built-in plagiarism checker
Generate up to 20 images per month with AI
Access to premium community
Create your own custom use-case
Unlimited Plan: $29/month, $290/year
Generate UNLIMITED* characters per month
Access 40+ use-cases
Write in 30+ languages
Access 20+ tones
Built-in plagiarism checker
Generate up to 100 images per month with AI
Access to premium community
Create your own custom use-case
Dedicated account manager
Priority email & chat support
6. Writesonic — Best AI Article Writing Software with a Grammar and Plagiarism Checker
Writesonic is a free, easy-to-use AI content generator. The software is designed to help you create copy for marketing content, websites, and blogs. It's also helpful for small businesses or solopreneurs who need to produce content on a budget.
The tone checker is a great feature that helps ensure your content is consistent with your brand’s voice, which is excellent for crafting cohesive, on-brand content.
The grammar checker is another valuable tool that helps you produce error-free content.
The plagiarism checker is a great way to ensure that your content is original.
Writesonic is free with limited features. The free plan is more like a free trial, providing ten credits. After that, you’d need to upgrade to a paid plan. Here are your options:
Access to all the short-form content templates like Facebook ads, product descriptions, paragraphs, and more.
Awesome tools to help you write short and long-form content like blog posts, ebooks, and more.
7. CopySmith — Produces Quality Content in Seconds
CopySmith is an AI content generator that can be used to create personal and professional documents, blogs, and presentations quickly and easily.
CopySmith also has several templates that you can use to get started quickly.
This software allows you to create product descriptions, landing pages, and more in minutes.
Offers rewritten content that is both unique and plagiarism free.
This feature helps you create product descriptions for your Shopify store that are SEO-friendly and attractive to customers.
This is an excellent tool for new content ideas.
Excellent for generating eCommerce-ready content
No credit card is required for the free trial
The blog content isn’t the best
Better suited for short copy
CopySmith offers a free trial with no credit card required. After the free trial, the paid plans are as follows:
Starter Plan: $19/month
Get 50 credits monthly with up to 20 plagiarism checks.
Professional Plan: $59/month
Upgrade to 400 credits per month with up to 100 plagiarism checks.
Enterprise – Create a custom-tailored plan by contacting the sales team.
8. Hypotenuse.ai — Best AI Writing Software for E-Commerce and Product Descriptions
Hypotenuse.ai is a free online tool that can help you create AI content. It's great for beginners because it allows you to create videos, articles, and infographics with ease. The software has a simple, easy-to-use interface that makes it perfect for newcomers to AI content generation.
You can create custom-tailored copy specific to your audience’s needs. This is impressive since most free AI content generators do not offer this feature.
Hypotenuse takes data from social media sites, websites, and more sources to provide accurate information for your content.
If you’re selling a product online, you can use Hypotenuse to create automated product descriptions that are of high quality and will help you sell more products.
Excellent research capabilities
Automated product descriptions
No free plan
Hypotenuse doesn’t offer a free plan. Instead, it offers a free trial period where you can take the software for a spin before deciding whether it’s the right choice for you. Other than that, here are its paid options:
Starter Plan: $29/month
This plan comes with 100 credits per month (25k words) and one user seat. It’s an excellent option for individuals or small businesses.
Growth Plan: $59/month
This plan comes with 350 credits per month (87.5k words) and one user seat. It’s perfect for larger businesses or agencies.
Enterprise – pricing is custom, so don’t hesitate to contact the company for more information.
9. Kafkai — Leading AI Writing Tool for SEOs and Marketers
Kafkai is an AI content generator and writing software that produces niche-specific content on a wide variety of topics. It offers a user-friendly interface, as well as a high degree of personalization.
Kafkai offers a host of features that make it SEO-ready, including the ability to add keywords and tags to your content.
Kafkai is designed explicitly for creating niche-specific content, which can be a significant advantage for businesses or bloggers looking to target a specific audience.
Kafkai produces high-quality content, a significant advantage for businesses or bloggers looking to set themselves apart from the competition.
Kafkai offers a unique feature that allows you to seed content from other sources, which can be a significant time-saver when creating content.
Quick results with high efficiency
You can add seed content and phrases
It can be used to craft complete articles
Its long-form-content generator isn’t very high quality
Kafkai comes with a free trial to help you understand whether it’s the right choice for you or not. Additionally, you can also take a look at its paid plans:
Writer Plan: $29/month – Create 100 articles per month ($0.29/article)
Newsroom Plan: $49/month – Generate 250 articles per month ($0.20/article)
Printing Press Plan: $129/month – Create up to 1,000 articles per month (roughly $0.13/article)
Industrial Printer Plan: $199/month – Generate 2,500 articles per month ($0.08/article)
10. Peppertype.ai — AI Content Generator for Small Business Owners
Peppertype.ai is an online AI content generator that’s easy to use and best suited to small business owners looking for a powerful copy and content writing tool to help them craft and generate content for many purposes.
You can choose from various pre-trained templates to create your content. This can save you a lot of time since you don’t have to spend time designing your templates or starting entirely from scratch.
Peppertype offers various copywriting frameworks to help you write better content.
Peppertype is lightweight and easy to use. This makes it perfect for beginners who want to get started with AI content generation.
Peppertype’s autocorrect feature automatically corrects your grammar and spelling mistakes as you type. This ensures that your content is free of errors.
Peppertype tracks user engagement data to help you create content that resonates with your audience.
It doesn’t have a steep learning curve
It helps users to create entirely original content
The basic plan comes with access to all of their frameworks and templates
Built-in style editor
More hits than misses on content generated
Tons of typos and grammatical errors
Unfortunately, Peppertype.ai isn’t free. However, it does have a free trial to try out the software before deciding whether it’s the right choice for you. Here are its paid plans:
Personal Plan:
50,000 words included
40+ content types
Notes and Text Editor
Access to templates
Active customer support
Team Plan: $199/month
Everything included in the Personal
Collaborate & share results
Request custom content types
Enterprise – pricing is custom, so please contact the company for more information.
It is no longer a secret that humans are getting overwhelmed with the daily task of creating content. Our lives are busy, and writing blog posts, video scripts, or other types of content is not our day job. AI writers, by comparison, are not only cheaper to hire but also perform at a high level. This article explored 10 writing tools that use AI to create better content; choose the one that meets your requirements and budget. In my opinion, Jasper AI is one of the best tools for producing high-quality content.
If you have any questions, ask in the comments section.
Note: Don't post links in your comments.
Note: This article contains affiliate links, which means we make a small commission if you buy any premium plan from our link.
The Quiet Invasion of 'Big Information' Wed, 09 Nov 2022 14:00:00 +0000 Google and Facebook's privacy violations are common knowledge. But the decisions of a less-known company, Relx, are also impacting people's everyday lives.
Update 4 Nov. 2:45 p.m. EDT: Rocket Lab says its launch was successful, but booster recovery was not. It says it lost telemetry signals from the descending first stage during reentry.
“As standard procedure, we pull the helicopter from the recovery zone if this happens,” a company spokesperson said.
“If at first you don’t succeed….” Rocket Lab, the space launch company with two launchpads on the New Zealand coast, almost did succeed in May at something very difficult: To make its Electron booster reusable (and therefore far less expensive to fly), it tried catching the used first stage—in midair—with a helicopter as it descended by parachute toward the Pacific Ocean.
It came oh-so-close. On its first try, Rocket Lab’s helicopter successfully snagged the parachute with a hook at the end of a long cable—a remarkable piece of planning and flying. But the pilot, in the company’s words, “detected different load characteristics than previously experienced in testing,” and let the rocket fall in the water for a ship to recover it.
So try, try again. Rocket Lab is now making a new recovery attempt, this time with a rocket carrying an atmospheric-research satellite for the Swedish National Space Agency. If the helicopter can catch and hold onto the booster, it will fly it back to Rocket Lab’s production complex near Auckland for possible reuse.
“We’re eager to get the helicopter back out there,” said Peter Beck, Rocket Lab’s CEO and founder.
During Rocket Lab’s launch on 2 May 2022, the helicopter was able to catch the Electron rocket booster, but load issues forced the pilot to let the rocket fall to the water. Rocket Lab
“No changes since the May recovery,” said Morgan Bailey of Rocket Lab in an email to IEEE Spectrum, “but our team has carried out a number of capture rehearsals with test stages in preparation for this launch.”
Satellite operators are watching closely because, after Elon Musk’s SpaceX, Rocket Lab has established itself as a contender in space launches, especially for companies and government agencies with smaller payloads. This is its 32nd Electron launch since 2017. “They’ve become a major player,” said Chad Anderson of Space Capital, a venture capital firm.
Many of the world’s launch bases have historically been near the ocean for good reason: If rockets failed, open water is a relatively safe place for debris to fall. That’s why the United States uses coastal Florida and California, and the European Space Agency uses Kourou, French Guiana, on the northern coast of South America. Rocket Lab started in New Zealand and is expanding to the Virginia coast.
The downside is that saltwater and rocket hardware don’t mix very well; the water is corrosive, and cleanup is expensive. SpaceX goes to great lengths to land its boosters on barges or back at Cape Canaveral; Rocket Lab, whose boosters are smaller, can change its commercial space business dramatically if helicopter recoveries become routine.
The name of the mission for its first booster-recovery attempt was a playful “There and Back Again”; the second, suggested by an American space enthusiast, is “Catch Me if You Can.”
Here’s the plan: The Electron rocket, 18 meters tall, lifts off over the southern Pacific, aiming to place the satellite in a sun-synchronous orbit 585 kilometers high. The first stage, which made up 80 percent of the vehicle’s mass at launch, burns out after the first 70 km. Two minutes and 32 seconds into the flight, it drops off, following a long arc that, on past flights, would have sent it crashing into the ocean, about 280 km downrange.
This artist's conception envisions the helicopter, having successfully snagged the booster's parachute, carrying it back to dry land. A recovery ship is on standby. Rocket Lab
But Rocket Lab has equipped it with heat shielding, a guidance computer and control thrusters, protecting and steering it as it falls tailfirst at up to 8,300 kilometers per hour. Temperatures reach 2,400 °C as it’s slowed by the thickening air around it.
At an altitude of 13 km a small drogue parachute is deployed, followed by a main chute less than a minute later. They slow the booster’s descent to about 36 km/h.
The helicopter, a Sikorsky S-92, is waiting in the landing zone, trailing a grappling hook on a long cable. If all goes well, the helicopter flies over the descending rocket and snags the parachute cables about 2,000 meters above the ocean’s surface. Then it flies back to land with the rocket hanging underneath.
“The main advantage of air capture is that we’re not cleaning salt water out of it,” said Rocket Lab’s Bailey in an earlier interview. “We’re still in the test phase part of the program, and in terms of time and cost savings, that’ll be determined.”
But engines recovered from the ocean after previous launches have been refurbished and test-fired successfully, says Rocket Lab. Like many engineering efforts, it’s a step at a time.
“Being able to refly Electron without too much rework is the aim of the game,” says the company. “If we can achieve high level performance with engine parts recovered from the ocean, imagine what we can do with returned dry engines.”
Is natural gas renewable? Is it a fossil fuel? A casual Google search for natural gas gives the impression that these questions are somehow up for debate. And while natural gas helped reduce carbon emissions as it was widely adopted as a replacement for coal, it is now up against zero-emission energy sources such as wind and solar. So how did natural gas end up in the same bracket as renewables? Josh Toussaint-Strauss explores the lengths fossil fuel companies have gone to in order to convince consumers, voters, and lawmakers that natural gas is somehow a clean energy source.
In this CJ Affiliate guide, I will share everything you need to get started on the platform and give you an in-depth look at the network and how it works.
You will learn how to earn money with the platform, and in case CJ isn't for you, I'll share some of the best CJ affiliate programs and alternatives. By the end of this post, I will also answer some of the FAQs about the platform and give my quick CJ review.
Sounds good? So let’s start.
What is CJ Affiliate?
Commission Junction is an online advertising company that offers affiliate programs for various retailers. Since 1998, it has been known as one of the oldest and most popular affiliate networks.
Commission Junction has consistently ranked among the top 10 affiliate networks.
With in-depth data analysis and an unmatched understanding of clients' needs, CJ has established itself as a leader in performance marketing.
CJ provides advertisers with a variety of tracking, management, and payment options. As an affiliate network, CJ can help you launch multiple affiliate programs from a centralized network.
CJ's experienced team of account managers is available to help at every step—from program set-up to optimization.
CJ offers a variety of well-paying affiliate programs. You can find affiliate programs in almost every niche at CJ. With CJ, you can also find promotional tools such as banners and product feeds, which help you promote your website.
The reporting tools are unparalleled and provide granular data that can assist you in fine-tuning your campaigns for maximum results.
First things first, CJ is free to join! If you are new to the world of affiliate marketing, don't worry—you'll be able to join right away.
The requirements for joining CJ are similar to those of other networks. For example, you must have a blog or a social media following.
Let's explore the details:
High-quality, unique content.
Non-gated content, of course.
No software, coupon/deal, or incentive models.
Your traffic must come from the US or Canada.
Your main traffic source must not be paid.
10K+ monthly traffic on your website.
How Does CJ Affiliate Work?
CJ Affiliate acts as a middleman between advertisers and publishers. Advertisers sign up on CJ to promote their products or services, while publishers sign up to find and join affiliate programs to make money. CJ then tracks the sales or leads generated by the publisher and pays them a commission according to the terms of the affiliate program.
CJ provides a win-win situation for both sides: advertisers get more sales and publishers make money.
In order to free up both parties to concentrate on their job, CJ also handles payments and other technical issues. Now that you know how CJ works, let's learn more about how to sign up and start making money with it.
How to Start Making Money Online Using CJ Affiliate
To get started using CJ's affiliate network, you'll need to register for an account. To do this, you must have a website or social media profile with relevant content and an audience from the US or Canada.
Create a CJ account, complete the application process, and then wait for approval. Don't worry: CJ is not as strict as other networks in approving applications.
Here's how you can sign up for CJ:
Visit cj.com to register as a publisher.
Fill in information such as your nationality, email, password, and more.
Verify your email address now.
You will be transferred right away to your CJ Account Manager, which contains crucial data including network statistics and performance summaries.
Now, from the Account menu, head to Network Profile. You'll need to complete this profile before you can sign up for any affiliate program offered on CJ.
When applying for CJ, you need to share these two pieces of information:
Description of the website (include statistics for your site and more)
Promotion methods (Traffic sources)
The process of setting up a CJ Affiliate account is a way for you to prove to CJ and the merchants that you're a serious affiliate marketer. Your CJ account is complete once you've added your payment information and, if required, your tax certificates. You are now prepared to start making money on CJ: apply for programs, and once approved, start promoting and earn commissions on every sale.
Click on "Advertisers" and then select a category to see the advertisers in your niche. You can apply by clicking the 'Join Program' button after analysing each program's three-month earnings per click and overall earnings. Once you're approved, you'll get affiliate links you can promote anywhere on the Internet.
After you've completed the steps above, you can share your affiliate links in your blog post. You can view performance reports for your affiliate links by visiting the CJ account dashboard. Click "Clients" to see details about clicks, sales, and commissions earned by each client.
With CJ, you can make money promoting great products and services in any niche imaginable! So start joining CJ programs now and watch your business grow.
Best CJ Affiliate Programs in 2022
On CJ, you can find thousands of affiliate programs in almost any niche. Some of the top affiliate programs listed on CJ include:
You'll find a lot of programs to join at CJ, depending on your niche. Just enter your keywords in the search bar, and CJ will show you all the relevant programs that match your criteria. You can further filter the results by commission type, category, or country.
A Quick CJ Affiliate Review: Is It Good Enough?
CJ Affiliate is one of the oldest and most well-known affiliate networks. The platform has been around for over 20 years and has a massive network of advertisers and publishers. The features on CJ Affiliate are easy to use, and it offers advertisers a wide range of tracking, management, and payment options.
CJ offers some great features for publishers too—promotional tools like banners, links, and social media are available to help boost your site's visibility. The only downside is that CJ has a bit of a learning curve, and the approval process can be strict. But overall, CJ Affiliate is an excellent platform for advertisers and publishers.
Top Alternatives and Competitors
CJ Affiliate is a great place to earn an income from affiliate marketing. It offers a wide range of features and options for advertisers and publishers. But if CJ doesn't work for you, plenty of other options are available. Here are some of the top competitors and alternatives in the market today:
Here are some of the best CJ Affiliate alternatives that you can try. Each platform has its own set of features, so make sure to choose one that best suits your needs. Regardless of which CJ alternative you choose, remember that quality content is key to success as a publisher, so ensure to focus on providing high-value, engaging content to your readers.
Frequently Asked Questions About Cj Affiliate Marketplace
Is the CJ Affiliate Network legit?
CJ Affiliate is a legitimate affiliate platform that has earned the trust of many marketers because of its vast network of advertisers and publishers.
How much do CJ affiliates make?
It's not just about CJ; it's about how much effort you put into making money. It is possible to earn anywhere from a few dollars to a few thousand dollars.
How much does it cost to join CJ?
Joining CJ is free of charge. There are no monthly or annual fees. You only pay when you make a sale, and CJ takes a commission of 5-10%.
What are the payment methods accepted by CJ?
You can receive payment via direct deposit, check, or Payoneer. CJ pays out within 20 days of the end of the month, provided your account has earned at least $50 ($100 for those outside the US).
How to get approved for CJ affiliate?
CJ is friendly to both beginners and advanced affiliates. You need a website or social media profile with a solid organic traffic source and make yourself known using your profile description. Be honest, and you'll get approved for CJ's affiliate network.
How to find programs on CJ affiliates?
CJ Affiliate offers a straightforward, user-friendly interface. Just log in to your CJ account and click 'Advertisers' in the menu. Depending on your niche, you can then search for any affiliate program on CJ.
What are the Pros of CJ Affiliate for advertisers?
CJ Affiliate is one of the most advanced affiliate programs available, providing advertisers with a range of features and options including advanced tracking, management, and payment options.
The platform is also easy to use and provides promotional tools like coupons, banners, and widgets that can help increase your sales.
Choosing a program on Commission Junction isn't easy. CJ is a big company with a wide range of affiliate programs, big and small, offering everything from banner ads to text links and much more. The sheer number of choices can seem intimidating at first, especially to new affiliates, which is why we've put together this simple guide for people looking for a successful CJ affiliate program to join. If you have any questions, feel free to ask in the comments.
There are lots of questions floating around about how affiliate marketing works and what to do (and not do) when setting up a business, with plenty of uncertainty surrounding both the personal and business aspects of affiliate marketing. In this post, we will answer the most frequently asked questions about affiliate marketing.
1. What is affiliate marketing?
Affiliate marketing is a way to make money by promoting the products and services of other people and companies. You don't need to create your product or service, just promote existing ones. That's why it's so easy to get started with affiliate marketing. You can even get started with no budget at all!
2. What is an affiliate program?
An affiliate program is a package of information you create for your product, which is then made available to potential publishers. The program will typically include details about the product and its retail value, commission levels, and promotional materials. Many affiliate programs are managed via an affiliate network like ShareASale, which acts as a platform to connect publishers and advertisers, but it is also possible to offer your program directly.
3. What is an affiliate network and how do affiliate networks make money?
Affiliate networks connect publishers to advertisers. Affiliate networks make money by charging fees to the merchants who advertise with them; these merchants are known as advertisers. The percentage of each sale that the advertiser pays is negotiated between the merchant and the affiliate network.
4. What's the difference between affiliate marketing and dropshipping?
Dropshipping is a method of selling that allows you to run an online store without having to stock products. You advertise the products as if you owned them, but when someone makes an order, you place a duplicate order with the distributor at a reduced price, and the distributor takes care of postage and packaging on your behalf. Affiliate marketing, by contrast, is based on referrals: you never handle the order at all, and when a customer buys through your affiliate link, no money passes through your hands.
5. Can affiliate marketing and performance marketing be considered the same thing?
Performance marketing is a method of marketing that pays for performance, such as when a sale is made or an ad is clicked. It can include methods like PPC (pay-per-click) or display advertising. Affiliate marketing is one form of performance marketing, where commissions are paid to affiliates when a visitor clicks their affiliate link and makes a purchase or completes an action.
6. Is it possible to promote affiliate offers on mobile devices?
Smartphones are essentially miniature computers, so publishers can display the same websites and offers that are available on a PC. But mobiles also offer specific tools not available on computers, and these can be used to good effect for publishers. Publishers can optimize their ads for mobile users by making them easy to access by this audience. Publishers can also make good use of text and instant messaging to promote their offers. As the mobile market is predicted to make up 80% of traffic in the future, publishers who do not promote on mobile devices are missing out on a big opportunity.
7. Where do I find qualified publishers?
The best way to find affiliate publishers is on reputable networks like ShareASale, CJ (Commission Junction), Awin, and Impact Radius. These networks have a strict application process and compliance checks, which means that all affiliates are trustworthy.
8. What is an affiliate disclosure statement?
An affiliate disclosure statement discloses to the reader that there may be affiliate links on a website, for which a commission may be paid to the publisher if visitors follow these links and make purchases.
9. Does social media activity play a significant role in affiliate marketing?
Publishers promote their programs through a variety of means, including blogs, websites, email marketing, and pay-per-click ads. Social media has a huge interactive audience, making this platform a good source of potential traffic.
10. What is a super affiliate?
A super affiliate is an affiliate partner who consistently drives a large majority of sales from any program they promote, compared to the other affiliate partners involved in that program. Super affiliates can make a lot of money from affiliate marketing; Pat Flynn, for example, earned more than $50,000 from affiliate marketing in 2013.
11. How do we track publisher sales activity?
Publishers can be identified by their publisher ID, which is used in tracking cookies to determine which publishers generate sales. The activity is then viewed within a network's dashboard.
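As a minimal sketch of how this kind of attribution works (the `pid` parameter name and URL format here are illustrative assumptions, not CJ's actual tracking scheme):

```python
from urllib.parse import urlencode, urlparse, parse_qs

def tag_link(product_url, publisher_id):
    """Append a hypothetical publisher-ID parameter to a product link.

    Assumes product_url has no existing query string (sketch only).
    """
    return f"{product_url}?{urlencode({'pid': publisher_id})}"

def publisher_from_click(url):
    """Recover the publisher ID from a clicked link, so the network
    knows which publisher to credit for the resulting sale."""
    return parse_qs(urlparse(url).query).get("pid", [None])[0]

link = tag_link("https://example.com/shoes", "pub-12345")
print(publisher_from_click(link))  # pub-12345
```

In practice the network stores this ID in a tracking cookie when the link is clicked, then reads it back at checkout to credit the right publisher in its dashboard.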
12. Could we set up an affiliate program in multiple countries?
Because the Internet is so widespread, affiliate programs can be promoted in any country. Affiliate strategies that are set internationally need to be tailored to the language of the targeted country.
13. How can affiliate marketing help my business?
Affiliate marketing can help you grow your business in the following ways:
It allows you to save time and money on marketing, which frees you up to focus on other aspects of your business.
You get access to friendly marketers who are eager to help you succeed.
It also helps you to promote your products by sharing links and banners with a new audience.
It offers high ROI(Return on investment) and is cost-effective.
14. How do I find quality publishers?
One of the best ways to work with qualified affiliates is to hire an affiliate marketing agency that works with all the networks. Affiliates are carefully selected and go through a rigorous application process to be included in the network.
15. How Can we Promote Affiliate Links?
Affiliate marketing is generally associated with websites, but there are other ways to promote your affiliate links, including:
A website or blog
Through email marketing and newsletter
Social media, like Facebook, Instagram, or Twitter.
Leave a comment on blogs or forums.
Write an e-book or other digital product.
16. Do you have to pay to sign up for an affiliate program?
To build your affiliate marketing business, you don't have to invest money in the beginning. You can sign up for free with any affiliate network and start promoting their brands right away.
17. What is a commission rate?
Commission rates are typically based on a percentage of the total sale and in some cases can also be a flat fee for each transaction. The rates are set by the merchant.
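The two payout models above can be sketched in a few lines (the rates and amounts here are made-up examples, not any merchant's actual terms):

```python
def commission(sale_amount, rate=None, flat_fee=None):
    """Return the publisher payout for one transaction.

    The merchant sets either a percentage rate or a flat fee per sale.
    """
    if rate is not None:
        return round(sale_amount * rate, 2)
    if flat_fee is not None:
        return flat_fee
    raise ValueError("merchant must define a rate or a flat fee")

# A 7% rate on a $120 sale:
print(commission(120.00, rate=0.07))   # 8.4
# A flat $15 per signup, regardless of order value:
print(commission(0, flat_fee=15.00))   # 15.0
```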
Who manages your affiliate program?
Some merchants run their affiliate programs internally, while others choose to contract out management to a network or an external agency.
18. What is a cookie?
Cookies are small pieces of data that web browsers store, holding information such as user preferences, login or registration data, and shopping cart contents. When someone clicks on your affiliate link, a cookie is placed on the user's computer or mobile device to remember the link or ad that the visitor clicked on. Even if the user leaves your site and comes back a week later to make a purchase, you will still get credit for the sale and receive a commission, provided the purchase falls within the cookie's duration.
19. How long do cookies last?
The merchant determines the duration of a cookie, also known as its “cookie life.” The most common length for an affiliate program is 30 days. If someone clicks on your affiliate link, you’ll be paid a commission if they purchase within 30 days of the click.
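The attribution window described above amounts to a simple date check, sketched here with an assumed 30-day cookie life:

```python
from datetime import datetime, timedelta

COOKIE_LIFE_DAYS = 30  # set by the merchant; 30 days is the most common

def qualifies_for_commission(click_time, purchase_time,
                             cookie_days=COOKIE_LIFE_DAYS):
    """True if the purchase happened after the click and within
    the cookie's lifetime."""
    elapsed = purchase_time - click_time
    return timedelta(0) <= elapsed <= timedelta(days=cookie_days)

click = datetime(2022, 11, 1, 9, 0)
print(qualifies_for_commission(click, datetime(2022, 11, 20)))  # True: day 19
print(qualifies_for_commission(click, datetime(2022, 12, 15)))  # False: day 44
```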
Most new affiliates are eager to begin their affiliate marketing business. Unfortunately, there is a lot of bad information out there that can lead inexperienced affiliates astray. Hopefully, the answer to your question will provide clarity on how affiliate marketing works and the pitfalls you can avoid. Most importantly, keep in mind that success in affiliate marketing takes some time. Don't be discouraged if you're not immediately making sales or earning money. It takes most new affiliates months to make a full-time income.
Match ID: 175 Score: 3.57 source: www.crunchhype.com age: 181 days qualifiers: 3.57 google
If you want to pay online, you need to register an account and provide credit card information; if you don't have a credit card, you can pay by bank transfer. With the rise of cryptocurrencies, these methods may become obsolete.
Imagine a world in which you can do transactions and many other things without having to give your personal information. A world in which you don’t need to rely on banks or governments anymore. Sounds amazing, right? That’s exactly what blockchain technology allows us to do.
In some ways it is like your computer's hard drive: blockchain is a technology that lets you store data in digital blocks, which are connected together like links in a chain.
Blockchain technology was originally described in 1991 by two researchers, Stuart Haber and W. Scott Stornetta, who proposed the system as a way to ensure that document timestamps could not be tampered with.
A few years later, in 1998, computer scientist Nick Szabo proposed using similar technology to secure a digital payment system he called "bit gold." However, the idea was not implemented until Satoshi Nakamoto introduced the first blockchain as the foundation of Bitcoin.
So, What is Blockchain?
A blockchain is a distributed database shared between the nodes of a computer network. It saves information in digital format. Many people first heard of blockchain technology when they started to look up information about bitcoin.
Blockchain is used in cryptocurrency systems to ensure secure, decentralized records of transactions.
Blockchain allowed people to guarantee the fidelity and security of a record of data without the need for a third party to ensure accuracy.
To understand how a blockchain works, consider these basic steps:
Blockchain collects information in "blocks."
A block has a storage capacity; once it is used up, the block is closed and linked to the previously filled block.
Blocks form chains, which are called "blockchains."
New information is added to the newest block until its capacity is full, and then the process repeats with a fresh block.
Each block in the chain has an exact timestamp and can't be changed.
Let’s get to know more about the blockchain.
How does blockchain work?
Blockchain records digital information and distributes it across the network. The information is shared among many users and stored in an immutable, permanent ledger that can't be changed or destroyed. That's why blockchain is also called "distributed ledger technology" (DLT).
Here’s how it works:
Someone initiates a transaction.
The transaction is transmitted throughout the network.
A network of computers confirms the transaction.
Once confirmed, the transaction is added to a block.
The blocks are linked together to create a history.
And that’s the beauty of it! The process may seem complicated, but with modern technology it is completed in minutes. And because technology is advancing rapidly, I expect things to move even more quickly in the future.
A new transaction is added to the system. It is then relayed to a network of computers located around the world. The computers then solve equations to ensure the authenticity of the transaction.
Once a transaction is confirmed, it is placed in a block. All of the blocks are chained together to create a permanent history of every transaction.
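The hash-linking described above can be sketched in a few lines of Python. This is an illustrative toy that assumes SHA-256 as the hash function; a real blockchain adds networking, consensus, and mining on top of this structure.

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Build a block that carries a timestamp, its data, and a link (hash)
    to the previous block. The block's own hash covers all of its contents."""
    block = {
        "timestamp": time.time(),      # each block has an exact timestamp
        "transactions": transactions,  # the confirmed transactions
        "prev_hash": prev_hash,        # link to the previous block
    }
    payload = json.dumps(
        {k: block[k] for k in ("timestamp", "transactions", "prev_hash")},
        sort_keys=True,
    ).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# The first ("genesis") block has no real predecessor, so a zero hash is used.
genesis = make_block(["Alice pays Bob 5"], prev_hash="0" * 64)
second = make_block(["Bob pays Carol 2"], prev_hash=genesis["hash"])

print(second["prev_hash"] == genesis["hash"])  # the blocks form a chain
```

Because each block's hash depends on the previous block's hash, changing any old block would break every link after it.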
How are Blockchains used?
Even though blockchain is integral to cryptocurrency, it has other applications. For example, blockchain can be used to store reliable records of transactions. Many people confuse blockchain with cryptocurrencies like Bitcoin and Ethereum, but they are not the same thing.
Blockchain is already being adopted by some big-name companies, such as Walmart, AIG, Siemens, Pfizer, and Unilever. For example, IBM Food Trust uses blockchain to track food's journey to its final destination.
Although some of you may consider this practice excessive, food suppliers and manufacturers adhere to the policy of tracing their products because bacteria such as E. coli and Salmonella have been found in packaged foods. In addition, there have been isolated cases where dangerous allergens such as peanuts have accidentally been introduced into certain products.
Tracing and identifying the sources of an outbreak is a challenging task that can take months or years. Thanks to the Blockchain, however, companies now know exactly where their food has been—so they can trace its location and prevent future outbreaks.
Blockchain technology allows systems to react much faster in the event of a hazard. It also has many other uses in the modern world.
What is Blockchain Decentralization?
Blockchain technology can be secure even though it is public: anyone with an internet connection can access it.
Have you ever been in a situation where you had all your data stored at one place and that one secure place got compromised? Wouldn't it be great if there was a way to prevent your data from leaking out even when the security of your storage systems is compromised?
Blockchain technology avoids this situation by using multiple computers at different locations to store copies of the transaction information. If one computer (node) has a problem with a transaction, the other nodes are unaffected.
Instead, the other nodes use the correct information to cross-check and correct the faulty node. This is called "decentralization," meaning all the information is stored in multiple places.
Blockchain guarantees your data's authenticity: not just its accuracy, but also its irreversibility. It can also be used to store records that are hard to track otherwise, like legal contracts, state identifications, or a company's product inventory.
Pros and Cons of Blockchain
Blockchain has many advantages and disadvantages.
Pros:
Accuracy is increased because there is no human involvement in the verification process.
Decentralization makes information harder to tamper with.
Safe, private, and easy transactions
Provides a banking alternative and safe storage of personal information
Cons:
Data storage has limits.
Regulations are always changing, as they differ from place to place.
It carries a risk of being used for illicit activities.
Frequently Asked Questions About Blockchain
I’ll answer the most frequently asked questions about blockchain in this section.
Is Blockchain a cryptocurrency?
Blockchain is not a cryptocurrency but a technology that makes cryptocurrencies possible. It's a digital ledger that records every transaction seamlessly.
Is it possible for Blockchain to be hacked?
Yes, blockchain can theoretically be hacked, but doing so is extremely difficult in practice. A network of users constantly reviews it, which makes hacking the blockchain impractical.
What is the most prominent blockchain company?
Coinbase Global is currently among the biggest blockchain companies in the world. The company provides infrastructure, services, and technology for the digital currency economy.
Who owns Blockchain?
Blockchain is a decentralized technology. It is a chain of distributed ledgers connected by nodes, and each node can be any electronic device. Thus, no one owns the blockchain.
What is the difference between Bitcoin and Blockchain technology?
Bitcoin is a cryptocurrency that is powered by blockchain technology, while blockchain is the distributed ledger that records cryptocurrency transactions.
What is the difference between Blockchain and a Database?
Generally, a database is a collection of data that can be stored and organized using a database management system. People who have access to the database can view or edit the information stored there. Databases are typically implemented with a client-server architecture. A blockchain, by contrast, is a growing list of records, called blocks, stored in a distributed system. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction information. Modifying data is prevented by the blockchain's design: the technology enables decentralized control and eliminates the risk of data modification by other parties.
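To make the contrast concrete, here is a toy hash chain in Python (purely illustrative; the record contents are invented). Unlike editing a database row, editing an early record changes every hash after it, which is what makes tampering detectable.

```python
import hashlib

def chain_hashes(records):
    """Hash each record together with the previous hash, like blocks in a chain."""
    prev = "0" * 64  # placeholder hash for the start of the chain
    hashes = []
    for rec in records:
        prev = hashlib.sha256((prev + rec).encode()).hexdigest()
        hashes.append(prev)
    return hashes

records = ["tx1: A->B 5", "tx2: B->C 2", "tx3: C->A 1"]
original = chain_hashes(records)

records[0] = "tx1: A->B 500"        # tamper with the earliest record
tampered = chain_hashes(records)

# Every hash from the tampered record onward changes, exposing the edit.
print(original[0] != tampered[0], original[2] != tampered[2])
```

In a database, the same edit would leave no structural trace; in a hash chain, every node comparing hashes spots the mismatch immediately.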
Blockchain has a wide spectrum of applications, and over the next 5-10 years we will likely see it integrated into all sorts of industries. From finance to healthcare, blockchain could revolutionize the way we store and share data. Although there is some hesitation to adopt blockchain systems right now, that hesitation is likely to fade as people become more comfortable with the technology and understand how it can work for them; owners, CEOs, and entrepreneurs alike will then be quick to leverage blockchain for their own gain. I hope you liked this article; if you have any questions, let me know in the comments section.
Are you searching for an e-commerce platform to help you build an online store and sell products?
In this Sellfy review, we'll talk about how this eCommerce platform can let you sell digital products while keeping full control of your marketing.
And the best part? Starting your business can be done in just five minutes.
Let us then talk about the Sellfy platform and all the benefits it can bring to your business.
What is Sellfy?
Sellfy is an eCommerce solution that allows digital content creators, including writers, illustrators, designers, musicians, and filmmakers, to sell their products online. Sellfy provides a customizable storefront where users can display their digital products and embed "Buy Now" buttons on their website or blog. Sellfy product pages enable users to showcase their products from different angles with multiple images and previews from Soundcloud, Vimeo, and YouTube. Files of up to 2GB can be uploaded to Sellfy, and the company offers unlimited bandwidth and secure file storage. Users can also embed their entire store or individual project widgets in their site, with the ability to preview how widgets will appear before they are displayed.
Sellfy is a powerful e-commerce platform that helps you personalize your online storefront. You can add your logo, change colors, revise navigation, and edit the layout of your store. Sellfy also allows you to create a full shopping cart so customers can purchase multiple items. And Sellfy gives you the ability to set your language or let customers see a translated version of your store based on their location.
Sellfy gives you the option to host your store directly on its platform, add a custom domain to your store, and use it as an embedded storefront on your website. Sellfy also optimizes its store offerings for mobile devices, allowing for a seamless checkout experience.
Sellfy allows creators to host and sell all of their digital products on one platform. Sellfy does not place storage limits on your store but recommends that files be no larger than 5GB. Creators can sell both standard and subscription-based products in any file format supported by the online marketplace. Customers can access their products instantly after purchase; there is no waiting period.
You can organize your store by creating your product categories, sorting by any characteristic you choose. Your title, description, and the image will be included on each product page. In this way, customers can immediately evaluate all of your products. You can offer different pricing options for all of your products, including "pay what you want," in which the price is entirely up to the customer. This option allows you to give customers control over the cost of individual items (without a minimum price) or to set pricing minimums—a good option if you're in a competitive market or when you have higher-end products. You can also offer set prices per product as well as free products to help build your store's popularity.
Sellfy is ideal for selling digital content, such as ebooks. But it does not allow you to sell copyrighted material that you don't have the rights to distribute.
Sellfy offers several ways to share your store, enabling you to promote your business on different platforms. Sellfy lets you integrate it with your existing website using "buy now" buttons, embed your entire storefront, or embed certain products so you can reach more people. Sellfy also enables you to connect with your Facebook page and YouTube channel, maximizing your visibility.
Payments and security
Sellfy is a simple online platform that allows customers to buy your products directly through your store. Sellfy has two payment processing options: PayPal and Stripe. You will receive instant payments with both of these processors, and your customer data is protected by Sellfy's secure (PCI-compliant) payment security measures. In addition to payment security, Sellfy provides anti-fraud tools to help protect your products including PDF stamping, unique download links, and limited download attempts.
Marketing and analytics tools
The Sellfy platform includes marketing and analytics tools to help you manage your online store. You can send email product updates and collect newsletter subscribers through the platform. With Sellfy, you can also offer discount codes and product upsells, as well as create and track Facebook and Twitter ads for your store. The software's analytics dashboard will help you track your best-performing products, generated revenue, traffic channels, top locations, and overall store performance.
To expand functionality and make your e-commerce store run more efficiently, Sellfy offers several integrations. Google Analytics and Webhooks, as well as integrations with Patreon and Facebook Live Chat, are just a few of the options available. Sellfy allows you to connect to Zapier, which gives you access to hundreds of third-party apps, including tools like Mailchimp, Trello, Salesforce, and more.
Sellfy has its benefits and downsides, but fortunately, the pros outweigh the cons.
Pros:
It takes only a few minutes to set up an online store and begin selling products.
You can sell your products on a single storefront, even if you are selling multiple product types.
Sellfy supports selling a variety of product types, including physical items, digital goods, subscriptions, and print-on-demand products.
Sellfy offers a free plan for those who want to test out the features before committing to a paid plan.
You get paid the same day you make a sale. Sellfy doesn't delay your funds as some other payment processors do.
Print-on-demand services are available directly from your store, so you can sell merchandise to fans without setting up an integration.
You can conduct all store-related activities via the mobile app and all online stores have mobile responsive designs.
Everything you need to make your website is included: a custom domain, hosting, security for your files, and the ability to customize your store.
The file security features help protect your digital property by allowing PDF stamping, download limits, and SSL encryption.
Sellfy provides unlimited support.
Sellfy provides simple and intuitive tax and VAT configuration settings.
Marketing strategies include coupons, email marketing, upselling, tracking pixels, and cart abandonment.
Cons:
Although the free plan is helpful, it limits you to only 10 products.
Payment plans often require an upgrade if you exceed a certain sales amount per year.
The storefront designs are clean, but they're not unique templates for creating a completely different brand image.
Sellfy's branding is removed from your hosted product when you upgrade to the $49 per month Business plan.
The free plan does not allow for selling digital or subscription products.
In this article, we have taken a look at some of the biggest benefits of using Sellfy for eCommerce. Once you compare these benefits to what you get with other platforms such as Shopify, you should find it worth your time to consider Sellfy for your business. After reading this article, most of your questions should be answered, but if you still have some, let me know in the comments section below and I will be happy to answer them.
Note: This article contains affiliate links, which means we make a small commission if you buy a Sellfy premium plan through our link.
Content creation is one of the biggest struggles for many marketers and business owners. It often requires both time and financial resources, especially if you plan to hire a writer. Today, we have a fantastic opportunity to use other people's products by purchasing Private Label Rights.
To find a good PLR website, first, determine the type of products you want to acquire. One way to do this is to choose among membership sites or PLR product stores. Following are 10 great sites that offer products in both categories.
What are PLR websites?
Private Label Rights (PLR) products are digital products that can be in the form of an ebook, software, online course videos, value-packed articles, etc. You can use these products with some adjustments to sell as your own under your own brand and keep all the money and profit yourself without wasting your time on product creation. The truth is that locating the best website for PLR materials can be a time-consuming and expensive exercise. That’s why we have researched, analyzed, and ranked the best 10 websites:
1. PLR.me
PLR.me is one of the best places to get PLR content in 2021-2022. It offers a content marketing system that comes with courses, brandable tools, and more, and it is among the most trusted PLR websites. PLR.me features smart PLR tools for health and wellness professionals. The platform, built on advanced caching technology, has been well received by big brands such as the Toronto Sun and Entrepreneur. The best thing about this website is its content marketing automation tools.
Pay-as-you-go Plan – $22
100 Monthly Plan – $99/month
400 Annual Plan – $379/year
800 Annual Plan – $579/year
2500 Annual Plan – $990/year
Access over 15,940+ ready-to-use PLR coaching resources.
Content marketing and sliding tools are provided by the site.
You can create courses, products, webinars, emails, and nearly anything else you can dream of.
You can cancel your subscription anytime.
Compared to other top PLR sites, this one is a bit more expensive.
2. InDigitalWorks
InDigitalWorks is a leading private label rights membership website established in 2008. As of now, more than 100,000 members from around the globe have joined the platform. The site offers thousands of ready-to-sell digital products for online businesses in every niche imaginable. InDigitalWorks features hundreds of ebooks, software applications, templates, graphics, and videos that you can sell right away.
3 Months Plan – $39
1 Year Plan – $69
Lifetime Plan – $79
IndigitalWorks promotes new authors by providing them with 200 free products for download.
Largest and most reputable private label rights membership site.
20000+ digital products
137 training videos provided by experts to help beginners set up and grow their online presence for free.
10 GB of web hosting will be available on a reliable server.
Some users report the frustration of not getting the help they need.
3. BuyQualityPLR
BuyQualityPLR is a top PLR site of 2021-2022 and a source for major internet marketing products and resources. Whether you're an affiliate marketer, product creator, or course seller, BuyQualityPLR can point you in the right direction. You will find several eBooks and digital products related to the health and fitness niche, along with a series of security-based products. If you are searching for digital products, resell rights products, private label rights products, or internet marketing products, BuyQualityPLR is among the best websites for your needs.
Free PLR articles packs, ebooks, and other digital products are available
Price ranges from 3.99$ to 99.9$
Everything on this site is written by professionals
The quick download features available
Doesn't provide membership.
Offers thousand of PLR content in many niches
Valuable courses available
You can't buy all content because it doesn't provide membership
4. IDPLR
The IDPLR website has helped thousands of internet marketers since 2008. The site follows a membership approach and gives you access to thousands of PLR products in different niches. The best thing about this site is the quality of the products, which is extremely impressive. It is one of the best PLR websites of 2021-2022, offering over 200k+ high-quality articles as well as graphics, templates, ebooks, and audio.
3 Months ACCESS: $39
1 YEAR ACCESS: $69
LIFETIME ACCESS: $79
You will have access to over 12,590 PLR products.
You will get access to training tutorials and Courses in a Gold membership.
10 GB of web hosting will be available on a reliable server.
You will receive 3D eCover Software
It offers an unlimited download limit
Most important, you will get a 30 day money-back guarantee
A few products are available for free membership.
5. PLRmines
PLRmines is a leading digital product library for private label rights products. The site provides useful information on products you can use to grow your business, as well as licenses for reselling the content. You can either purchase a membership or get access through a free trial, and you can find unlimited high-quality resources via the site's paid or free membership. Overall, the site is an excellent resource for finding outstanding private label rights content.
Lifetime membership: $97
4000+ ebooks from top categories
Members have access to more than 660 instructional videos covering all kinds of topics in a membership area.
You will receive outstanding graphics that are ready to use.
They also offer a variety of helpful resources and tools, such as PLR blogs, WordPress themes, and plugins
The free membership won't give you much value.
6. Super-Resell
Super-Resell is another remarkable provider of PLR material. The platform was established in 2009 and offers valuable PLR content to users. Currently, the platform offers standard lifetime memberships and monthly plans at an affordable price. Interested users can purchase up to 10,000 products with digital rights or resale rights. Super-Resell offers a wide range of products such as ready-made websites, article packs, videos, ebooks, software, templates, and graphics.
6 Months Membership: $49.90
Lifetime membership: $129
It offers you products that come with sales pages and those without sales pages.
You'll find thousands of digital products that will help your business grow.
Daily News update
The company has set up an automatic renewal system, which can result in charges even when you are not using the service.
7. Unstoppable PLR
UnStoppablePLR was launched in 2006 by Aurelius Tjin, an internet marketer. Over the last 15 years, UnStoppablePLR has provided massive value to users by offering high-quality PLR content. The site is one of the best PLR sites because of its affordability and flexibility.
Regular Price: $29/Month
You’ll get 30 PLR articles in various niches for free.
100% money-back guarantee.
Members get access to community
It gives you access to professionally designed graphics and much more.
People often complain that not enough PLR products are released each month.
8. Resell Rights Weekly
Resell Rights Weekly, a private label rights (PLR) website, provides exceptional PLR content. It is among the top free PLR websites that provide free membership. You will get 728+ PLR products completely free and new products every single week. The Resell Rights Weekly gives you free instant access to all products and downloads the ones you require.
Gold Membership: $19.95/Month
Lots of products available free of cost
Free access to the members forum
Some products at this PLR site are of lower quality than the same items sold on other websites.
9. MasterResellRights
MasterResellRights was established in 2006 and has helped many successful entrepreneurs. Once you join MasterResellRights, you will get access to more than 10,000 products and services from other members. It is one of the top PLR sites providing high-quality PLR products to members across the globe, and you will be able to access many other membership privileges at no extra cost. The website also provides products under PLR, MRR, and RR licenses.
Access more than 10,000 high-quality PLR articles in different niches.
Get fresh updates daily.
Users get 8 GB of hosting space.
You can pay using PayPal.
Only members have access to the features of this site.
10. BigProductStore
BigProductStore is a popular private label rights website that offers tens of thousands of digital products. These include software, videos, video courses, eBooks, and many others that you can resell, use as you want, or sell and keep 100% of the profit. The PLR website updates its product list daily and currently offers over 10,000 products. The site offers original content for almost every niche, and when you register as a member, you can access the exclusive products section, where you can download a variety of high-quality, unique, and exclusive products.
Monthly Plan: $19.90/Month 27% off
One-Time-Payment: $98.50 50% off
Monthly Ultimate: $29.90/Month 36% off
One-Time-Payment Ultimate: $198.50 50% off
You can use PLR products to generate profits, give them as bonuses for your affiliate promotion campaign, or rebrand them and create new unique products.
Lifetime memberships for PLR products can save you money if you’re looking for a long-term solution to bulk goods.
The website is updated regularly with fresh, quality content.
Product descriptions may not provide much detail, so it can be difficult to know just what you’re downloading.
Some product categories such as WP Themes and articles are outdated.
If you are looking for the best WordPress plugins, you are in the right place. Here is a list of the best WordPress plugins you should use on your blog to boost SEO, strengthen security, and track every aspect of your site. Creating good content is one factor, but there are many WordPress plugins that perform different tasks and add to your success. So let's start.
1. Yoast SEO
For users who are serious about SEO, Yoast SEO will do the work to help them reach their goals. All they need to do is select a focus keyword, and the plugin will then help optimize the page for that keyword.
Yoast offers many popular SEO functions. It gives you real-time page analysis to optimize your content, images, meta descriptions, titles, and keywords. Yoast also checks the length of your sentences and paragraphs, whether you're using enough transition words or subheadings, how often you use passive voice, and so on. Yoast can also tell Google whether or not to index a page or a set of pages.
Let me summarize these points in bullets:
Enhance the readability of your article to reduce bounce rate
Optimize your articles with targetted keywords
Let Google know who you are and what your site is about
Improve your on-page SEO with advanced, real-time guidance and advice on keyword usage, internal linking, and external linking.
Keep your focus keywords consistent to help rank better on Google.
Preview how your page would appear in the search engine results page (SERP)
Crawl your site daily to ensure Google indexes it as quickly as possible.
Rates your article and informs you of any mistakes you might have made, so you can fix them before publishing.
Stay up-to-date with Google's latest algorithm changes and adapt your on-page SEO as needed with smart suggestions from the Yoast SEO plugin, which is always kept up to date.
Free Version is available
Premium version = $89/year, which comes with extra functions, allowing you to optimize your content for up to five keywords, among other benefits.
2. WP Rocket
A website running WordPress can put a lot of strain on a server, which increases the chances that the website will crash and harm your business. To avoid such an unfortunate situation and ensure that all your pages load quickly, you need a caching plugin like WP Rocket.
The WP Rocket plugin is designed to increase your website's speed. Instead of waiting for pages to be saved to the cache, WP Rocket turns on desired caching settings, like page caching and GZIP compression. The plugin also activates other features, such as CDN support and lazy image loading, to enhance your site's speed.
Features in bullets:
Preloading the cache of pages
Reducing the number of HTTP requests allows websites to load more quickly.
Decreasing bandwidth usage with GZIP compression
Apply optimal browser caching headers (expires)
Remove Unused CSS
Deferred loading of images (LazyLoad)
Critical Path CSS generation and deferred loading of CSS files
WordPress Heartbeat API control
Easy import/export of settings
Easy roll back to a previous version
Single License =$49/year for one website
Plus License =$99/year for 3 websites
Infinite License =$249/year for unlimited websites
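As a rough illustration of why GZIP compression saves bandwidth (a generic Python sketch using the standard library, not WP Rocket's actual implementation), note how well repetitive HTML compresses:

```python
import gzip

# HTML is highly repetitive text (tags, class names, boilerplate),
# so GZIP shrinks it dramatically before it goes over the wire.
html = ("<div class='post'><p>Hello, reader!</p></div>\n" * 200).encode()
compressed = gzip.compress(html)

print(len(html), len(compressed))  # the compressed payload is far smaller
```

The browser transparently decompresses the response, so visitors just see a faster page.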
3. Wordfence Security
Wordfence Security is a WordPress firewall and security scanner that keeps your site safe from malicious hackers, spam, and other online threats. This plugin comes with a web application firewall (WAF) and a Threat Defense Feed, and it helps prevent brute-force attacks by ensuring you set stronger passwords and by limiting login attempts. It searches for malware and compares code, theme, and plugin files with the records in the WordPress.org repository to verify their integrity, reporting any changes to you.
The Wordfence security scanner provides actionable insights into your website's security status and alerts you to any potential threats, keeping your site safe and secure. It also includes login security features that let you activate reCAPTCHA and two-factor authentication for your website.
Features in Bullets.
Scans your site for vulnerabilities.
Alerts you by email when new threats are detected.
Supports advanced login security measures.
IP addresses may be blocked automatically if suspicious activity is detected.
Premium Plan = $99/year, which comes with extra security features like the real-time IP blocklist and country blocking, as well as support from highly qualified experts.
4. Akismet
Akismet can help prevent spam from appearing on your site. Every day, it automatically checks every comment against a global database of spam to block malicious content. With Akismet, you also won't have to worry about innocent comments being caught by the filter or about false positives; you can simply tell Akismet about those, and it will get better over time. It also checks your contact form submissions against its global spam database and weeds out fake information.
Features in Bullets:
The program automatically checks comments and filters out spam.
Hidden or misleading links are often revealed in the comment body.
Akismet tracks the status of each comment, allowing you to see which ones were caught by Akismet and which ones were cleared by a moderator.
A spam-blocking feature that saves disk space and makes your site run faster.
Moderators can view a list of comments approved by each user.
Free to use for personal blog
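Akismet's real checks run against a global, constantly updated database. As a purely illustrative stand-in for the idea of automated comment filtering, here is a naive keyword heuristic in Python (the signal list is invented and far simpler than anything a real service uses):

```python
# Hypothetical spam signals; a real service learns these from a global database.
SPAM_SIGNALS = ("free money", "casino", "click here now")

def looks_like_spam(comment: str) -> bool:
    """Flag a comment if it contains any known spam phrase (case-insensitive)."""
    text = comment.lower()
    return any(signal in text for signal in SPAM_SIGNALS)

print(looks_like_spam("Great post, thanks!"))            # legitimate comment
print(looks_like_spam("CLICK HERE NOW for free money"))  # flagged as spam
```

The feedback loop described above (telling the filter about mistakes) is what lets a real service keep its database current, something a static list cannot do.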
5. Contact Form 7
Contact Form 7 is a plug-in that allows you to create contact forms that make it easy for your users to send messages to your site. The plug-in was developed by Takayuki Miyoshi and lets you create multiple contact forms on the same site; it also integrates Akismet spam filtering and lets you customize the styling and fields that you want to use in the form. The plug-in provides CAPTCHA and Ajax submitting.
Features in bullets:
Create and manage multiple contact forms
Easily customize form fields
Use simple markup to alter mail content
Add Lots of third-party extensions for additional functionality
Shortcode offers a way to insert content into pages or posts.
Akismet spam filtering, Ajax-powered submitting, and CAPTCHA are all features of this plugin.
Free to use
6. Monster Insights
When you're looking for an easy way to manage Google Analytics tracking on your site, MonsterInsights can help. You can add, customize, and integrate Google Analytics data with ease, so you'll be able to see how every webpage performs, which online campaigns bring in the most traffic, and which content readers engage with the most. It essentially brings Google Analytics into your WordPress dashboard.
It is a powerful tool to keep track of your traffic stats. With it, you can view stats for your active sessions, conversions, and bounce rates. You’ll also be able to see your total revenue, the products you sell, and how your site is performing when it comes to referrals.
MonsterInsights offers a free plan that includes basic Google Analytics integration, data insights, and user activity metrics.
Features in bullets:
Demographics and interest reports
Anonymize visitor IPs
See how far visitors scroll down each page
Track multiple links to the same page and see which links get more clicks
Treat sessions on two related sites as a single session
Google AdSense tracking
Weekly analytics reports of your blog, downloadable as a PDF
Premium plan: $99.50/year, which adds extra features like page and post tracking, AdSense tracking, and custom tracking and reports.
7. Pretty Links
Pretty Links is a powerful WordPress plugin that enables you to easily cloak affiliate links on your website. It even allows you to redirect visitors based on a specific request, including permanent 301 and temporary 302/307 redirects.
Pretty Links also helps you automatically shorten URLs for your posts and pages.
You can also enable the auto-linking feature to automatically add affiliate links for certain keywords.
Create clean, easy-to-remember URLs on your website (301, 302, and 307 redirects only)
Randomly generated or custom URL slugs
Track the number of clicks
Easy-to-understand reports
View click details, including IP address, remote host, browser, operating system, and referring site
Pass custom parameters to your scripts when using pretty permalinks, and still have full tracking capability
Exclude IP addresses from stats
Cookie-based system to track visitor activity across clicks
Create nofollow/noindex links
Toggle tracking on/off for each link
Pretty Link Bookmarklet
Easily update redirected links to new URLs!
Beginner plan: $79/year, usable on 1 site
Marketer plan: $99/year, usable on up to 2 sites
Super Affiliate plan: $149/year, usable on up to 5 sites
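As a rough sketch of what a link shortener with click tracking does under the hood (hypothetical names and structure; the real plugin implements this with WordPress rewrite rules and a stats table), consider:

```python
from collections import Counter

class LinkShortener:
    """Minimal sketch: map a slug to a target URL with a redirect
    status (301 permanent, 302/307 temporary) and count clicks."""

    def __init__(self):
        self.links = {}          # slug -> (target URL, HTTP status)
        self.clicks = Counter()  # slug -> click count

    def create(self, slug, target, status=301):
        if status not in (301, 302, 307):
            raise ValueError("only 301/302/307 redirects supported")
        self.links[slug] = (target, status)

    def follow(self, slug):
        target, status = self.links[slug]
        self.clicks[slug] += 1   # record the click before redirecting
        return status, target

shortener = LinkShortener()
shortener.create("deal", "https://example.com/affiliate?id=123", status=302)
print(shortener.follow("deal"))  # (302, 'https://example.com/affiliate?id=123')
```

The choice of status matters: a 301 tells browsers and search engines the move is permanent (and may be cached), while 302/307 keep the redirect temporary, which is why affiliate links that change targets usually use the latter.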
We hope you’ve found this article useful. We appreciate you reading and welcome your feedback if you have it.
Match ID: 179 Score: 3.57 source: www.crunchhype.com age: 291 days qualifiers: 3.57 google
Ginger VS Grammarly: When it comes to grammar checkers, Ginger and Grammarly are two of the most popular choices on the market. This article aims to highlight the specifics of each one so that you can make a more informed decision about the one you'll use.
What is Grammarly?
If you are a writer, you must have heard of Grammarly before. With over 10 million users across the globe, it’s probably the most popular AI writing-enhancement tool, without a doubt.
But today we are going to compare Ginger and Grammarly, so let’s define Grammarly here. Like Ginger, Grammarly is an AI writing assistant that checks for grammatical errors, spelling, and punctuation. The free version covers the basics, such as identifying grammar and spelling mistakes,
while the Premium version offers a lot more functionality: it detects plagiarism in your content, suggests better word choices, and adds fluency to your writing.
Features of Grammarly
Detects basic to advanced grammatical errors, explains why each is an error, and suggests how to fix it
Lets you create a personal dictionary
Checks spelling against American, British, Canadian, and Australian English
Detects unclear sentence structure
Flags overused words and wordiness
Warns you about improper tone
Flags insensitive language and checks that your writing aligns with your intent, audience, style, emotion, and more
What is Ginger?
Ginger is a writing enhancement tool that not only catches typos and grammatical mistakes but also suggests content improvements. As you type, it picks up on errors, shows you what’s wrong, and suggests a fix. It also provides synonyms and definitions of words and allows you to translate your text into dozens of languages.
Ginger Software: Features & Benefits
Ginger’s software helps you identify and correct common grammatical mistakes, such as misused consecutive nouns, and offers contextual spelling correction.
The sentence rephrasing feature can help you convey your meaning perfectly.
Ginger acts like a personal coach that helps you practice certain exercises based on your mistakes.
The dictionary feature helps users understand the meanings of words.
In addition, the program provides a text reader, so you can gauge your writing’s conversational tone.
Ginger vs Grammarly
Grammarly and Ginger are two popular grammar checker software brands that help you to become a better writer. But if you’re undecided about which software to use, consider these differences:
Grammarly supports only English, while Ginger supports 40+ languages.
Grammarly offers a wordiness feature, while Ginger does not.
Grammarly shows an accuracy score, while Ginger does not.
Grammarly has a plagiarism checker, while Ginger does not.
Grammarly can recognize incorrect use of numbers, while Ginger cannot.
Grammarly and Ginger both have mobile apps.
Ginger and Grammarly both offer monthly, quarterly, and annual plans.
Grammarly allows you to check uploaded documents, while Ginger does not.
Grammarly offers a tone-suggestion feature, while Ginger does not.
Ginger can translate documents into 40+ languages, while Grammarly has no translation feature.
Ginger offers a text-to-speech feature, while Grammarly does not.
Grammarly Score: 7/10
So Grammarly wins here.
Ginger VS Grammarly: Pricing Difference
Ginger offers a Premium subscription for $13.99/month; it comes to $11.19/month billed quarterly and $7.49/month billed annually (a $40 discount).
On the other hand, Grammarly offers a Premium subscription for $30/month on the monthly plan, $20/month billed quarterly, and $12/month billed annually.
For companies with three or more employees, the Business plan costs $12.50/month for each member of your team.
Ginger pros:
Affordable subscription plans (additional discounts are available)
Active and passive voice changer
Translates documents in 40+ languages
Browser extension available
A personal-trainer feature helps users improve their grammar through practice exercises.
Text-to-speech feature reads work out loud
Get a full refund within 7 days
Ginger cons:
Mobile apps aren't free
Limited monthly corrections for free users
No style checker
No plagiarism checker
Not as user-friendly as Grammarly
You are unable to upload or download documents; however, you may copy and paste files as needed.
Doesn't offer a free trial
Summarizing the Ginger VS Grammarly: My Recommendation
While both writing assistants are fantastic in their own ways, the right choice depends on what you need.
For example, go for Grammarly if you want a plagiarism tool included.
Choose Ginger if you want to write in languages other than English. I will list the differences for you to make the distinctions clearer.
Grammarly offers a plagiarism checking tool
Ginger provides text to speech tool
Grammarly helps you check uploaded documents
Ginger supports over 40 languages
Grammarly has a more friendly UI/UX
Both Ginger and Grammarly are awesome writing tools, without a doubt. Depending on your needs, you might prefer Ginger over Grammarly; in my experience, though, I found Grammarly easier to use than Ginger.
Let me know which one you like, and share your opinions, in the comments section below.
Match ID: 180 Score: 3.57 source: www.crunchhype.com age: 292 days qualifiers: 3.57 google
But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.
How is AI currently being used to design the next generation of chips?
Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.
Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.
What are the benefits of using AI for chip design?
Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
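The surrogate-model workflow Gorr describes can be sketched in a few lines. In this minimal illustration, a toy analytic function stands in for the expensive physics-based simulation, and a quadratic interpolant serves as the surrogate; real workflows use far richer regressors and many more design variables:

```python
import math
import random

def expensive_model(x):
    """Stand-in for a costly physics-based simulation."""
    return math.exp(-x) * math.sin(3 * x)

# Sample the expensive model at just a few points...
xs = [0.0, 0.5, 1.0]
ys = [expensive_model(x) for x in xs]

def surrogate(x):
    """Cheap quadratic fit through the three samples (Lagrange form)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# ...then run the parameter sweep / Monte Carlo on the cheap surrogate.
random.seed(0)
sweep = [surrogate(random.uniform(0.0, 1.0)) for _ in range(10_000)]
best = max(sweep)
```

The surrogate reproduces the sampled points exactly and costs only a few multiplications per evaluation, so sweeps and optimizations that would be prohibitive on the full physics model become cheap, at the accuracy cost Gorr notes later in the interview.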
So it’s like having a digital twin in a sense?
Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.
So, it’s going to be more efficient and, as you said, cheaper?
Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.
We’ve talked about the benefits. How about the drawbacks?
Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it's not going to be as accurate as that precise model that we’ve developed over the years.
Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It's a case where you might have models to predict something and different parts of it, but you still need to bring it all together.
One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.
How can engineers use AI to better prepare and extract insights from hardware or sensor data?
Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
What should engineers and designers consider when using AI for chip design?
Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.
How do you think AI will affect chip designers’ jobs?
Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.
How do you envision the future of AI and chip design?
Gorr: It's very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
Match ID: 181 Score: 3.57 source: spectrum.ieee.org age: 294 days qualifiers: 3.57 google
Do you have the desire to become a content creator, but not have the money to start? Here are 7 free websites every content creator needs to know.
1. Exploding Topics (Trending Topics)
If you're a content creator, what better way to find new topic ideas than to see what people are actually searching for? Exploding Topics surfaces exactly that data. It provides related hashtags and tips on how to use them effectively in your posts, making it a great tool for anyone who wants to keep up with what's most relevant in their niche. You can also see the most popular hashtags by country, making it easier to spot cross-border and demographic trends. This site makes your search for content easier than ever! There are countless ways to use Exploding Topics to your advantage as a content creator.
Some examples can be:
Use the most popular hashtags and keywords to get inspiration for ideas.
Find out what people are talking about in real-time.
Find new audiences you may not have known were interested in your topic.
There’s no excuse not to try this website — it’s free and easy to use!
2. AnswerThePublic
AnswerThePublic is an excellent tool for content creators. It gives you insight into what people are asking on social media sites and in communities, and helps you gauge which topics matter to your audience. You enter a keyword or topic related to your niche, and it shows popular questions and keywords around it. It's an amazing way to see what people are searching for online, and it helps you identify topics for new blog posts or social media content on platforms like Facebook, Instagram, YouTube, and Twitter, along with the kinds of questions your audience wants answered.
3. Surfer SEO
With this tool, content creators can quickly and easily check the rankings of their own websites and those of competitors. Surfer SEO lets you see how your website compares to others across different categories, including:
Organic Search Ranking
Surfer SEO is free, and the interface is very friendly. It's a great tool for anyone who wants to do quick competitor research or check their site's rankings at any time.
4. Canva
Canva is a free graphic-design platform that makes it easy to create invitations, business cards, mobile videos, Instagram posts, Instagram stories, flyers, and more with professionally designed templates. You can even upload your own photos and drag and drop them into Canva templates. It's like having a basic version of Photoshop, and you can remove the background from images with one click.
Canva offers thousands of free, professionally designed templates that can be customized with just a few clicks. Simply upload your photos to Canva, drag them into the template of your choice, and save the file to your computer.
It is free to use for basic use but if you want access to different fonts or more features, then you need to buy a premium plan.
5. Facebook Audience Insights
Facebook Audience Insights is a powerful research tool for content creators studying their target market. It can help you understand the demographics, interests, and behaviors of your target audience, information that helps you steer your content so it resonates with them. The most important sections in Facebook Audience Insights are Demographics and Behavior. These two sections provide valuable information about your target market, such as their age, where they live, how much time they spend on social media per day, and what devices they use to access it.
Another very helpful section of Facebook Audience Insights lists the interests, hobbies, and activities that people in your target market care about most. You can use this information to create content about things they will engage with, as opposed to topics they may not be so keen on.
6. Pexels
Pexels is a treasure trove for any content creator: millions of free, royalty-free, high-quality images you can use without worrying about permissions or licensing, and none of the photos carry a watermark.
The only downside is that some photos contain people, and Pexels doesn't let you remove people from photos. Search your keyword and download as many as you want!
So there you have it. We hope these specially curated websites come in handy for content creators and small businesses alike. If you've got a site that should be on this list, let us know! And if you're looking for more content-creator resources, tell us in the comments section below.
Match ID: 182 Score: 3.57 source: www.crunchhype.com age: 306 days qualifiers: 3.57 google
Stocks to Watch: DuPont, Nike, KB Home are stocks to watch Fri, 27 Jun 2014 10:48:27 GMT Among the companies whose shares are expected to see active trade in Friday’s session are DuPont, Nike, and KB Home. Match ID: 183 Score: 3.57 source: www.marketwatch.com age: 3077 days qualifiers: 3.57 trade
Stocks to Watch: FedEx, Jabil, Red Hat are stocks to watch Wed, 18 Jun 2014 10:30:21 GMT Among the companies whose shares are expected to see active trade in Wednesday’s session are FedEx, Jabil Circuit, and Red Hat. Match ID: 187 Score: 3.57 source: www.marketwatch.com age: 3086 days qualifiers: 3.57 trade
Stocks to Watch: Covidien, Medtronic, are stocks to watch Mon, 16 Jun 2014 13:05:28 GMT Among the companies whose shares are expected to see active trade in Monday’s session are Covidien, Medtronic and Layne Christensen and Korn/Ferry International. Match ID: 188 Score: 3.57 source: www.marketwatch.com age: 3088 days qualifiers: 3.57 trade
Stocks to Watch: Lululemon, Finisar, Target are stocks to watch Thu, 12 Jun 2014 11:24:41 GMT Among the companies whose shares are expected to see active trade in Thursday’s session are Lululemon Athletica, Finisar, and Target. Match ID: 189 Score: 3.57 source: www.marketwatch.com age: 3092 days qualifiers: 3.57 trade
Researchers claim that supposedly anonymous device analytics information can identify users:
On Twitter, security researchers Tommy Mysk and Talal Haj Bakry have found that Apple’s device analytics data includes an iCloud account and can be linked directly to a specific user, including their name, date of birth, email, and associated information stored on iCloud.
Apple has long claimed otherwise:
On Apple’s device analytics and privacy legal page, the company says no information collected from a device for analytics purposes is traceable back to a specific user. “iPhone Analytics may include details about hardware and operating system specifications, performance statistics, and data about how you use your devices and applications. None of the collected information identifies you personally,” the company claims...
Match ID: 190 Score: 2.86 source: www.schneier.com age: 7 days qualifiers: 2.86 apple
The longest journey begins with a single step, and that step gets expensive when you’re in the space business. Take, for example, the Electron booster made by
Rocket Lab, a company with two launch pads on the New Zealand coast and another awaiting use in Virginia. Earth’s gravity is so stubborn that, by necessity, two-thirds of the rocket is its first stage—and it has historically ended up as trash on the ocean floor after less than 3 minutes of flight.
Making those boosters reusable—saving them from a saltwater grave, and therefore saving a lot of money—has been a goal of aerospace engineers since the early space age. Elon Musk’s
SpaceX has famously been landing its Falcon 9 boosters on drone ships off the Florida coast—mind-bending to watch but very hard to pull off.
Rocket Lab says it has another way. Its next flight will carry 34 commercial satellites—and instead of being dropped in the Pacific, the spent first stage will be snared in midair by a helicopter as it descends by parachute. It will then be brought back to base, seared by the heat of reentry but inwardly intact, for possible refurbishment and reuse. The team, in its determination to minimize its odds of dropping the ball, so to speak, has pushed back the launch several times in order to wait out inclement weather. They reason that because this isn’t a game of horseshoes, close is not good enough.
“It’s a very complex thing to do,” says
Morgan Bailey of Rocket Lab. “You have to position the helicopter in exactly the right spot, you have to know exactly where the stage is going to be coming down, you have to be able to slow it enough,” she says. “We’ve practiced and practiced all of the individual puzzle pieces, and now it’s putting them together. It’s not a foregone conclusion that the first capture attempt will be a success.”
Still, people in the space business will be watching, since Rocket Lab has established a niche for itself as a viable space company. This will be its 26th Electron launch. The company says it has launched 112 satellites so far, many of them
so-called smallsats that are relatively inexpensive to fly. “Right now, there are two companies taking payloads to orbit: SpaceX and Rocket Lab,” says Chad Anderson, CEO of Space Capital, a firm that funds space startups.
Here's the flight profile. The Electron is 18 meters tall; the bottom 12 meters are the first stage. For this mission it will lift off from New Zealand on its way to a sun-synchronous orbit 520 kilometers high. The first stage burns out after the first 70 km. Two minutes and 32 seconds into the flight, it drops off, following a long arc that in the past would have sent it crashing into the ocean, about 280 km downrange.
But Rocket Lab has now equipped its booster with heat shielding, protecting it as it falls tail-first at up to 8,300 kilometers per hour. Temperatures should reach 2,400 °C as the booster is slowed by the air around it.
At an altitude of 13 km, a small
drogue parachute is deployed from the top end of the rocket stage, followed by a main chute at about 6 km, less than a minute later. The parachute slows the rocket substantially, so that it is soon descending at only about 36 km/h.
An artist’s conception shows the helicopter after catching the spent Electron rocket’s first stage in midair. Credit: Rocket Lab
But even that would make for a hard splashdown—which is why a Sikorsky S-92 helicopter hovers over the landing zone, trailing a grappling hook on a long cable. The plan is for the helicopter to fly over the descending rocket and snag the parachute cables. The rocket never gets wet; the chopper secures it and either lowers it onto a ship or carries it back to land. Meanwhile—let’s not lose sight of the prime mission—the second stage of the rocket should reach orbit about 10 minutes after launch.
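Using the figures above, and the optimistic assumption of a constant 36 km/h descent once the main chute opens, a back-of-the-envelope estimate of the helicopter's catch window:

```python
# Back-of-the-envelope catch window (assumes a constant descent speed,
# which the real, varying profile will only approximate).
main_chute_altitude_m = 6_000            # main chute opens near 6 km
descent_speed_m_s = 36 * 1000 / 3600     # 36 km/h -> 10 m/s

catch_window_s = main_chute_altitude_m / descent_speed_m_s
print(catch_window_s / 60)  # about 10 minutes to line up the grappling hook
```

Ten minutes or so is generous by aviation standards, which is why the hard part is not time pressure but positioning the helicopter precisely over a stage whose descent path shifts with the wind.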
“You have to keep the booster out of the water,” says Anderson. “If they can do that, it’s a big deal.” Many space people will recall NASA’s
solid rocket boosters, which helped launch the space shuttles and then parachuted into the Atlantic; towing them back to port and cleaning them up for reuse was slow and expensive. NASA’s giant SLS rocket uses the same boosters, but there are no plans to recover them.
So midair recovery is far better, though it’s not new. As long ago as 1960, the U.S. Air Force snagged a returning capsule from a mission called
Discoverer 14. But that had nothing to do with economy; the Discoverers were actually Corona reconnaissance satellites, and they were sending back film of the Soviet Union—priceless for Cold War intelligence.
Rocket Lab tries to sound more playful about its missions: It gives them names like “A Data With Destiny” or “Without Mission a Beat.” This newest flight, with its booster-recovery attempt, is called “There and Back Again.”
A Twitter user tweeted to CEO Peter Beck: “It would have been cool if the mission was called ‘Catch Me If You Can.’”
“Oh…that’s good!” Beck
replied. “Congratulations, you have just named the very next recovery mission.”
Update 22 April 2022: In a tweet, Rocket Lab announced that due to weather, the planned launch and recovery would be rescheduled for 27 April at the earliest.
This article appears in the July 2022 print issue as “Rocket Lab Catches Rocket Booster in Midair.”
Match ID: 192 Score: 2.86 source: spectrum.ieee.org age: 225 days qualifiers: 2.14 musk, 0.71 startup
Here’s How Bad a Twitter Mega-Breach Would Be Fri, 18 Nov 2022 01:41:44 +0000 Elon Musk laid off half the staff, and mass resignations seem likely. If nobody’s there to protect the fort, what’s the worst that could happen? Match ID: 193 Score: 2.14 source: www.wired.com age: 12 days qualifiers: 2.14 musk
Not all users are having problems receiving SMS authentication codes, and those who rely on an authenticator app or physical authentication token to secure their Twitter account may not have reason to test the mechanism. But users have been self-reporting issues on Twitter since the weekend, and WIRED confirmed that on at least some accounts, authentication texts are hours delayed or not coming at all. The meltdown comes less than two weeks after Twitter laid off about half of its workers...
Match ID: 194 Score: 2.14 source: www.schneier.com age: 12 days qualifiers: 2.14 musk
Twitter’s SMS Two-Factor Authentication Is Melting Down Tue, 15 Nov 2022 01:08:07 +0000 Problems with the important security feature may be some of the first signs that Elon Musk’s social network is fraying at the edges. Match ID: 195 Score: 2.14 source: www.wired.com age: 15 days qualifiers: 2.14 musk
Elon Musk Introduces Twitter Mayhem Mode Sat, 12 Nov 2022 14:00:00 +0000 Plus: US midterms survive disinformation efforts, the government names the alleged Lockbit ransomware attacker, and the Powerball drawing hits a security snag. Match ID: 196 Score: 2.14 source: www.wired.com age: 17 days qualifiers: 2.14 musk
Elon Musk’s Reckless Plan to Make Sex Pay on Twitter Mon, 07 Nov 2022 11:37:00 +0000 A plan to monetize adult content could make sense from a business and social standpoint. In practice, Twitter won’t be able to pull it off. Match ID: 197 Score: 2.14 source: www.wired.com age: 22 days qualifiers: 2.14 musk
The 100th anniversary of the invention of the transistor will happen in 2047. What will transistors be like then? Will they even be the critical computing element they are today? IEEE Spectrum asked experts from around the world for their predictions.
What will transistors be like in 2047?
Expect transistors to be even more varied than they are now, says one expert. Just as processors have evolved from CPUs to include GPUs, network processors, AI accelerators, and other specialized computing chips, transistors will evolve to fit a variety of purposes. “Device technology will become application domain–specific in the same way that computing architecture has become application domain–specific,” says H.-S. Philip Wong, an IEEE Fellow, professor of electrical engineering at Stanford University, and former vice president of corporate research at TSMC.
Despite the variety, the fundamental operating principle—the field effect that switches transistors on and off—will likely remain the same, suggests Suman Datta, an IEEE Fellow, professor of electrical and computer engineering at Georgia Tech, and director of the multi-university nanotech research center ASCENT. This device will likely have minimum critical dimensions of 1 nanometer or less, enabling device densities of 10 trillion per square centimeter, says Tsu-Jae King Liu, an IEEE Fellow, dean of the college of engineering at the University of California, Berkeley, and a member of Intel’s board of directors.
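Those two figures are mutually consistent; a quick sanity check of the arithmetic:

```python
# 10 trillion devices per square centimeter implies an average
# footprint of about 10 nm^2 per device, which is a rough fit for
# ~1 nm critical dimensions plus the spacing between devices.
devices_per_cm2 = 10e12          # 10 trillion
nm2_per_cm2 = (1e7) ** 2         # 1 cm = 1e7 nm
footprint_nm2 = nm2_per_cm2 / devices_per_cm2
print(footprint_nm2)  # 10.0
```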
"It is safe to assume that the transistor or switch architectures of 2047 have already been demonstrated on a lab scale"—Sri Samavedam
Experts seem to agree that the transistor of 2047 will need new materials and probably a stacked or 3D architecture, expanding on the planned complementary field-effect transistor (CFET, or 3D-stacked CMOS). [For more on the CFET, see "Taking Moore's Law to New Heights."] And the transistor channel, which now runs parallel to the plane of the silicon, may need to become vertical in order to continue to increase in density, says Datta.
AMD senior fellow Richard Schultz suggests that the main aim in developing these new devices will be power. “The focus will be on reducing power and the need for advanced cooling solutions,” he says. “Significant focus on devices that work at lower voltages is required.”
Will transistors still be the heart of most computing in 25 years?
It’s hard to imagine a world where computing is not done with transistors, but, of course, vacuum tubes were once the digital switch of choice. Startup funding for quantum computing, which does not directly rely on transistors, reached US $1.4 billion in 2021, according to McKinsey & Co.
But advances in quantum computing won’t happen fast enough to challenge the transistor by 2047, experts in electron devices say. “Transistors will remain the most important computing element,” says Sayeef Salahuddin, an IEEE Fellow and professor of electrical engineering and computer science at the University of California, Berkeley. “Currently, even with an ideal quantum computer, the potential areas of application seem to be rather limited compared to classical computers.”
Sri Samavedam, senior vice president of CMOS technologies at the European chip R&D center Imec, agrees. “Transistors will still be very important computing elements for a majority of the general-purpose compute applications,” says Samavedam. “One cannot ignore the efficiencies realized from decades of continuous optimization of transistors.”
Has the transistor of 2047 already been invented?
Twenty-five years is a long time, but in the world of semiconductor R&D, it’s not that long. “In this industry, it usually takes about 20 years from [demonstrating a concept] to introduction into manufacturing,” says Samavedam. “It is safe to assume that the transistor or switch architectures of 2047 have already been demonstrated on a lab scale” even if the materials involved won’t be exactly the same. King Liu, who demonstrated the modern FinFET about 25 years ago with colleagues at Berkeley, agrees.
But the idea that the transistor of 2047 is already sitting in a lab somewhere isn’t universally shared. Salahuddin, for one, doesn’t think it’s been invented yet. “But just like the FinFET in the 1990s, it is possible to make a reasonable prediction for the geometric structure” of future transistors, he says.
AMD’s Schultz says you can glimpse this structure in proposed 3D-stacked devices made of 2D semiconductors or carbon-based semiconductors. “Device materials that have not yet been invented could also be in scope in this time frame,” he adds.
Will silicon still be the active part of most transistors in 2047?
Experts say that the heart of most devices, the transistor channel region, will still be silicon, or possibly silicon-germanium—which is already making inroads—or germanium. But in 2047 many chips may use semiconductors that are considered exotic today. These could include oxide semiconductors like indium gallium zinc oxide; 2D semiconductors, such as the metal dichalcogenide tungsten disulfide; and one-dimensional semiconductors, such as carbon nanotubes. Or even “others yet to be invented,” says Imec’s Samavedam.
"Transistors will remain the most important computing element"—Sayeef Salahuddin
Silicon-based chips may be integrated in the same package with chips that rely on newer materials, just as processor makers are today integrating chips using different silicon manufacturing technologies into the same package, notes IEEE Fellow Gabriel Loh, a senior fellow at AMD.
Which semiconductor material is at the heart of the device may not even be the central issue in 2047. “The choice of channel material will essentially be dictated by which material is the most compatible with many other materials that form other parts of the device,” says Salahuddin. And we know a lot about integrating materials with silicon.
In 2047, where will transistors be common where they are not found today?
Everywhere. No, seriously. Experts really do expect some amount of intelligence and sensing to creep into every aspect of our lives. That means devices will be attached to our bodies and implanted inside them; embedded in all kinds of infrastructure, including roads, walls, and houses; woven into our clothing; stuck to our food; swaying in the breeze in grain fields; watching just about every step in every supply chain; and doing many other things in places nobody has thought of yet.
Transistors will be “everywhere that needs computation, command and control, communications, data collection, storage and analysis, intelligence, sensing and actuation, interaction with humans, or an entrance portal to the virtual and mixed reality world,” sums up Stanford’s Wong.
This article appears in the December 2022 print issue as “The Transistor of 2047.”
Match ID: 198 Score: 2.14 source: spectrum.ieee.org age: 8 days qualifiers: 1.43 california, 0.71 startup
With the deadly devastation fresh in the world’s mind, Pakistan pushed for damage funds with other frontline countries
In early September, after unprecedented rainfall had left a third of Pakistan under water, its climate change minister set out the country’s stall for Cop27. “We are on the frontline and intend to keep loss and damage and adapting to climate catastrophes at the core of our arguments and negotiations. There will be no moving away from that,” Sherry Rehman said.
Pakistan brought that resolve to the negotiations in Sharm el-Sheikh and, as president of the G77 plus China negotiating bloc, succeeded in keeping developing countries united on loss and damage – despite efforts by some rich countries to divide them. Its chief negotiator, Nabeel Munir, a career diplomat, was backed by a team of savvy veteran negotiators who had witnessed the devastation and suffering from the floods, which caused $30bn (£25bn) of damage and economic losses. Every day, Munir repeated the same message: “Loss and damage is not charity, it’s about climate justice.”
Continue reading... Match ID: 200 Score: 1.43 source: www.theguardian.com age: 9 days qualifiers: 1.43 development
Researchers from the FAS Center for Systems Biology describe how they used a new live-imaging technique to watch neurons being created in the embryo in almost real-time. They were then able to track those cells through the development of the nervous system in the retina. What they saw surprised them.
The neural stem cells they tracked behaved eerily similarly to the way these cells behave in vertebrates during the development of their nervous system.
It suggests that vertebrates and cephalopods, despite diverging from each other 500 million years ago, not only are using similar mechanisms to make their big brains but that this process and the way the cells act, divide, and are shaped may essentially lay out the blueprint required to develop this kind of nervous system...
Match ID: 201 Score: 1.43 source: www.schneier.com age: 11 days qualifiers: 1.43 development
NASA Awards Commercial Small Satellite Data Acquisition Agreement Mon, 14 Nov 2022 15:15 EST NASA has selected GeoOptics Inc. of Pasadena, California, to provide commercial small constellation satellite data products that may augment NASA-collected data in the future. Match ID: 202 Score: 1.43 source: www.nasa.gov age: 15 days qualifiers: 1.43 california
NASA, ULA Successfully Launch Weather Satellite, Re-entry Tech Demo Thu, 10 Nov 2022 19:28 EST NASA successfully launched the third in a series of polar-orbiting weather satellites for the National Oceanic and Atmospheric Administration (NOAA) at 1:49 a.m. PST Thursday, as well as an agency technology demonstration on a United Launch Alliance Atlas V rocket from Vandenberg Space Force Base in California. Match ID: 203 Score: 1.43 source: www.nasa.gov age: 19 days qualifiers: 1.43 california
NASA to Brief Media on First Earth Water-Monitoring Satellite Mission Thu, 10 Nov 2022 14:22 EST NASA will host a virtual media briefing at 10:30 a.m. EST (7:30 a.m. PST) Nov. 14, at the agency’s Jet Propulsion Laboratory in Southern California, to discuss the upcoming launch of the Surface Water and Ocean Topography (SWOT) satellite. Match ID: 204 Score: 1.43 source: www.nasa.gov age: 19 days qualifiers: 1.43 california
This is a sponsored article brought to you by BAE Systems.
No one sets out to put together half a puzzle. Similarly, researchers and engineers in the defense industry want to see the whole picture – seeing their innovations make it into the hands of warfighters and commercial customers.
BAE Systems formed its FAST Labs R&D organization with the goal of cultivating innovation and creating disruptive technology. It is unique in the defense industry because it is an in-house, customer-focused R&D organization collaborating internally across the BAE Systems enterprise to develop and evolve technologies in advanced electronics, autonomy, cyber, electromagnetic warfare, sensors and processing, and more.
FAST Labs: The heart of revolutionary research and development at BAE Systems
FAST Labs is an in-house, customer-focused R&D organization collaborating internally across the BAE Systems enterprise to develop and evolve technologies in advanced electronics, autonomy, cyber, electromagnetic warfare, sensors and processing, and more.
Today, the FAST Labs R&D organization is a place where research teams can invent and see their work come to life. While there are many examples we cannot publicly report, word has gotten out to the research and engineering community – especially those who have grown frustrated with the traditional defense R&D process or have been working on compartmentalized projects in the industry.
At FAST Labs, engineers get to turn their breakthrough technology innovations into real-life impact.
Want to see your research come to life? Learn more about innovation, the culture, and career opportunities at FAST Labs.
Match ID: 205 Score: 1.43 source: spectrum.ieee.org age: 21 days qualifiers: 1.43 development
NASA, USAID Partnership Strengthens Global Development Fri, 04 Nov 2022 16:08 EDT NASA and the U.S. Agency for International Development (USAID) signed an agreement Friday strengthening the collaboration between the two agencies, including efforts that advance the federal response to climate change. Match ID: 206 Score: 1.43 source: www.nasa.gov age: 25 days qualifiers: 1.43 development
The designer of a groundbreaking text-reading app, that started life as a Microsoft side-project, explains how it’s now a lifeline for blind people and those with literacy issues
Is it a can of beans or tomatoes? It’s easy if you can see the label – but imagine if you were blind or partially sighted – could you tell the difference? And now consider, how would you distinguish between painkillers or other tablets without being able to see or read? Nearly one in five people have mistakenly taken a wrong dose because they haven’t been able to read the packaging, research by the consumer health company Haleon suggests.
“As humans, we like to be able to understand the world around us,” says Saqib Shaikh, an engineering manager and product leader at Microsoft who has created technology that helps decipher the visual and written world for people who are blind or have low vision.
Continue reading... Match ID: 207 Score: 1.43 source: www.theguardian.com age: 25 days qualifiers: 1.43 microsoft
The Guardian's Pete Pattisson looks at the exploitation of migrant workers in Qatar ahead of the World Cup and explains why any reforms are 'too little, too late'. Pattisson speaks of his own first-hand experience with workers in the country and describes the very poor living and working conditions he saw. In the runup to the tournament, the Qatari authorities claim they have made significant progress with their human rights laws. Migrant workers, however, who make up 95% of the working population, are still suffering 12 years after hosting rights were awarded by Fifa.
Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon.
“Armageddon is big and noisy and stupid and shameless, and it’s going to be huge at the box office,” wrote Jay Carr of the Boston Globe.
Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live.
DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kilograms, hitting at 22,000 km/h, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs.
“Maybe once a century or so, there’ll be an asteroid sizeable enough that we’d like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of planetary defense officer at NASA.
“If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.”
So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone.
The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—near-Earth objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium.
The DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIACube cubesat will fly in formation to take images of the impact.Johns Hopkins APL/NASA
But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test.
NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’s speed by perhaps a few centimeters per second.
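The scale of that nudge follows from conservation of momentum. As an illustration only: the spacecraft's mass and speed come from the article, but Dimorphos's mass (taken here as roughly 5 billion kg) and the ejecta momentum-enhancement factor beta are assumptions for this sketch, and the result lands at fractions of a centimeter per second, which is why the change is easiest to measure in the binary system rather than in the orbit around the sun.

```python
# Back-of-the-envelope momentum-transfer estimate for the DART impact.
# Spacecraft mass and speed are from the article; Dimorphos's mass and the
# ejecta momentum-enhancement factor (beta) are illustrative assumptions.

m_spacecraft = 600.0             # kg (from the article)
v_impact = 22_000 / 3.6          # 22,000 km/h -> ~6,111 m/s
m_dimorphos = 5e9                # kg, assumed order-of-magnitude value

# beta = 1: spacecraft momentum only; beta = 3: strong boost from ejecta
dv_no_ejecta = 1.0 * m_spacecraft * v_impact / m_dimorphos      # m/s
dv_strong_ejecta = 3.0 * m_spacecraft * v_impact / m_dimorphos  # m/s

print(f"delta-v: {dv_no_ejecta*100:.3f} to {dv_strong_ejecta*100:.3f} cm/s")
```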
The DART spacecraft, a 1-meter cube with two long solar panels, is elegantly simple, equipped with a telescope called DRACO, hydrazine maneuvering thrusters, a xenon-fueled ion engine and a navigation system called SMART Nav. It was launched by a SpaceX rocket in November. About 4 hours and 90,000 km before the hoped-for impact, SMART Nav will take over control of the spacecraft, using optical images from the telescope. Didymos, the larger object, should be a point of light by then; Dimorphos, the intended target, will probably not appear as more than one pixel until about 50 minutes before impact. DART will send one image per second back to Earth, but the spacecraft is autonomous; signals from the ground, 38 light-seconds away, would be useless for steering as the ship races in.
The DART spacecraft separated from its SpaceX Falcon 9 launch vehicle, 55 minutes after liftoff from Vandenberg Space Force Base, in California, 24 November 2021. In this image from the rocket, the spacecraft had not yet unfurled its solar panels.NASA
What’s more, nobody knows the shape or consistency of little Dimorphos. Is it a solid boulder or a loose cluster of rubble? Is it smooth or craggy, round or elongated? “We’re trying to hit the center,” says Evan Smith, the deputy mission systems engineer at the Johns Hopkins Applied Physics Laboratory, which is running DART. “We don’t want to overcorrect for some mountain or crater on one side that’s throwing an odd shadow or something.”
So on final approach, DART will cover 800 km without any steering. Thruster firings could blur the last images of Dimorphos’s surface, which scientists want to study. Impact should be imaged from about 50 km away by an Italian-made minisatellite, called LICIACube, which DART released two weeks ago.
“In the minutes following impact, I know everybody is going be high fiving on the engineering side,” said Tom Statler, DART’s program scientist at NASA, “but I’m going be imagining all the cool stuff that is actually going on on the asteroid, with a crater being dug and ejecta being blasted off.”
There is, of course, a possibility that DART will miss, in which case there should be enough fuel on board to allow engineers to go after a backup target. But an advantage of the Didymos-Dimorphos pair is that it should help in calculating how much effect the impact had. Telescopes on Earth (plus the Hubble and Webb space telescopes) may struggle to measure infinitesimal changes in the orbit of Dimorphos around the sun; it should be easier to see how much its orbit around Didymos is affected. The simplest measurement may be of the changing brightness of the double asteroid, as Dimorphos moves in front of or behind its partner, perhaps more quickly or slowly than it did before impact.
“We are moving an asteroid,” said Statler. “We are changing the motion of a natural celestial body in space. Humanity’s never done that before.”
Match ID: 209 Score: 1.43 source: spectrum.ieee.org age: 67 days qualifiers: 1.43 california
NASA to Host Briefing on Perseverance Mars Rover Mission Operations Mon, 12 Sep 2022 09:49 EDT NASA will host a briefing at 11:30 a.m. EDT (8:30 a.m. PDT) on Thursday, Sept. 15, at the agency’s Jet Propulsion Laboratory in Southern California to provide highlights from the first year and a half of the Perseverance rover’s exploration of Mars. Match ID: 210 Score: 1.43 source: www.nasa.gov age: 78 days qualifiers: 1.43 california
Each contender is taking a different approach to space-based cellular service. The Apple offering uses the existing satellite bandwidth Globalstar once used for messaging offerings, but without the need for a satellite-specific handset. The AST project and another company, Lynk Global, would use a dedicated network of satellites with larger-than-normal antennas to produce a 4G, 5G, and someday 6G cellular signal compatible with any existing 4G-compatible phone (as detailed in other recent IEEE Spectrum coverage of space-based 5G offerings). Assuming regulatory approval is forthcoming, the technology would work first in equatorial regions and then across more of the planet as these providers expand their satellite constellations. T-Mobile and Starlink’s offering would work in the former PCS band in the United States. SpaceX, like AST and Lynk, would need to negotiate access to spectrum on a country-by-country basis.
Apple’s competitors are unlikely to see commercial operations before 2024.
“Regulators have not decided on the power limits from space, what concerns there are about interference, especially across national borders. There’s a whole bunch of regulatory issues that simply haven’t been thought about to date.” —Tim Farrar, telecommunications consultant
The T-Mobile–Starlink announcement is “in some ways an endorsement” of AST and Lynk’s proposition, and “in other ways a great threat,” says telecommunications consultant Tim Farrar of Tim Farrar Associates in Menlo Park, Calif. AST and Lynk have so far told investors they expect their national mobile network operator partners to charge per use or per day, but T-Mobile announced that they plan to include satellite messaging in the 1,900-megahertz range in their existing services. Apple said their Emergency SOS via Satellite service would be free the first two years for U.S. and Canadian iPhone 14 buyers, but did not say what it would cost after that. For now, the Globalstar satellites it is using cannot offer the kind of broadband bandwidth AST has promised, but Globalstar has reported to investors orders for new satellites that might offer new capabilities, including new gateways.
Even under the best conditions—a clear view of the sky—users will need 15 seconds to send a message via Apple’s service. They will also have to follow onscreen guidance to keep the device pointed at the satellites they are using. Light foliage can cause the same message to take more than a minute to send. Ashley Williams, a satellite engineer at Apple who recorded the service’s announcement, also mentioned a data-compression algorithm and a series of rescue-related suggested auto-replies intended to minimize the amount of data that users would need to send during a rescue.
Meanwhile, AST SpaceMobile says it aims to launch an experimental satellite Saturday, 10 September, to test its cellular broadband offering.
Last month’s T-Mobile-SpaceX announcement “helped the world focus attention on the huge market opportunity for SpaceMobile, the only planned space-based cellular broadband network. BlueWalker 3, which has a 693 sq ft array, is scheduled for launch within weeks!” tweeted AST SpaceMobile CEO Abel Avellan on 25 August. The size of the array matters because AST SpaceMobile has so far indicated in its applications for experimental satellite licenses that it intends to use lower radio frequencies (700–900 MHz) with less propagation loss but that require antennas much larger than conventional satellites carry.
So far government agencies have issued licenses for thousands of low-Earth-orbiting satellites, which have the biggest impact on astronomers. Even with the constellations starting to form, satellite-cellular telecommunications companies are still open to big regulatory risks. “Regulators have not decided on the power limits from space, what concerns there are about interference, especially across national borders. There’s a whole bunch of regulatory issues that simply haven’t been thought about to date,” Farrar says.
When the James Webb Space Telescope (JWST) reveals its first images on 12 July, they will be the by-product of carefully crafted mirrors and scientific instruments. But all of its data-collecting prowess would be moot without the spacecraft’s communications subsystem.
The Webb’s comms aren’t flashy. Rather, the data and communication systems are designed to be incredibly, unquestionably dependable and reliable. And while some aspects of them are relatively new—it’s the first mission to use
Ka-band frequencies for such high data rates so far from Earth, for example—above all else, JWST’s comms provide the foundation upon which JWST’s scientific endeavors sit.
As previous articles in this series have noted, JWST is parked at
Lagrange point L2. It’s a point of gravitational equilibrium located about 1.5 million kilometers beyond Earth on a straight line between the planet and the sun. It’s an ideal location for JWST to observe the universe without obstruction and with minimal orbital adjustments.
Being so far away from Earth, however, means that data has farther to travel to make it back in one piece. It also means the communications subsystem needs to be reliable, because the prospect of a repair mission being sent to address a problem is, for the near term at least, highly unlikely. Given the cost and time involved, says
Michael Menzel, the mission systems engineer for JWST, “I would not encourage a rendezvous and servicing mission unless something went wildly wrong.”
According to Menzel, who has worked on JWST in some capacity for over 20 years, the plan has always been to use well-understood Ka-band frequencies for the bulky transmissions of scientific data. Specifically, JWST is transmitting data back to Earth on a 25.9-gigahertz channel at up to 28 megabits per second. The Ka-band is a portion of the broader K-band (another portion, the Ku-band, was also considered).
The Lagrange points are equilibrium locations where competing gravitational tugs on an object net out to zero. JWST is one of three craft currently occupying L2 (Shown here at an exaggerated distance from Earth). IEEE Spectrum
Both the data-collection and transmission rates of JWST dwarf those of the older
Hubble Space Telescope. Compared to Hubble, which is still active and generates 1 to 2 gigabytes of data daily, JWST can produce up to 57 GB each day (although that amount is dependent on what observations are scheduled).
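Those two numbers also show why the 28-Mb/s Ka-band link is sufficient despite intermittent contact: returning a full day's 57 GB takes only a few hours of downlink time. A quick check (our arithmetic, assuming sustained throughput and ignoring protocol overhead):

```python
# How long does it take to downlink a full day's science data at the
# article's stated Ka-band rate? Assumes sustained 28 Mb/s with no overhead.

daily_data_GB = 57       # up to 57 GB of science data per day
rate_Mbps = 28           # 28 megabits per second on the 25.9-GHz channel

daily_bits = daily_data_GB * 8e9          # decimal GB -> bits
seconds = daily_bits / (rate_Mbps * 1e6)  # transmission time in seconds

print(f"{seconds/3600:.1f} hours of downlink")   # ~4.5 hours
```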
Menzel says he first saw the frequency selection proposals for JWST around 2000, when he was working at
Northrop Grumman. He became the mission systems engineer in 2004. “I knew where the risks were in this mission. And I wanted to make sure that we didn’t get any new risks,” he says.
Ka-band frequencies can transmit more data than X-band (7 to 11.2 GHz) or S-band (2 to 4 GHz), common choices for craft in deep space. A high data rate is a necessity for the scientific work JWST will be undertaking. In addition, according to Carl Hansen, a flight systems engineer at the Space Telescope Science Institute (the science operations center for JWST), a comparable X-band antenna would be so large that the spacecraft would have trouble remaining steady for imaging.
Although the 25.9-GHz Ka-band frequency is the telescope’s workhorse communication channel, it also employs two channels in the S-band. One is the 2.09-GHz uplink that ferries future transmission and scientific observation schedules to the telescope at 16 kilobits per second. The other is the 2.27-GHz, 40-kb/s downlink over which the telescope transmits engineering data—including its operational status, systems health, and other information concerning the telescope’s day-to-day activities.
Any scientific data the JWST collects during its lifetime will need to be stored on board, because the spacecraft doesn’t maintain round-the-clock contact with Earth. Data gathered from its scientific instruments, once collected, is stored within the spacecraft’s 68-GB solid-state drive (3 percent is reserved for engineering and telemetry data).
Alex Hunter, also a flight systems engineer at the Space Telescope Science Institute, says that by the end of JWST’s 10-year mission life, they expect to be down to about 60 GB because of deep-space radiation and wear and tear.
The onboard storage is enough to collect data for about 24 hours before it runs out of room. Well before that becomes an issue, JWST will have scheduled opportunities to beam that invaluable data to Earth.
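The article's figures are mutually consistent: with 3 percent of the 68-GB drive reserved, roughly 66 GB remains for science data, which fills in a bit over a day at the peak 57-GB/day collection rate. A quick check of the arithmetic (ours, not the article's):

```python
# Sanity check on the article's onboard-storage figures.

drive_GB = 68            # solid-state drive capacity
reserved_frac = 0.03     # 3% reserved for engineering and telemetry data
daily_rate_GB = 57       # peak science-data production per day

usable_GB = drive_GB * (1 - reserved_frac)        # ~66 GB for science data
hours_to_fill = usable_GB / daily_rate_GB * 24    # ~28 hours at the peak rate

print(f"usable: {usable_GB:.2f} GB, fills in ~{hours_to_fill:.0f} hours")
```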
Sandy Kwan, a DSN systems engineer, says that contact windows with spacecraft are scheduled 12 to 20 weeks in advance. JWST had a greater number of scheduled contact windows during its commissioning phase, as instruments were brought on line, checked, and calibrated. Most of that process required real-time communication with Earth.
All of the communications channels use the
Reed-Solomon error-correction protocol—the same error-correction standard as used in DVDs and Blu-ray discs as well as QR codes. The lower data-rate S-band channels use binary phase-shift keying—involving phase shifting of a signal’s carrier wave. The Ka-band channel, however, uses quadrature phase-shift keying. Quadrature phase-shift keying can double a channel’s data rate, at the cost of more complicated transmitters and receivers.
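The doubling is easy to see in miniature: binary phase-shift keying carries one bit per symbol, while quadrature phase-shift keying carries two, so at the same symbol rate QPSK moves twice the data. A minimal mapping illustration (ours, not JWST's actual modem code):

```python
# Minimal BPSK vs. QPSK symbol mapping: same symbol rate, twice the bits
# per symbol for QPSK. Illustrative only -- not JWST flight code.

def bpsk_symbols(bits):
    # 1 bit -> 1 symbol: two phases, represented here as +1 / -1
    return [1 if b else -1 for b in bits]

def qpsk_symbols(bits):
    # 2 bits -> 1 symbol: four phases, represented as constellation points
    table = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
    pairs = zip(bits[0::2], bits[1::2])
    return [table[p] for p in pairs]

bits = [1, 0, 1, 1, 0, 0, 0, 1]
print(len(bpsk_symbols(bits)))  # 8 symbols to carry 8 bits
print(len(qpsk_symbols(bits)))  # 4 symbols to carry the same 8 bits
```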
JWST’s communications with Earth incorporate an acknowledgement protocol—only after the JWST gets confirmation that a file has been successfully received will it go ahead and delete its copy of the data to clear up space.
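That handshake amounts to a simple policy: transmit a file, wait for the ground's confirmation, and reclaim onboard storage only on success. A hypothetical sketch of the idea (the class and method names here are invented for illustration, not JWST flight software):

```python
# Hypothetical sketch of an acknowledge-then-delete downlink policy.
# All names are invented for illustration.

class DownlinkBuffer:
    def __init__(self):
        self.stored = {}            # file name -> data awaiting confirmed receipt

    def record(self, name, data):
        self.stored[name] = data

    def downlink(self, name, transmit, ack_received):
        """Send a file; delete the onboard copy only after a confirmed ack."""
        transmit(name, self.stored[name])
        if ack_received(name):
            del self.stored[name]   # safe to reclaim the space
        # no ack: keep the copy and retry during a later contact window

buf = DownlinkBuffer()
buf.record("science_0042.dat", b"...")
buf.downlink("science_0042.dat",
             transmit=lambda n, d: None,     # stand-in for the Ka-band link
             ack_received=lambda n: True)    # stand-in for ground confirmation
print(len(buf.stored))  # 0 -- copy deleted after confirmed receipt
```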
The communications subsystem was assembled along with the rest of the spacecraft bus by
Northrop Grumman, using off-the-shelf components sourced from multiple manufacturers.
JWST has had a long and
often-delayed development, but its communications system has always been a bedrock for the rest of the project. Keeping at least one system dependable means it’s one less thing to worry about. Menzel can remember, for instance, ideas for laser-based optical systems that were invariably rejected. “I can count at least two times where I had been approached by people who wanted to experiment with optical communications,” says Menzel. “Each time they came to me, I sent them away with the old ‘Thank you, but I don’t need it. And I don’t want it.’”
Match ID: 212 Score: 1.43 source: spectrum.ieee.org age: 144 days qualifiers: 1.43 development
Win the race to design and deploy satellite technologies and systems. Learn how new digital engineering techniques can accelerate development and reduce your risk and costs. Download this free whitepaper now!
Our white paper covers:
Software-based digital twin models to reduce costly satellite system re-design
Ways to improve models throughout the product lifecycle, increase confidence, and reduce risks
Match ID: 213 Score: 1.43 source: connectlp.keysight.com age: 210 days qualifiers: 1.43 development
NASA Awards Contracts for Aerospace Testing and Facilities Operations Mon, 11 Apr 2022 17:44 EDT NASA has awarded a contract to Jacobs Technology Inc. of Tullahoma, Tennessee, to provide the agency’s Ames Research Center in Silicon Valley, California with support services for ground-based aerospace test facilities at the center. Match ID: 214 Score: 1.43 source: www.nasa.gov age: 232 days qualifiers: 1.43 california
This presents some unique challenges for Gateway. On the ISS, astronauts spend a substantial amount of time on station upkeep, but Gateway will have to keep itself functional for extended periods without any direct human assistance.
“The things that the crew does on the International Space Station will need to be handled by Gateway on its own,” explains
Julia Badger, Gateway autonomy system manager at NASA’s Johnson Space Center. “There’s also a big difference in the operational paradigm. Right now, ISS has a mission control that’s full time. With Gateway, we’re eventually expecting to have just 8 hours a week of ground operations.” The hundreds of commands that the ISS receives every day to keep it running will still be necessary on Gateway—they’ll just have to come from Gateway itself, rather than from humans back on Earth.
“It’s a new way of thinking compared to ISS. If something breaks on Gateway, we either have to be able to live with it for a certain amount of time, or we’ve got to have the ability to remotely or autonomously fix it.” —Julia Badger, NASA JSC
To make this happen, NASA is developing a
vehicle system manager, or VSM, that will act like the omnipresent computer system found on virtually every science-fiction starship. The VSM will autonomously manage all of Gateway’s functionality, taking care of any problems that come up, to the extent that they can be managed with clever software and occasional input from a distant human. “It’s a new way of thinking compared to ISS,” explains Badger. “If something breaks on Gateway, we either have to be able to live with it for a certain amount of time, or we’ve got to have the ability to remotely or autonomously fix it.”
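The pattern Badger describes, detect a fault, recover autonomously when possible, and otherwise live with it until ground contact, can be sketched as a toy supervisory loop. This is entirely hypothetical (the names and structure are ours, not NASA's VSM design):

```python
# Toy supervisory loop in the spirit of a vehicle system manager:
# try an autonomous recovery for each fault, otherwise defer it until
# the weekly ground contact. Entirely hypothetical -- not NASA's design.

def vsm_step(faults, recovery_actions, deferred):
    for fault in faults:
        action = recovery_actions.get(fault)
        if action is not None:
            action()               # autonomous fix, no ground in the loop
        else:
            deferred.append(fault) # live with it until ground operations

fixed = []
recovery_actions = {"stale_star_tracker": lambda: fixed.append("reset tracker")}
deferred = []

vsm_step(["stale_star_tracker", "degraded_pump"], recovery_actions, deferred)
print(fixed)     # ['reset tracker']
print(deferred)  # ['degraded_pump']
```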
While Gateway itself can be thought of as a robot of sorts, there’s a limited amount that can be reasonably and efficiently done through dedicated automated systems, and NASA had to find a compromise between redundancy and both complexity and mass. For example, there was some discussion about whether Gateway’s hatches should open and close on their own, and NASA ultimately decided to leave the hatches manually operated. But that doesn’t necessarily mean that Gateway won’t be able to open its hatches without human assistance; it just means that there will be a need for robotic hands rather than human ones.
“I hope eventually we have robots up there that can open the hatches,” Badger tells us. She explains that Gateway is being designed with potential intravehicular robots (IVRs) in mind, including things like adding visual markers to important locations, placing convenient charging ports around the station interior, and designing the hatches such that the force required to open them is compatible with the capabilities of robotic limbs. Parts of Gateway’s systems may be modular as well, able to be removed and replaced by robots if necessary. “What we’re trying to do,” Badger says, “is make smart choices about Gateway’s design that don’t add a lot of mass but that will make it easier for a robot to work within the station.”
Robonaut at its test station in front of a manipulation task board on the ISS. JSC/NASA
NASA already has a substantial amount of experience with IVR.
Robonaut 2, a full-size humanoid robot, spent several years on the International Space Station starting in 2011, learning how to perform tasks that would otherwise have to be done by human astronauts. More recently, a trio of cubical, toaster-size, free-flying robots called Astrobees have taken up residence on the ISS, where they’ve been experimenting with autonomous sensing and navigation. A NASA project called ISAAC (Integrated System for Autonomous and Adaptive Caretaking) is currently exploring how robots like Astrobee could be used for a variety of tasks on Gateway, from monitoring station health to autonomously transferring cargo, although at least in the near term, in Badger’s opinion, “maintenance of Gateway, like using robots that can switch out broken components, is going to be more important than logistics types of tasks.”
Badger believes that a combination of a generalized mobile manipulator like Robonaut 2 and a free flyer like Astrobee makes for a good team, and this combination is currently the general concept for Gateway IVR. This is not to say that the intravehicular robots that end up on Gateway will look like the robots that have been working on the ISS, but they’ll be inspired by them, and will leverage all of the experience that NASA has gained with its robots on ISS so far. It might also be useful to have a limited number of specialized robots, Badger says. “For example, if there was a reason to get behind a rack, you may want a snake-type of robot for that.”
An Astrobee robot (this one is named Bumble) on the ISS. JSC/NASA
While NASA is actively preparing for intravehicular robots on Gateway, such robots do not yet exist, and the agency may not be building these robots itself, instead relying on industry partners to deliver designs that meet NASA’s requirements. At launch, and likely for the first several years at least, Gateway will have to take care of itself without internal robotic assistants. However, one of the goals of Gateway is to operate itself completely autonomously for up to three weeks without any contact with Earth at all, mimicking the three-week solar conjunction between Earth and Mars where the sun blocks any communications between the two planets. “I think that we will get IVR on board,” Badger says. “If we really want Gateway to be able to take care of itself for 21 days, IVR is going to be a very important part of that. And having a robot is absolutely something that I think is going to be necessary as we move on to Mars.”
“Having a robot is absolutely something that I think is going to be necessary as we move on to Mars.” —Julia Badger, NASA JSC
Intravehicular robots are just half of the robotic team that will be necessary to keep Gateway running autonomously long-term. Space stations rely on complex external infrastructure for power, propulsion, thermal control, and much more. Since 2001, the ISS has been home to Canadarm2, a 17.6-meter robotic arm, which is able to move around the station to grasp and manipulate objects while under human control from either inside the station or from the ground.
The Canadian Space Agency, in partnership with space technology company MDA, is developing a new robotic-arm system for Gateway, called
Canadarm3, scheduled to launch in 2027. Canadarm3 will include an 8.5-meter-long arm for grappling spacecraft and moving large objects, as well as a smaller, more dexterous robotic arm that can be used for delicate tasks. The smaller arm can even repair the larger arm if necessary. But what really sets Canadarm3 apart from its predecessors is how it’s controlled, according to Daniel Rey, Gateway chief engineer and systems manager at CSA. “One of the very novel things about Canadarm3 is its ability to operate autonomously, without any crew required,” Rey says. This capability relies on a new generation of software and hardware that gives the arm a sense of touch as well as the ability to react to its environment without direct human supervision.
“With Canadarm3, we realize that if we want to get ready for Mars, more autonomy will be required.” —Daniel Rey, CSA
Even though Gateway will be a thousand times farther away from Earth than the ISS, Rey explains that the added distance (about 400,000 kilometers) isn’t what really necessitates Canadarm3’s added autonomy. “Surprisingly, the location of Gateway in its orbit around the moon has a time delay to Earth that is not all that different from the time delay in low Earth orbit when you factor in various ground stations that signals have to pass through,” says Rey. “With Canadarm3, we realize that if we want to get ready for Mars, where that will no longer be the case, more autonomy will be required.”
Canadarm3’s autonomous tasks on Gateway will include external inspection, unloading logistics vehicles, deploying science payloads, and repairing Gateway by swapping damaged components with spares. Rey tells us that there will also be a science logistics airlock, with a moving table that can be used to pass equipment in and out of Gateway. “It’ll be possible to deploy external science, or to bring external systems inside for repair, and for future internal robotic systems to cooperate with Canadarm3. I think that’ll be a really exciting thing to see.”
Even though it’s going to take a couple of extra years for Gateway’s robotic residents to arrive, the station will be operating mostly autonomously (by necessity) as soon as the Power and Propulsion Element and the Habitation and Logistics Outpost begin their journey to lunar orbit in November 2024. Several science payloads will be along for the ride, including heliophysics and space weather experiments.
Gateway itself, though, is arguably the most important experiment of all. Its autonomous systems, whether embodied in internal and external robots or not, will be undergoing continual testing, and Gateway will need to prove itself before we’re ready to trust its technology to take us into deep space. In addition to being able to operate for 21 days without communications, one of Gateway’s eventual requirements is to be able to function for up to three years without any crew visits. This is the level of autonomy and reliability that we’ll need to be prepared for our exploration of Mars, and beyond.
Match ID: 215 Score: 1.43 source: spectrum.ieee.org age: 236 days qualifiers: 1.43 development
NASA Awards SETI Institute Contract for Planetary Protection Support Fri, 10 Jul 2020 12:04 EDT NASA has awarded the SETI Institute in Mountain View, California, a contract to support all phases of current and future planetary protection missions to ensure compliance with planetary protection standards. Match ID: 216 Score: 1.43 source: www.nasa.gov age: 872 days qualifiers: 1.43 california
Get ready: Neurotechnology is coming to the workplace. Neural sensors are now reliable and affordable enough to support commercial pilot projects that extract productivity-enhancing data from workers’ brains. These projects aren’t confined to specialized workplaces; they’re also happening in offices, factories, farms, and airports. The companies and people behind these neurotech devices are certain that they will improve our lives. But there are serious questions about whether work should be organized around certain functions of the brain, rather than the person as a whole.
To be clear, the kind of neurotech that’s currently available is nowhere close to reading minds. Sensors detect electrical activity across different areas of the brain, and the patterns in that activity can be broadly correlated with different feelings or physiological responses, such as stress, focus, or a reaction to external stimuli. These data can be exploited to make workers more efficient—and, proponents of the technology say, to make them happier. Two of the most interesting innovators in this field are the Israel-based startup
InnerEye, which aims to give workers superhuman abilities, and Emotiv, a Silicon Valley neurotech company that’s bringing a brain-tracking wearable to office workers, including those working remotely.
The fundamental technology that these companies rely on is not new:
Electroencephalography (EEG) has been around for about a century, and it’s commonly used today in both medicine and neuroscience research. For those applications, the subject may have up to 256 electrodes attached to their scalp with conductive gel to record electrical signals from neurons in different parts of the brain. More electrodes, or “channels,” mean that doctors and scientists can get better spatial resolution in their readouts—they can better tell which neurons are associated with which electrical signals.
What is new is that EEG has recently broken out of clinics and labs and has entered the consumer marketplace. This move has been driven by a new class of “dry” electrodes that can operate without conductive gel, a substantial reduction in the number of electrodes necessary to collect useful data, and advances in artificial intelligence that make it far easier to interpret the data. Some EEG headsets are even available directly to consumers for a few hundred dollars.
While the public may not have gotten the memo, experts say the neurotechnology is mature and ready for commercial applications. “This is not sci-fi,” says
James Giordano, chief of neuroethics studies at Georgetown University Medical Center. “This is quite real.”
In an office in Herzliya, Israel,
Sergey Vaisman sits in front of a computer. He’s relaxed but focused, silent and unmoving, and not at all distracted by the seven-channel EEG headset he’s wearing. On the computer screen, images rapidly appear and disappear, one after another. At a rate of three images per second, it’s just possible to tell that they come from an airport X-ray scanner. It’s essentially impossible to see anything beyond fleeting impressions of ghostly bags and their contents.
“Our brain is an amazing machine,” Vaisman tells us as the stream of images ends. The screen now shows an album of selected X-ray images that were just flagged by Vaisman’s brain, most of which are now revealed to have hidden firearms. No one can knowingly identify and flag firearms among the jumbled contents of bags when three images are flitting by every second, but Vaisman’s brain has no problem doing so behind the scenes, with no action required on his part. The brain processes visual imagery very quickly. According to Vaisman, the decision-making process to determine whether there’s a gun in complex images like these takes just 300 milliseconds.
Brain data can be exploited to make workers more efficient—and, proponents of the technology say, to make them happier.
What takes much more time are the cognitive and motor processes that occur after the decision making—planning a response (such as saying something or pushing a button) and then executing that response. If you can skip these planning and execution phases and instead use EEG to directly access the output of the brain’s visual processing and decision-making systems, you can perform image-recognition tasks far faster. The user no longer has to actively think: For an expert, just that fleeting first impression is enough for their brain to make an accurate determination of what’s in the image.
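The idea of reading a recognition response directly off the EEG, rather than waiting for a button press, can be sketched in a few lines. This is purely illustrative and not InnerEye’s actual algorithm (which uses deep learning); here a simple statistical threshold on each image’s evoked-response amplitude stands in for the trained classifier, and the function name and threshold are hypothetical.

```python
# Illustrative sketch of RSVP (rapid serial visual presentation) target
# flagging: one EEG "epoch" (window of samples) per image shown, and any
# epoch whose mean amplitude deviates strongly from the session baseline
# is flagged as a target response.

def flag_targets(epochs, threshold=2.0):
    """epochs: list of per-image EEG windows (lists of samples).
    Returns indices of images whose evoked response deviates from the
    session baseline by more than `threshold` standard deviations --
    a stand-in for a P300-style target response."""
    means = [sum(e) / len(e) for e in epochs]
    mu = sum(means) / len(means)
    var = sum((m - mu) ** 2 for m in means) / len(means)
    sd = var ** 0.5 or 1.0  # avoid divide-by-zero on flat data
    return [i for i, m in enumerate(means) if abs(m - mu) / sd > threshold]
```

In a real system the per-epoch features would be far richer than a mean amplitude, and the decision boundary would be learned per user during training sessions.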
InnerEye’s image-classification system operates at high speed by providing a shortcut to the brain of an expert human. As an expert focuses on a continuous stream of images (from three to 10 images per second, depending on complexity), a commercial EEG system combined with InnerEye’s software can distinguish the characteristic response the expert’s brain produces when it recognizes a target. In this example, the target is a weapon in an X-ray image of a suitcase, representing an airport-security application. Chris Philpot
Vaisman is the vice president of R&D of
InnerEye, an Israel-based startup that recently came out of stealth mode. InnerEye uses deep learning to classify EEG signals into responses that indicate “targets” and “nontargets.” Targets can be anything that a trained human brain can recognize. In addition to developing security screening, InnerEye has worked with doctors to detect tumors in medical images, with farmers to identify diseased plants, and with manufacturing experts to spot product defects. For simple cases, InnerEye has found that our brains can handle image recognition at rates of up to 10 images per second. And, Vaisman says, the company’s system produces results just as accurate as a human would when recognizing and tagging images manually—InnerEye is merely using EEG as a shortcut to that person’s brain to drastically speed up the process.
While using the InnerEye technology doesn’t require active decision making, it does require training and focus. Users must be experts at the task, well trained in identifying a given type of target, whether that’s firearms or tumors. They must also pay close attention to what they’re seeing—they can’t just zone out and let images flash past. InnerEye’s system measures focus very accurately, and if the user blinks or stops concentrating momentarily, the system detects it and shows the missed images again.
[Interactive demos from the original article] Can you spot the manufacturing defects? Ten images are displayed every second for five seconds on loop; there are three targets. Can you spot the weapon? Three images are displayed every second for five seconds on loop; there is one weapon.
Having a human brain in the loop is especially important for classifying data that may be open to interpretation. For example, a well-trained image classifier may be able to determine with reasonable accuracy whether an X-ray image of a suitcase shows a gun, but if you want to determine whether that X-ray image shows something else that’s vaguely suspicious, you need human experience. People are capable of detecting something unusual even if they don’t know quite what it is.
“We can see that uncertainty in the brain waves,” says InnerEye founder and chief technology officer
Amir Geva. “We know when they aren’t sure.” Humans have a unique ability to recognize and contextualize novelty, a substantial advantage that InnerEye’s system has over AI image classifiers. InnerEye then feeds that nuance back into its AI models. “When a human isn’t sure, we can teach AI systems to be not sure, which is better training than teaching the AI system just one or zero,” says Geva. “There is a need to combine human expertise with AI.” InnerEye’s system enables this combination, as every image can be classified by both computer vision and a human brain.
Using InnerEye’s system is a positive experience for its users, the company claims. “When we start working with new users, the first experience is a bit overwhelming,” Vaisman says. “But in one or two sessions, people get used to it, and they start to like it.” Geva says some users do find it challenging to maintain constant focus throughout a session, which lasts up to 20 minutes, but once they get used to working at three images per second, even two images per second feels “too slow.”
In a security-screening application, three images per second is approximately an order of magnitude faster than an expert can manually achieve. InnerEye says their system allows far fewer humans to handle far more data, with just two human experts redundantly overseeing 15 security scanners at once, supported by an AI image-recognition system that is being trained at the same time, using the output from the humans’ brains.
InnerEye is currently partnering with a handful of airports around the world on pilot projects. And it’s not the only company working to bring neurotech into the workplace.
How Emotiv’s brain-tracking technology works
Emotiv’s MN8 earbuds collect two channels of EEG brain data. The earbuds can also be used for phone calls and music.
When it comes to neural monitoring for productivity and well-being in the workplace, the San Francisco–based company
Emotiv is leading the charge. Since its founding 11 years ago, Emotiv has released three models of lightweight brain-scanning headsets. Until now the company had mainly sold its hardware to neuroscientists, with a sideline business aimed at developers of brain-controlled apps or games. Emotiv started advertising its technology as an enterprise solution only this year, when it released its fourth model, the MN8 system, which tucks brain-scanning sensors into a pair of discreet Bluetooth earbuds.
Tan Le, Emotiv’s CEO and cofounder, sees neurotech as the next trend in wearables, a way for people to get objective “brain metrics” of mental states, enabling them to track and understand their cognitive and mental well-being. “I think it’s reasonable to imagine that five years from now this [brain tracking] will be quite ubiquitous,” she says. When a company uses the MN8 system, workers get insight into their individual levels of focus and stress, and managers get aggregated and anonymous data about their teams.
The Emotiv Experience
Emotiv’s MN8 system uses earbuds to capture two channels of EEG data, from which the company’s proprietary algorithms derive performance metrics for attention and cognitive stress. It’s very difficult to draw conclusions from raw EEG signals [top], especially with only two channels of data. The MN8 system relies on machine-learning models that Emotiv developed using a decade’s worth of data from its earlier headsets, which have more electrodes.
To determine a worker’s level of attention and cognitive stress, the MN8 system uses a variety of analyses. One shown here [middle, bar graphs] reveals increased activity in the low-frequency ranges (theta and alpha) when a worker’s attention is high and cognitive stress is low; when the worker has low attention and high stress, there’s more activity in the higher-frequency ranges (beta and gamma). This analysis and many others feed into the models that present simplified metrics of attention and cognitive stress [bottom] to the worker.
Emotiv launched its enterprise technology into a world that is fiercely debating the future of the workplace. Workers are feuding with their employers about return-to-office plans following the pandemic, and companies are increasingly using “
bossware” to keep tabs on employees—whether staffers or gig workers, working in the office or remotely. Le says Emotiv is aware of these trends and is carefully considering which companies to work with as it debuts its new gear. “The dystopian potential of this technology is not lost on us,” she says. “So we are very cognizant of choosing partners that want to introduce this technology in a responsible way—they have to have a genuine desire to help and empower employees,” she says.
Lee Daniels, a consultant who works for the global real estate services company JLL, has spoken with a lot of C-suite executives lately. “They’re worried,” says Daniels. “There aren’t as many people coming back to the office as originally anticipated—the hybrid model is here to stay, and it’s highly complex.” Executives come to Daniels asking how to manage a hybrid workforce. “This is where the neuroscience comes in,” he says.
Emotiv has partnered with JLL, which has begun to use the MN8 earbuds to help its clients collect “true scientific data,” Daniels says, about workers’ attention, distraction, and stress, and how those factors influence both productivity and well-being. Daniels says JLL is currently helping its clients run short-term experiments using the MN8 system to track workers’ responses to new collaboration tools and various work settings; for example, employers could compare the productivity of in-office and remote workers.
“The dystopian potential of this technology is not lost on us.” —Tan Le, Emotiv CEO
Emotiv CTO Geoff Mackellar believes the new MN8 system will succeed because of its convenient and comfortable form factor: The multipurpose earbuds also let the user listen to music and answer phone calls. The downside of earbuds is that they provide only two channels of brain data. When the company first considered this project, Mackellar says, his engineering team looked at the rich data set they’d collected from Emotiv’s other headsets over the past decade. The company boasts that academics have conducted more than 4,000 studies using Emotiv tech. From that trove of data—from headsets with 5, 14, or 32 channels—Emotiv isolated the data from the two channels the earbuds could pick up. “Obviously, there’s less information in the two sensors, but we were able to extract quite a lot of things that were very relevant,” Mackellar says.
Once the Emotiv engineers had a hardware prototype, they had volunteers wear the earbuds and a 14-channel headset at the same time. By recording data from the two systems in unison, the engineers trained a machine-learning algorithm to identify the signatures of attention and cognitive stress from the relatively sparse MN8 data. The brain signals associated with attention and stress have been well studied, Mackellar says, and are relatively easy to track. Although everyday activities such as talking and moving around also register on EEG, the Emotiv software filters out those artifacts.
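The paired-recording training setup can be sketched abstractly: features from the two earbud channels are fit against labels derived from the 14-channel headset worn at the same time. The sketch below uses a simple perceptron as a stand-in for Emotiv’s machine-learning models, which are not public; the function names and the two-feature representation are assumptions for illustration.

```python
# Sketch of supervised training on paired recordings: sparse (2-channel)
# features, labels derived from the dense (14-channel) reference headset.

def train_paired(sparse_features, dense_labels, epochs=100, lr=0.1):
    """sparse_features: list of (f1, f2) feature pairs from the earbuds.
    dense_labels: 0/1 labels (e.g. attentive or not) from the full
    headset.  Returns perceptron weights [w1, w2, bias]."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (f1, f2), y in zip(sparse_features, dense_labels):
            pred = 1 if f1 * w[0] + f2 * w[1] + w[2] > 0 else 0
            err = y - pred
            w[0] += lr * err * f1
            w[1] += lr * err * f2
            w[2] += lr * err
    return w

def predict(w, f1, f2):
    """Apply the trained weights to a new sparse-feature pair."""
    return 1 if f1 * w[0] + f2 * w[1] + w[2] > 0 else 0
```

Once trained, only the sparse-channel model ships with the earbuds; the dense headset is needed only to generate training labels.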
The app that’s paired with the MN8 earbuds doesn’t display raw EEG data. Instead, it processes that data and shows workers two simple metrics relating to their individual performance. One squiggly line shows the rise and fall of workers’ attention to their tasks—the degree of focus and the dips that come when they switch tasks or get distracted—while another line represents their cognitive stress. Although short periods of stress can be motivating, too much for too long can erode productivity and well-being. The MN8 system will therefore sometimes suggest that the worker take a break. Workers can run their own experiments to see what kind of break activity best restores their mood and focus—maybe taking a walk, or getting a cup of coffee, or chatting with a colleague.
What neuroethicists think about neurotech in the workplace
While MN8 users can easily access data from their own brains, employers don’t see individual workers’ brain data. Instead, they receive aggregated data to get a sense of a team or department’s attention and stress levels. With that data, companies can see, for example, on which days and at which times of day their workers are most productive, or how a big announcement affects the overall level of worker stress.
Emotiv emphasizes the importance of anonymizing the data to protect individual privacy and prevent people from being promoted or fired based on their brain metrics. “The data belongs to you,” says Emotiv’s Le. “You have to explicitly allow a copy of it to be shared anonymously with your employer.” If a group is too small for real anonymity, Le says, the system will not share that data with employers. She also predicts that the device will be used only if workers opt in, perhaps as part of an employee wellness program that offers discounts on medical insurance in return for using the MN8 system regularly.
However, workers may still be worried that employers will somehow use the data against them.
Karen Rommelfanger, founder of the Institute of Neuroethics, shares that concern. “I think there is significant interest from employers” in using such technologies, she says. “I don’t know if there’s significant interest from employees.”
Both she and Georgetown’s Giordano doubt that such tools will become commonplace anytime soon. “I think there will be pushback” from employees on issues such as privacy and worker rights, says Giordano. Even if the technology providers and the companies that deploy the technology take a responsible approach, he expects questions to be raised about who owns the brain data and how it’s used. “Perceived threats must be addressed early and explicitly,” he says.
Giordano says he expects workers in the United States and other western countries to object to routine brain scanning. In China, he says, workers have reportedly been more receptive to experiments with such technologies. He also believes that brain-monitoring devices will really take off first in industrial settings, where a momentary lack of attention can lead to accidents that injure workers and hurt a company’s bottom line. “It will probably work very well under some rubric of occupational safety,” Giordano says. It’s easy to imagine such devices being used by companies involved in
trucking, construction, warehouse operations, and the like. Indeed, at least one such product, an EEG headband that measures fatigue, is already on the market for truck drivers and miners.
Giordano says that using brain-tracking devices for safety and wellness programs could be a slippery slope in any workplace setting. Even if a company focuses initially on workers’ well-being, it may soon find other uses for the metrics of productivity and performance that devices like the MN8 provide. “Metrics are meaningless unless those metrics are standardized, and then they very quickly become comparative,” he says.
Rommelfanger adds that no one can foresee how workplace neurotech will play out. “I think most companies creating neurotechnology aren’t prepared for the society that they’re creating,” she says. “They don’t know the possibilities yet.”
This article appears in the December 2022 print issue.
Match ID: 217 Score: 0.71 source: spectrum.ieee.org age: 10 days qualifiers: 0.71 startup
A rocket built by Indian startup Skyroot has become the country’s first privately developed launch vehicle to reach space, following a successful maiden flight earlier today. The suborbital mission is a major milestone for India’s private space industry, say experts, though more needs to be done to nurture the fledgling sector.
In the longer run, India’s space industry has ambitions of capturing a significant chunk of the global launch market.
Pawan Kumar Chandana, cofounder of the Hyderabad-based startup, says the success of the launch is a major victory for India’s nascent space industry, but the buildup to the mission was nerve-racking. “We were pretty confident on the vehicle, but, as you know, rockets are very notorious for failure,” he says. “Especially in the last 10 seconds of countdown, the heartbeat was racing up. But once the vehicle had crossed the launcher and then went into the stable trajectory, I think that was the moment of celebration.”
At just 6 meters (20 feet) long and weighing only around 550 kilograms (0.6 tonnes), the Vikram-S is not designed for commercial use. Today’s mission, called Prarambh, which means “the beginning” in Sanskrit, was designed to test key technologies that will be used to build the startup’s first orbital rocket, the Vikram I. The rocket will reportedly be capable of lofting as much as 480 kg to a 500-km altitude and is slated for a maiden launch next October.
Skyroot cofounder Pawan Kumar Chandana standing in front of the Vikram-S rocket at the Satish Dhawan Space Centre, on the east coast of India. Skyroot
In particular, the mission has validated Skyroot’s decision to go with a novel all-carbon fiber structure to cut down on weight, says Chandana. It also allowed the company to test 3D-printed thrusters, which were used for spin stabilization in Vikram-S but will power the upper stages of its later rockets. Perhaps the most valuable lesson, though, says Chandana, was the complexity of interfacing Skyroot's vehicle with ISRO’s launch infrastructure. “You can manufacture the rocket, but launching it is a different ball game,” he says. “That was a great learning experience for us and will really help us accelerate our orbital vehicle.”
Skyroot is one of several Indian space startups looking to capitalize on recent efforts by the Indian government to liberalize its highly regulated space sector. Due to the dual-use nature of space technology, ISRO has historically had a government-sanctioned monopoly on most space activities, says Rajeswari Pillai Rajagopalan, director of the Centre for Security, Strategy and Technology at the Observer Research Foundation think tank, in New Delhi. While major Indian engineering players like Larsen & Toubro and Godrej Aerospace have long supplied ISRO with components and even entire space systems, the relationship has been one of a supplier and vendor, she says.
But in 2020, Finance Minister Nirmala Sitharaman announced a series of reforms to allow private players to build satellites and launch vehicles, carry out launches, and provide space-based services. The government also created the Indian National Space Promotion and Authorisation Centre (InSpace), a new agency designed to act as a link between ISRO and the private sector, and affirmed that private companies would be able to take advantage of ISRO’s facilities.
The first launch of a private rocket from an ISRO spaceport is a major milestone for the Indian space industry, says Rajagopalan. “This step itself is pretty crucial, and it’s encouraging to other companies who are looking at this with a lot of enthusiasm and excitement,” she says. But more needs to be done to realize the government’s promised reforms, she adds. The Space Activities Bill that is designed to enshrine the country’s space policy in legislation has been languishing in draft form for years, and without regulatory clarity, it’s hard for the private sector to justify significant investments. “These are big, bold statements, but these need to be translated into actual policy and regulatory mechanisms,” says Rajagopalan.
Skyroot’s launch undoubtedly signals the growing maturity of India’s space industry, says Saurabh Kapil, associate director in PwC’s space practice. “It’s a critical message to the Indian space ecosystem, that we can do it, we have the necessary skill set, we have those engineering capabilities, we have those manufacturing or industrialization capabilities,” he says.
The Vikram-S rocket blasting off from the Satish Dhawan Space Centre, on the east coast of India. Skyroot
However, crossing this technical milestone is only part of the challenge, he says. The industry also needs to demonstrate a clear market for the kind of launch vehicles that companies like Skyroot are building. While private players are showing interest in launching small satellites for applications like agriculture and infrastructure monitoring, he says, these companies will be able to build sustainable businesses only if they are allowed to compete for more lucrative government and defense-sector contracts.
In the longer run, though, India’s space industry has ambitions of capturing a significant chunk of the global launch market, says Kapil. ISRO has already developed a reputation for both reliability and low cost—its 2014 mission to Mars cost just US $74 million, one-ninth the cost of a NASA Mars mission launched the same week. That is likely to translate to India’s private space industry, too, thanks to a considerably lower cost of skilled labor, land, and materials compared with those of other spacefaring nations, says Kapil. “The optimism is definitely there that because we are low on cost and high on reliability, whoever wants to build and launch small satellites is largely going to come to India,” he says.
Match ID: 218 Score: 0.71 source: spectrum.ieee.org age: 11 days qualifiers: 0.71 startup
The phrase “from a single drop of blood” is full of both promise and peril for researchers trying to integrate clinical-quality medical testing technology with consumer devices like smartphones. While university researchers and commercial startups worldwide continue to introduce innovative new consumer-friendly takes on tests that have resided in laboratories for decades, the collective memory of the fraud perpetrated by those behind Theranos’s discredited blood-testing platform is still pervasive.
“What are you claiming from a single drop of blood?” says Shyamnath Gollakota, director of the mobile intelligence lab at the University of Washington’s Paul G. Allen School of Computer Science and Engineering. Gollakota and colleagues have developed a proof-of-concept test that analyzes how quickly a person’s blood clots from a single drop, using a smartphone’s camera and haptic motor, a small attached cup, and a floating piece of copper about the size of a ballpoint pen’s writing tip.
To activate the system, the user adds a drop of blood from a finger prick to a small cup attached to a bracket that fits over the phone. The phone’s motor then shakes the cup while the camera monitors the movement of the copper particle, which slows down and eventually stops as the clot forms. To calculate the time it takes the blood to clot, the phone collects two time stamps: the first when the user inserts the blood, and the second when the particle stops moving. In the original study (published in Nature Communications), conducted in a medical facility, the technology performed in line with commercial coagulation tests; Gollakota’s team is now studying how it works in at-home environments.
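The timing logic described above can be sketched in a few lines. This is a hypothetical illustration, not the team's actual implementation: the function name, the stillness threshold, and the frame-count parameter are all assumptions. It takes per-frame particle positions from the camera and reports the interval between blood insertion and the moment the particle is judged to have stopped.

```python
def clot_time(samples, insert_time, still_threshold=0.5, still_frames=30):
    """Estimate clotting time from tracked particle positions.

    samples: list of (timestamp_seconds, x, y) tuples, one per video frame.
    insert_time: timestamp when the blood drop was added (first time stamp).
    still_threshold: max per-frame displacement (pixels) to count as "still"
        -- an illustrative value, not from the study.
    still_frames: consecutive still frames required to declare the particle
        stopped (about 1 second at 30 fps).

    Returns seconds from insertion to the start of the still run (the second
    time stamp minus the first), or None if the particle never stops.
    """
    run = 0
    for i in range(1, len(samples)):
        t0, x0, y0 = samples[i - 1]
        t1, x1, y1 = samples[i]
        # Euclidean displacement of the copper particle between frames
        moved = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        if moved < still_threshold:
            run += 1
            if run >= still_frames:
                # The first frame of the still run marks when the clot
                # trapped the particle.
                return samples[i - still_frames + 1][0] - insert_time
        else:
            run = 0
    return None
```

In practice the positions would come from tracking the particle in the camera feed; debouncing over a run of frames guards against a single noisy detection being mistaken for the particle stopping.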
If the technology ever enters the commercial realm, those with conditions such as atrial fibrillation or who have mechanical heart valves might be able to test their coagulation times quickly and simply themselves instead of making frequent trips to doctors’ offices or going without testing at all—they would have to visit a doctor only when their home tests are out of range. Gollakota is careful not to claim the technology can do too much, but he is also dedicated to making its potentially lifesaving capabilities available to anyone with a smartphone.
“We are not trying to say we can do miracles from a single drop of blood, but we are trying to say the devices that exist in hospitals to test for this haven’t changed much for 20 or 30 years,” Gollakota said. “But smartphones have been changing a lot. They have vibration motors, they have a camera, and these sensors exist on almost any smartphone.”
Ron Paulus, executive in residence at venture capital firm General Catalyst, said the Gollakota team’s technology hews to a trio of ongoing trends he sees with smartphones in health care. The first is the ability to interact with current lab infrastructure for things like ordering and scheduling tests and receiving results directly instead of relying on a doctor as middleman. The second is using the phone in the field as a power source for a separate plug-in device, or as a bridge to a wireless module, with the analyzing intelligence built into that attachment. The third is using the phone as both a power source and an analyzing platform.
There is no shortage of devices in the second of Paulus’s categories. One example he cited was a dongle that plugged into a phone’s headphone jack and performed tests for HIV and syphilis, returning results in 15 minutes; however, the project’s senior author, Samuel Sia, Columbia University vice provost and professor of biomedical engineering, said it did not advance to commercialization.
A similar device is being developed by Sudbury, Ontario–based Verv Technologies, which is perfecting a platform that uses a drop of blood from a finger prick, a disposable test cartridge, a Bluetooth-enabled analyzer, and a connected smartphone app that gives the user results in 15 minutes. The company recently received C$3.8 million in seed funding from Crumlin, Northern Ireland–based Randox Laboratories, as well as a C$314,000 grant, shared with McMaster University, from the Natural Sciences and Engineering Research Council of Canada; the grant will allow the McMaster research team to validate and derisk the technologies, according to Canadian Healthcare Technology.
Paulus said consumer-ready smartphone-enabled tests are promising but not ready for mass market adoption yet.
“We’re getting closer, but we’re still not there,” he said. “People can’t go through an eight-step process that requires any kind of technology expertise. It has to be made so any normal, regular person can just do it and can’t really make an error, and it has to be a reliable test. But there is no reason why in three to seven years, people should have to go out for a routine test, the kind of things people go to urgent care for. There is going to be a relentless push into this democratization.”
Ironically, both Paulus and Gollakota think the widespread at-home testing precipitated by the COVID pandemic made multistep user tests (swabbing, dipping indicators, and reading results) familiar to a large audience, buying time while developers perfect more streamlined devices.
“With COVID tests there were a lot of things we ended up doing ourselves and people are used to it in the home scenario now,” Gollakota said. “So I don’t think it’s completely far-fetched to expect people to be able to do testing themselves with multipart tests. But I also think the idea of going forward is to roll the whole thing into one simple attachment.”
Match ID: 219 Score: 0.71 source: spectrum.ieee.org age: 12 days qualifiers: 0.71 startup
Startups Have a Sellout Problem. There's a Better Way Sat, 29 Oct 2022 13:35:46 +0000 Startups like Meta and Twitter serve as digital infrastructure, but aren't accountable to users. Some startups are trying to chart a new way to exit that focuses on community—not shareholders. Match ID: 220 Score: 0.71 source: www.wired.com age: 31 days qualifiers: 0.71 startup