Country’s reforms meant there were as many women in green shirts at Education City as there were supporting Poland
Saudi Arabia have been bringing the noise in Qatar. Their fans have travelled in greater numbers than those of any other country, with only Argentina coming close. The emerald green shirt is a common sight across Doha. They’re on the corniche and in the metro and, in their first two Group C fixtures, they have generated a fearsome atmosphere within the ground.
It may seem an observation that ought not to need making, but the Saudi fanbase in Qatar is made up of both men and women. At Education City on Saturday afternoon, perhaps one in 20 of those making their way into the stadium were women, roughly matching the number there to support Poland. This is a first.
To mark the park’s birthday, we take a look at its best attractions, beyond its fabulously wild moors and valleys
Aside from its vast expanse of heather, undulating hills and ancient woodlands, there’s so much to love about the North York Moors national park, which celebrates its 70th anniversary this month. It’s where sheep really do roam on village greens, and you can pass through serene valleys on a heritage train. The skies are so clear here that the region provides some of the best stargazing in the country. From ancient archaeological sites and abbeys to some of England’s finest views, here are its top attractions.
The Magnum photographer’s image of a family in Sicily recalls Fellini and Visconti in its romantic depiction of everyday Italian life
Bruno Barbey chanced upon this family defying gravity on their dad’s scooter in Palermo in 1963. The French-Moroccan photographer had been travelling in Italy for a couple of years by then, restless for exactly this kind of image, with its seductive mix of humour and authenticity. Has there ever been a better articulation of contrasting roles in the patriarchal family? Father sitting comfortably in his jacket and cap and smiling for the camera, while behind him his possibly pregnant wife sees trouble ahead, as she and their three kids and their big checked bag compete for precarious discomfort.
Barbey, then 22, had gone to Italy to try to find pictures that captured “a national spirit” as the country sought to rediscover the dolce vita in cities still recovering from war. He travelled in an old VW van, and in Palermo in particular he located scenes that might have been choreographed for the working-class heroes of the Italian neorealist films, the self-absorbed dreamers of Fellini and Visconti (The Leopard, the latter’s Hollywood epic set in Sicily, was released the same year). Barbey’s camera, with its wide-angle lens, picked up the detail of vigorous crowd scenes among street children and barflies and religious processions. His book, The Italians, now republished, is a time capsule of that already disappearing black-and-white world of priests and mafiosi and nightclub girls and nuns.
Richard Drax reported to have visited Caribbean island for meeting on next steps, including plans for former sugar plantation
The government of Barbados is considering plans to make a wealthy Conservative MP the first individual to pay reparations for his ancestor’s pivotal role in slavery.
The Observer understands that Richard Drax, MP for South Dorset, recently travelled to the Caribbean island for a private meeting with the country’s prime minister, Mia Mottley. A report is now before Mottley’s cabinet laying out the next steps, which include legal action in the event that no agreement is reached with Drax.
Home Office argues people trafficked to Syria were exposed to extreme violence which poses ‘almighty problem’
People trafficked to Syria and radicalised remain threats to national security as they may be desensitised after exposure to extreme violence, the Home Office has argued, in contesting Shamima Begum’s appeal against the removal of her British citizenship.
Begum was 15 when she travelled from her home in Bethnal Green, east London, through Turkey and into territory controlled by Islamic State (IS). After she was found, nine months pregnant in a Syrian refugee camp in February 2019, the then home secretary, Sajid Javid, revoked her British citizenship on national security grounds.
He began his career in 1980 as a management trainee at the National Dairy Development Board, in Anand, India. A year later he joined Milma, a state government marketing cooperative for the dairy industry, in Thiruvananthapuram, as a manager of planning and systems. After 15 years with Milma, he joined IBM in Tokyo as a manager of technology services.
In 2000 he helped found InApp, a company in Palo Alto, Calif., that provides software development services. He served as its CEO and executive chairman until he died.
Raja was the 2011–2012 chair of the IEEE Humanitarian Activities Committee. He wanted to find a way to mobilize engineers to apply their expertise to develop sustainable solutions that help their local community. To achieve the goal, in 2011 he founded IEEE SIGHT. Today there are more than 150 SIGHT groups in 50 countries that are working on projects such as sustainable irrigation and photovoltaic systems.
For the past two years, Raja chaired the IEEE Admission and Advancement Review Panel, which approves applications for new members and elevations to higher membership grades.
He was a member of the International Centre for Free and Open Source Software’s advisory board. The organization was established by the government of Kerala, India, to facilitate the development and distribution of free, open-source software. Raja also served on the board of directors at Bedroc, an IT staffing and support firm in Nashville.
Terry was a computer engineer at Hewlett-Packard in Fort Collins, Colo., for 18 years.
He joined HP in 1978 as a software developer, and he chaired the Portable Operating System Interface (POSIX) working group. POSIX is a family of standards specified by the IEEE Computer Society for maintaining compatibility among operating systems. While there, he also developed software for the Motorola 68000 microprocessor.
Terry left HP in 1997 to join Softway Solutions, also in Fort Collins, where he developed tools for Interix, a Unix subsystem of the Windows NT operating system. After Microsoft acquired Softway in 1999, he stayed on as a senior software development engineer at its Seattle location. There he worked on static analysis, a method of computer-program debugging that is done by examining the code without executing the program. He also helped to create SAL, a Microsoft source-code annotation language, which was developed to make code design easier to understand and analyze.
Terry retired in 2014. He loved science fiction, boating, cooking, and spending time with his family, according to his daughter, Kristin.
He earned a bachelor’s degree in electrical engineering in 1970 and a Ph.D. in computer science in 1978, both from the University of Washington in Seattle.
Signal processing engineer
Life senior member, 70; died 25 August
Sandham applied his signal processing expertise to a wide variety of disciplines including medical imaging, biomedical data analysis, and geophysics.
He began his career in 1974 as a physicist at the University of Glasgow. While working there, he pursued a Ph.D. in geophysics. He earned his degree in 1981 at the University of Birmingham in England. He then joined the British National Oil Corp. (now Britoil) as a geophysicist.
In 1986 he left to join the University of Strathclyde, in Glasgow, as a lecturer in the signal processing department. During his time at the university, he published more than 200 journal papers and five books that addressed blood glucose measurement, electrocardiography data analysis and compression, medical ultrasound, MRI segmentation, prosthetic limb fitting, and sleep apnea detection.
Sandham left the university in 2003 and founded Scotsig, a signal processing consulting and research business, also in Glasgow.
Sandham earned his bachelor’s degree in electrical engineering in 1974 from the University of Glasgow.
Stephen M. Brustoski
Life member, 69; died 6 January
For 40 years, Brustoski worked as a loss-prevention engineer for insurance company FM Global. He retired from the company, which was headquartered in Johnston, R.I., in 2014.
He was an elder at his church, CrossPoint Alliance, in Akron, Ohio, where he oversaw administrative work and led Bible studies and prayer meetings. He was an assistant scoutmaster for 12 years, and he enjoyed hiking and traveling the world with his family, according to his wife, Sharon.
Brustoski earned a bachelor’s degree in electrical engineering in 1973 from the University of Akron.
President and CEO of Essex Corp.
Life senior member, 96; died 7 May 2020
As president and CEO of Essex Corp., in Columbia, Md., Letaw handled the development and commercialization of optoelectronic and signal processing solutions for defense, intelligence, and commercial customers. He retired in 1995.
He had served in World War II as an aviation engineer for the U.S. Army. After he was discharged, he earned a bachelor’s degree in chemistry, then a master’s degree and Ph.D., all from the University of Florida in Gainesville, in 1949, 1951, and 1952.
After he graduated, he became a postdoctoral assistant at the University of Illinois at Urbana-Champaign. He left to become a researcher at Raytheon Technologies, an aerospace and defense manufacturer, in Wayland, Mass.
We’d like to find out the reasons that prevent people in the UK from working as much as they’d like – whether it’s childcare, health issues, housing or travel
We’re keen to hear from people in the UK who would like to work, or to work more than they currently do, and to learn what prevents them from doing so.
Whether it is your health, childcare, travel or being unable to find housing, or anything else that stands in the way, we’d like to hear from you. It doesn’t matter whether you’re actively looking for work or not.
Kleiman plans to give a presentation next year about the programmers as part of the IEEE Industry Hub Initiative’s Impact Speaker series. The initiative aims to introduce industry professionals and academics to IEEE and its offerings.
Planning for the event, which is scheduled to be held in Silicon Valley, is underway. Details are to be announced before the end of the year.
The Institute spoke with Kleiman, who teaches Internet technology and governance for lawyers at American University, in Washington, D.C., about her mission to publicize the programmers’ contributions. The interview has been condensed and edited for clarity.
Kathy Kleiman delves into the ENIAC programmers’ lives and the pioneering work they did in her book Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer. Kathy Kleiman
What inspired you to film the documentary?
Kathy Kleiman: The ENIAC was a secret project of the U.S. Army during World War II. It was the first general-purpose, programmable, all-electronic computer—the key to the development of our smartphones, laptops, and tablets today. The ENIAC was a highly experimental computer, with 18,000 vacuum tubes, and some of the leading technologists at the time didn’t think it would work, but it did.
Six months after the war ended, the Army decided to reveal the existence of ENIAC and heavily publicize it. To do so, in February 1946 the Army took a lot of beautiful, formal photos of the computer and the team of engineers that developed it. I found these pictures while researching women in computer science as an undergraduate at Harvard. At the time, I knew of only two women in computer science: Ada Lovelace and then U.S. Navy Capt. Grace Hopper. [Lovelace was the first computer programmer; Hopper co-developed COBOL, one of the earliest standardized computer languages.] But I was sure there were more women programmers throughout history, so I went looking for them and found the images taken of the ENIAC.
The pictures fascinated me because they had both men and women in them. Some of the photos had just women in front of the computer, but they weren’t named in any of the photos’ captions. I tracked them down after I found their identities, and four of the six original ENIAC programmers responded. They were in their late 70s at the time, and over the course of many years they told me about their work during World War II and how they were recruited by the U.S. Army to be “human computers.”
Eckert and Mauchly promised the U.S. Army that the ENIAC could calculate artillery trajectories in seconds rather than the hours it took to do the calculations by hand. But after they built the 2.5-meter-tall by 24-meter-long computer, they couldn’t get it to work. Out of approximately 100 human computers working for the U.S. Army during World War II, six women were chosen to program the computer to solve the differential equations of a trajectory. It was hard because the program was complex, memory was very limited, and the direct programming interface that connected the programmers to the ENIAC was hard to use. But the women succeeded: the trajectory program was a great success. Yet Bartik, McNulty, Meltzer, Snyder, Spence, and Teitelbaum’s contributions to the technology were never recognized. Leading technologists and the public never knew of their work.
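To give a flavor of the work, a trajectory calculation of this kind amounts to stepping the equations of motion forward in small time increments. The short Python sketch below integrates projectile motion with a simple quadratic drag term using Euler steps; all parameter values are illustrative, and this is not a reconstruction of the actual ENIAC program.

```python
import math

def trajectory_range(v0, angle_deg, drag_coeff=0.0001, dt=0.01, g=9.81):
    """Integrate projectile motion with simple quadratic air drag using
    Euler steps -- the step-by-step differential-equation solving that
    the ENIAC trajectory program performed electronically.
    All values here are illustrative, not historical."""
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        # Drag decelerates the shell opposite to its velocity.
        ax = -drag_coeff * v * vx
        ay = -g - drag_coeff * v * vy
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
    return x  # horizontal range at impact, in meters

# With drag switched off, the shell travels farther.
print(trajectory_range(800, 45, drag_coeff=0.0) > trajectory_range(800, 45))
```

A human computer evaluated each of those time steps by hand with a desk calculator; the ENIAC ran through thousands of them in seconds.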
I was inspired by their story and wanted to share it. I raised funds, researched and recorded 20 hours of broadcast-quality oral histories with the ENIAC programmers—which eventually became the documentary. It allows others to see the women telling their story.
“If we open the doors to history, I think it would make it a lot easier to recruit the wonderful people we are trying to urge to enter engineering, computer science, and related fields.”
Why was the accomplishment of the six women important?
Kleiman: The ENIAC is considered by many to have launched the information age.
We generally think of women leaving the factory and farm jobs they held during World War II and giving them back to the men, but after ENIAC was completed, the six women continued to work for the U.S. Army. They helped world-class mathematicians program the ENIAC to complete “hundred-year problems” [problems that would take 100 years to solve by hand]. They also helped teach the next generation of ENIAC programmers, and some went on to create the foundations of modern programming.
What influenced you to continue telling the ENIAC programmers’ story in your book?
Kleiman: After my documentary premiered at the film festival, young women from tech companies who were in the audience came up to me to share why they were excited to learn the programmers’ story. They were excited to learn that women were an integral part of the history of early computing programming, and were inspired by their stories. Young men also came up to me and shared stories of their grandmothers and great-aunts who programmed computers in the 1960s and ’70s and inspired them to explore careers in computer science.
I met more women and men like the ones in Seattle all over the world, so it seemed like a good idea to tell the full story along with its historical context and background information about the lives of the ENIAC programmers, specifically what happened to them after the computer was completed.
What did you find most rewarding about sharing their story?
Kleiman: It was wonderful and rewarding to get to know the ENIAC programmers. They were incredible, wonderful, warm, brilliant, and exceptional people. Talking to the people who created the programming was inspiring and helped me to see that I could work at the cutting edge too. I entered Internet law as one of the first attorneys in the field because of them.
What I enjoy most is that the women’s experiences inspire young people today just as they inspired me when I was an undergraduate.
Clockwise from top left: Jean Bartik, Kathleen Antonelli, Betty Holberton, Ruth Teitelbaum, Marlyn Meltzer, Frances Spence. Photos, clockwise from top left: The Bartik Family; Bill Mauchly; Priscilla Holberton; Teitelbaum Family; Meltzer Family; Spence Family
Is it important to highlight the contributions made throughout history by women in STEM?
Kleiman: [Actor] Geena Davis founded the Geena Davis Institute on Gender in Media, which works collaboratively with the entertainment industry to dramatically increase the presence of female characters in media. It’s based on the philosophy of “you can’t be what you can’t see.”
That philosophy is both right and wrong. I think you can be what you can’t see, and certainly every pioneer who has ever broken a racial, ethnic, religious, or gender barrier has done so. However, it’s much easier to enter a field if there are role models who look like you. To that end, many computer scientists today are trying to diversify the field. Yet I know from my work in Internet policy and my recent travels across the country for my book tour that many students still feel locked out because of old stereotypes in computing and engineering. By sharing strong stories of pioneers in the fields who are women and people of color, I hope we can open the doors to computing and engineering. I hope that sharing this history, and herstory, makes it much easier to recruit young people to engineering, computer science, and related fields.
Are you planning on writing more books or producing another documentary?
Kleiman: I would like to continue the story of the ENIAC programmers and write about what happened to them after the war ended. I hope that my next book will delve into the 1950s and uncover more about the history of the Universal Automatic Computer, the first modern commercial computer series, and the diverse group of people who built and programmed it.
The vacuum-tube triode wasn’t quite 20 years old when physicists began trying to create its successor, and the stakes were huge. Not only had the triode made long-distance telephony and movie sound possible, it was driving the entire enterprise of commercial radio, an industry worth more than a billion dollars in 1929. But vacuum tubes were power-hungry and fragile. If a more rugged, reliable, and efficient alternative to the triode could be found, the rewards would be immense.
The goal was a three-terminal device made out of semiconductors that would accept a low-current signal into an input terminal and use it to control the flow of a larger current flowing between two other terminals, thereby amplifying the original signal. The underlying principle of such a device would be something called the field effect—the ability of electric fields to modulate the electrical conductivity of semiconductor materials. The field effect was already well known in those days, thanks to diodes and related research on semiconductors.
In the cutaway photo of a point-contact transistor, two thin conductors are visible; these connect to the points that make contact with a tiny slab of germanium. One of these points is the emitter and the other is the collector. A third contact, the base, is attached to the reverse side of the germanium. AT&T ARCHIVES AND HISTORY CENTER
But building such a device had proved an insurmountable challenge to some of the world’s top physicists for more than two decades. Patents for transistor-like devices had been filed starting in 1925, but the first recorded instance of a working transistor was the legendary point-contact device built at AT&T Bell Telephone Laboratories in the fall of 1947.
Though the point-contact transistor was arguably the most important invention of the 20th century, there exists, surprisingly, no clear, complete, and authoritative account of how the thing actually worked. Modern, more robust junction and planar transistors rely on the physics in the bulk of a semiconductor, rather than the surface effects exploited in the first transistor. And relatively little attention has been paid to this gap in scholarship.
It was an ungainly-looking assemblage of germanium, plastic, and gold foil, all topped by a squiggly spring. Its inventors were a soft-spoken Midwestern theoretician, John Bardeen, and a voluble and “somewhat volatile” experimentalist, Walter Brattain. Both were working under William Shockley, a relationship that would later prove contentious. In November 1947, Bardeen and Brattain were stymied by a simple problem. In the germanium semiconductor they were using, a surface layer of electrons seemed to be blocking an applied electric field, preventing it from penetrating the semiconductor and modulating the flow of current. No modulation, no signal amplification.
Sometime late in 1947 they hit on a solution. It featured two pieces of barely separated gold foil gently pushed by that squiggly spring into the surface of a small slab of germanium.
Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate. Indeed, the current edition of that bible of undergraduate EEs, The Art of Electronics by Horowitz and Hill, makes no mention of the point-contact transistor at all, glossing over its existence by erroneously stating that the junction transistor was a “Nobel Prize-winning invention in 1947.” But the transistor that was invented in 1947 was the point-contact; the junction transistor was invented by Shockley in 1948.
So it seems appropriate somehow that the most comprehensive explanation of the point-contact transistor is contained within John Bardeen’s lecture for that Nobel Prize, in 1956. Even so, reading it gives you the sense that a few fine details probably eluded even the inventors themselves. “A lot of people were confused by the point-contact transistor,” says Thomas Misa, former director of the Charles Babbage Institute for the History of Science and Technology, at the University of Minnesota.
A year after Bardeen’s lecture, R. D. Middlebrook, a professor of electrical engineering at Caltech who would go on to do pioneering work in power electronics, wrote: “Because of the three-dimensional nature of the device, theoretical analysis is difficult and the internal operation is, in fact, not yet completely understood.”
Nevertheless, and with the benefit of 75 years of semiconductor theory, here we go. The point-contact transistor was built around a thumb-size slab of n-type germanium, which has an excess of negatively charged electrons. This slab was treated to produce a very thin surface layer that was p-type, meaning it had an excess of positive charges. These positive charges are known as holes. They are actually localized deficiencies of electrons that move among the atoms of the semiconductor very much as a real particle would. An electrically grounded electrode was attached to the bottom of this slab, creating the base of the transistor. The two strips of gold foil touching the surface formed two more electrodes, known as the emitter and the collector.
That’s the setup. In operation, a small positive voltage—just a fraction of a volt—is applied to the emitter, while a much larger negative voltage—4 to 40 volts—is applied to the collector, all with reference to the grounded base. The interface between the p-type layer and the n-type slab created a junction just like the one found in a diode: Essentially, the junction is a barrier that allows current to flow easily in only one direction, toward lower voltage. So current could flow from the positive emitter across the barrier, while no current could flow across that barrier into the collector.
The Western Electric Type-2 point-contact transistor was the first transistor to be manufactured in large quantities, in 1951, at Western Electric’s plant in Allentown, Pa. By 1960, when this photo was taken, the plant had switched to producing junction transistors. AT&T ARCHIVES AND HISTORY CENTER
Now, let’s look at what happens down among the atoms. First, we’ll disconnect the collector and see what happens around the emitter without it. The emitter injects positive charges—holes—into the p-type layer, and they begin moving toward the base. But they don’t make a beeline toward it. The thin layer forces them to spread out laterally for some distance before passing through the barrier into the n-type slab. Think about slowly pouring a small amount of fine powder onto the surface of water. The powder eventually sinks, but first it spreads out in a rough circle.
Now we connect the collector. Even though it can’t draw current by itself through the barrier of the p-n junction, its large negative voltage and pointed shape do result in a concentrated electric field that penetrates the germanium. Because the collector is so close to the emitter, and is also negatively charged, it begins sucking up many of the holes that are spreading out from the emitter. This charge flow results in a concentration of holes near the p-n barrier underneath the collector. This concentration effectively lowers the “height” of the barrier that would otherwise prevent current from flowing between the collector and the base. With the barrier lowered, current starts flowing from the base into the collector—much more current than what the emitter is putting into the transistor.
The amount of current depends on the height of the barrier. Small decreases or increases in the emitter’s voltage cause the barrier to fluctuate up and down, respectively. Thus very small changes in the emitter current control very large changes at the collector, so voilà! Amplification. (EEs will notice that the functions of base and emitter are reversed compared with those in later transistors, where the base, not the emitter, controls the response of the transistor.)
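Why such small barrier changes produce such large current swings can be sketched with a toy calculation. In simple diode-style models, the current that makes it over a junction barrier falls off roughly as exp(−barrier/V_T), where V_T, the thermal voltage, is about 26 millivolts at room temperature. This is a schematic illustration of the exponential sensitivity, not a model of the actual point-contact device, and the numbers are illustrative.

```python
import math

VT = 0.02585  # thermal voltage kT/q at ~300 K, in volts

def barrier_current(barrier_v, i_scale=1e-6):
    """Toy diode-style model: current over a junction barrier falls off
    exponentially with barrier height. All values are illustrative."""
    return i_scale * math.exp(-barrier_v / VT)

# Lowering the barrier by just 60 mV multiplies the current roughly
# tenfold -- the exponential leverage behind the amplification.
gain = barrier_current(0.24) / barrier_current(0.30)
print(round(gain))
```

The exact numbers in a real device depend on geometry and materials, but the exponential dependence is what lets a fraction-of-a-volt emitter signal command a much larger collector current.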
Ungainly and fragile though it was, it was a semiconductor amplifier, and its progeny would change the world. And its inventors knew it. The fateful day was 16 December 1947, when Brattain hit on the idea of using a plastic triangle belted by a strip of gold foil, with that tiny slit separating the emitter and collector contacts. This configuration gave reliable power gain, and the duo knew then that they had succeeded. In his carpool home that night, Brattain told his companions he’d just done “the most important experiment that I’d ever do in my life” and swore them to secrecy. The taciturn Bardeen, too, couldn’t resist sharing the news. As his wife, Jane, prepared dinner that night, he reportedly said, simply, “We discovered something today.” With their children scampering around the kitchen, she responded, “That’s nice, dear.”
It was a transistor, at last, but it was pretty rickety. The inventors later hit on the idea of electrically forming the collector by passing large currents through it during the transistor’s manufacturing. This technique enabled them to get somewhat larger current flows that weren’t so tightly confined within the surface layer. The electrical forming was a bit hit-or-miss, though. “They would just throw out the ones that didn’t work,” Misa notes.
The Bell Labs group wasn’t alone in its successful pursuit of a transistor. In Aulnay-sous-Bois, a suburb northeast of Paris, two German physicists, Herbert Mataré and Heinrich Welker, were also trying to build a three-terminal semiconductor amplifier. Working for a French subsidiary of Westinghouse, they were following up on very intriguing observations Mataré had made while developing germanium and silicon rectifiers for the German military in 1944. The two succeeded in creating a reliable point-contact transistor in June 1948.
They were astounded, a week or so later, when Bell Labs finally revealed the news of its own transistor, at a press conference on 30 June 1948. Though they were developed completely independently, and in secret, the two devices were more or less identical.
Here the story of the transistor takes a weird turn, breathtaking in its brilliance and also disturbing in its details. Bardeen’s and Brattain’s boss, William Shockley, was furious that his name was not included with Bardeen’s and Brattain’s on the original patent application for the transistor. He was convinced that Bardeen and Brattain had merely spun his theories about using fields in semiconductors into their working device, and had failed to give him sufficient credit. Yet in 1945, Shockley had built a transistor based on those very theories, and it hadn’t worked.
In 1953, RCA engineer Gerald Herzog led a team that designed and built the first "all-transistor" television (although, yes, it had a cathode-ray tube). The team used point-contact transistors produced by RCA under a license from Bell Labs. TRANSISTOR MUSEUM JERRY HERZOG ORAL HISTORY
At the end of December, barely two weeks after the initial success of the point-contact transistor, Shockley traveled to Chicago for the annual meeting of the American Physical Society. On New Year’s Eve, holed up in his hotel room and fueled by a potent mix of jealousy and indignation, he began designing a transistor of his own. In three days he scribbled some 30 pages of notes. By the end of the month, he had the basic design for what would become known as the bipolar junction transistor, or BJT, which would eventually supersede the point-contact transistor and reign as the dominant transistor until the late 1970s.
With insights gleaned from the Bell Labs work, RCA began developing its own point-contact transistors in 1948. The group included the seven shown here—four of which were used in RCA's experimental, 22-transistor television set built in 1953. These four were the TA153 [top row, second from left], the TA165 [top, far right], the TA156 [bottom row, middle] and the TA172 [bottom, right]. TRANSISTOR MUSEUM JONATHAN HOPPE COLLECTION
The BJT was based on Shockley’s conviction that charges could, and should, flow through the bulk semiconductors rather than through a thin layer on their surface. The device consisted of three semiconductor layers, like a sandwich: an emitter, a base in the middle, and a collector. They were alternately doped, so there were two versions: n-type/p-type/n-type, called “NPN,” and p-type/n-type/p-type, called “PNP.”
The BJT relies on essentially the same principles as the point-contact, but it uses two p-n junctions instead of one. When used as an amplifier, a positive voltage applied to the base allows a small current to flow between it and the emitter, which in turn controls a large current between the collector and emitter.
Consider an NPN device. The base is p-type, so it has excess holes. But it is very thin and lightly doped, so there are relatively few holes. A tiny fraction of the electrons flowing in combine with these holes and are removed from circulation, while the vast majority (more than 97 percent) of electrons keep flowing through the thin base and into the collector, setting up a strong current flow.
But those few electrons that do combine with holes must be drained from the base in order to maintain the p-type nature of the base and the strong flow of current through it. That removal of the “trapped” electrons is accomplished by a relatively small flow of current through the base. That trickle of current enables the much stronger flow of current into the collector, and then out of the collector and into the collector circuit. So, in effect, the small base current is controlling the larger collector circuit.
Electric fields come into play, but they do not modulate the current flow, which the early theoreticians thought would have to happen for such a device to function. Here’s the gist: Both of the
p-n junctions in a BJT are straddled by depletion regions, in which electrons and holes combine and there are relatively few mobile charge carriers. Voltage applied across the junctions sets up electric fields at each, which push charges across those regions. These fields enable electrons to flow all the way from the emitter, across the base, and into the collector.
In the BJT, “the applied electric fields affect the carrier density, but because that effect is exponential, it only takes a little bit to create a lot of diffusion current,” explains Ioannis “John” Kymissis, chair of the department of electrical engineering at Columbia University.
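That exponential sensitivity can be illustrated with the textbook diode law for the base-emitter junction, I ≈ I_S·exp(V_BE/V_T). The saturation current below is an arbitrary assumed value for illustration; the thermal voltage is the standard room-temperature figure:

```python
import math

V_T = 0.02585   # thermal voltage kT/q at ~300 K, in volts
I_S = 1e-14     # saturation current in amps (assumed, device-dependent)

def diffusion_current(v_be):
    # Diffusion current grows exponentially with base-emitter voltage.
    return I_S * math.exp(v_be / V_T)

# "A little bit" of voltage creates "a lot" of current: about 18 mV
# (V_T * ln 2 ≈ 17.9 mV) is enough to roughly double the current.
i1 = diffusion_current(0.600)
i2 = diffusion_current(0.618)
print(round(i2 / i1, 2))  # → 2.01
```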
The very first transistors were a type known as point contact, because they relied on metal contacts touching the surface of a semiconductor. They ramped up output current—labeled “Collector current” in the top diagram—by using an applied voltage to overcome a barrier to charge flow. Small changes to the input, or “emitter,” current modulate this barrier, thus controlling the output current.
The bipolar junction transistor accomplishes amplification using much the same principles but with two semiconductor interfaces, or junctions, rather than one. As with the point-contact transistor, an applied voltage overcomes a barrier and enables current flow that is modulated by a smaller input current. In particular, the semiconductor junctions are straddled by depletion regions, across which the charge carriers diffuse under the influence of an electric field. Chris Philpot
The BJT was more rugged and reliable than the point-contact transistor, and those features primed it for greatness. But it took a while for that to become obvious. The BJT was the technology used to make integrated circuits, from the first ones in the early 1960s all the way until the late 1970s, when metal-oxide-semiconductor field-effect transistors (MOSFETs) took over. In fact, it was these field-effect transistors, first the junction field-effect transistor and then MOSFETs, that finally realized the decades-old dream of a three-terminal semiconductor device whose operation was based on the field effect—Shockley’s original ambition.
Such a glorious future could scarcely be imagined in the early 1950s, when AT&T and others were struggling to come up with practical and efficient ways to manufacture the new BJTs. Shockley himself went on to literally put the silicon into Silicon Valley. He moved to Palo Alto and in 1956 founded a company that led the switch from germanium to silicon as the electronic semiconductor of choice. Employees from his company would go on to found Fairchild Semiconductor, and then Intel.
Later in his life, after losing his company because of his terrible management, he became a professor at Stanford and began promulgating ungrounded and unhinged theories about race, genetics, and intelligence. In 1951 Bardeen left Bell Labs to become a professor at the University of Illinois at Urbana-Champaign, where he won a second Nobel Prize for physics, for a theory of superconductivity. (He is the only person to have won two Nobel Prizes in physics.) Brattain stayed at Bell Labs until 1967, when he joined the faculty at Whitman College, in Walla Walla, Wash.
Shockley died a largely friendless pariah in 1989. But his transistor would change the world, though it was still not clear as late as 1953 that the BJT would be the future. In an interview that year,
Donald G. Fink, who would go on to help establish the IEEE a decade later, mused, “Is it a pimpled adolescent, now awkward, but promising future vigor? Or has it arrived at maturity, full of languor, surrounded by disappointments?”
It was the former, and all of our lives are so much the better because of it.
This article appears in the December 2022 print issue as "The First Transistor and How It Worked."
Source: spectrum.ieee.org
‘Twas the day before launch and all across the globe, people await liftoff for Artemis I with hope.
NASA's Space Launch System (SLS) rocket and the Orion spacecraft with its European Service Module are seen here on Launch Pad 39B at NASA's Kennedy Space Center in Florida, USA, on 12 November.
After much anticipation, NASA launch authorities have given the GO for the first opportunity for launch: tomorrow, 16 November with a two-hour launch window starting at 07:04 CET (06:04 GMT, 1:04 local time).
Artemis I is the first mission in a large programme to sustainably send astronauts around the Moon and, eventually, onto its surface. This uncrewed first launch will see the Orion spacecraft travel to the Moon, enter an elongated orbit around our satellite and then return to Earth, powered by the European-built service module that supplies electricity, propulsion, fuel, water and air as well as keeping the spacecraft operating at the right temperature.
The European Service Modules are made from components supplied by over 20 companies in ten ESA Member States and the USA. As the first European Service Module sits atop the SLS rocket on the launchpad, the second is only 8 km away, being integrated with the Orion crew capsule for the first crewed mission – Artemis II. The third and fourth European Service Modules – which will power astronauts to a Moon landing – are in production in Bremen, Germany.
With a 16 November launch, the three-week Artemis I mission would end on 11 December with a splashdown in the Pacific Ocean. The European Service Module detaches from the Orion Crew Module before splashdown and burns up harmlessly in the atmosphere, its job complete after taking Orion to the Moon and back safely.
Backup Artemis I launch dates include 19 November. Check ESA’s Orion blog for updates and more details. Watch the launch live on ESA Web TV from 15 Nov, 20:30 GMT (21:30 CET) when the rocket fuelling starts, and from 16 November 00:00 GMT/01:00 CET for the launch coverage.
Source: www.esa.int
From the outside, there is little to tell a basic Ford XL ICE
F-150 from the electric Ford PRO F-150 Lightning. Exterior changes could pass for a typical model-year refresh. While there are LED headlight and rear-light improvements along with a more streamlined profile, the Lightning’s cargo box is identical to that of an ICE F-150, complete with tailgate access steps and a jobsite ruler. The Lightning’s interior also has a familiar feel.
But when you pop the Lightning's hood, you find that the internal combustion engine has gone missing. In its place is a front trunk ("frunk"), while concealed beneath is the new skateboard frame with its dual electric motors (one for each axle) and a big 98-kilowatt-hour standard (or 131-kWh extended-range) battery pack. The combination permits the Lightning to travel 230 miles (370 kilometers) without recharging and go from 0 to 60 miles per hour in 4.5 seconds, making it the fastest F-150 available despite its much greater weight.
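The quoted pack size and range imply the truck's energy consumption, a useful back-of-the-envelope check (this assumes the full usable pack is consumed over the rated range, a simplification):

```python
# Figures from the text: 98 kWh standard pack, 230 miles (370 km) of range.
pack_kwh = 98.0
range_miles = 230.0
range_km = 370.0

# Implied consumption, assuming the rated range uses the whole pack.
kwh_per_mile = pack_kwh / range_miles        # ≈ 0.43 kWh per mile
kwh_per_100km = 100 * pack_kwh / range_km    # ≈ 26.5 kWh per 100 km

print(round(kwh_per_mile, 2), round(kwh_per_100km, 1))  # → 0.43 26.5
```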
Invisible, too, are the Lightning’s sophisticated computing and software systems. The 2016 ICE F-150 reportedly had about
150 million lines of code. The Lightning’s software suite may even be larger than its ICE counterpart (Ford will not confirm this). The Lightning replaces the Ford F-150 ICE-related software in the electronic control units (ECUs) with new “intelligent” software and systems that control the main motors, manage the battery system, and provide charging information to the driver.
The EV Transition Explained
This is the first in a series of articles presenting just some of the technological and social challenges in moving from vehicles with internal-combustion engines to electric vehicles. These challenges must be addressed before the EV transition can happen at scale. Each entails a multitude of interacting systems, subsystems, sub-subsystems, and so on. In reviewing each article, readers should bear in mind Nobel Prize–winning physicist Richard Feynman's admonition: "For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled."
Ford says the Lightning's software will identify nearby public charging stations and tell drivers when to recharge. To increase the accuracy of the range calculation, the software will draw upon similar operational data communicated from other Lightning owners that Ford will dynamically capture, analyze, and feed back to the truck.
For executives, however, Lightning’s software is not only a big consumer draw but also among the biggest threats to its success. Ford CEO Jim Farley
told the New York Times that software bugs worry him most. To mitigate the risk, Ford has incorporated an over-the-air (OTA) software-update capability for both bug fixes and feature upgrades. Yet with an incorrect setting in the Lightning's tire pressure monitoring system requiring a software fix only a few weeks after its initial delivery, and with some new Ford Mustang Mach-Es recalled because of misconfigured software caused by a "service update or as an over-the-air update," Farley's worries probably won't be soothed for some time.
The F-150 Lightning's front trunk (also known as a frunk) helps this light-duty electric pickup haul even more.
However, long-term success is not guaranteed. “Ford is walking a tightrope, trying at the same time to convince everyone that EVs are the same as ICE vehicles yet different,” says
University of Michigan professor emeritus John Leslie King, who has long studied the auto industry. Ford and other automakers will need to convince tens of millions of customers to switch to EVs to meet the Biden Administration's decarbonization goal of having non-ICE vehicles make up 50 percent of new auto sales by 2030.
King points out that neither Ford nor other automakers can forever act like EVs are merely interchangeable with—but more ecofriendly than—their ICE counterparts. As EVs proliferate at scale, they operate in a vastly different technological, political, and social ecosystem than ICE vehicles. The core technologies and requisite expertise, supply-chain dependencies, and political alliances are different. The expectations of and about EV owners, and their agreement to change their lifestyles, also differ significantly.
Indeed, the challenges posed by the transition from ICE vehicles to EVs at scale are significantly larger in scope and more complex than the policymakers setting the regulatory timeline appreciate. The systems-engineering task alone is enormous, with countless interdependencies that are outside policymakers' control, and it rests on optimistic assumptions about promising technologies and wished-for changes in human behavior. The risk of getting it wrong, and the negative environmental and economic consequences that would result, are high. In this series, we will break down the myriad infrastructure, policy, and social challenges involved, drawing on discussions with numerous industry insiders and watchers. Let's take a look at some of the elemental challenges blocking the road ahead for EVs.
The soft car
For Ford and the other automakers that have shaped the ICE vehicle ecosystem for more than a century, ultimate success is beyond the reach of the traditional political, financial, and technological levers they once controlled.
Renault chief executive Luca de Meo, for example, is quoted in the Financial Times as saying that automakers must recognize that “the game has changed,” and they will “have to play by new rules” dictated by the likes of mining and energy companies.
One reason for the new rules, observes professor
Deepak Divan, the director of the Center for Distributed Energy at Georgia Tech, is that the EV transition is "a subset of the energy transition" away from fossil fuels. On the other hand, futurist Peter Schwartz contends that the entire electric system is part of the EV supply chain. These alternative framings highlight the strong codependencies involved. Consequently, automakers will be competing against not only other EV manufacturers but also numerous players involved in the energy transition aiming to grab the same scarce resources and talent.
“Ford is walking a tightrope, trying at the same time to convince everyone that EVs are the same as ICE vehicles yet different.” —John Leslie King
EVs represent a new class of cyberphysical systems that unify the physical with information technology, allowing them to sense, process, act, and communicate in real time within a large transportation ecosystem, as I have
noted in detail elsewhere. While computing in ICE vehicles typically optimizes a car’s performance at the time of sale, EV-based cyberphysical systems are designed to evolve as they are updated and upgraded, postponing their obsolescence.
“As an automotive company, we’ve been trained to put vehicles out when they’re perfect,” Ford’s Farley told the
New York Times. “But with software, you can change it with over-the-air updates.” This allows new features to be introduced in existing models instead of waiting for next year’s model to appear. Farley sees Ford spending much less effort on changing vehicles’ physical properties and devoting more to upgrading their software capabilities in the future.
Systems engineering for holistic solutions
EV success at scale depends as much, if not more, on political decisions as on technical ones. Government decision-makers in the United States at both the state and federal level, for instance, have created EV market incentives and set increasingly aggressive dates to sunset ICE vehicle sales, regardless of whether the technological infrastructure needed to support EVs at scale actually exists. While public policy can set a direction, it does not guarantee that engineering results will be available when needed.
“A systems-engineering approach towards managing the varied and often conflicting interests of the many stakeholders involved will be necessary to find a workable solution.” —Chris Paredis
With some $1.2 trillion committed through 2030 so far toward decarbonizing the planet, automakers are understandably wary not only of the fast reconfiguration of the auto industry but also of the concurrent changes required in the energy, telecom, mining, recycling, and transportation industries that must succeed for their investments to pay off.
The EV transition is part of an unprecedented, planetary-wide, cyberphysical systems-engineering project with massive potential benefits as well as costs. Considering the sheer magnitude, interconnectedness, and uncertainties presented by the concurrent technological, political, and social changes necessary, the EV transition will undoubtedly be messy.
This chart from the IEA's Global EV Outlook 2021 shows 2020 EV sales in the first column; in the second column, projected sales under current climate-mitigation policies; in the third column, projected sales under accelerated climate-mitigation policies.
How many stumbles and how long the transition will take depend on whether the multitude of challenges involved are fully recognized and realistically addressed.
“Everyone needs to stop thinking in silos. It is the adjacency interactions that are going to kill you.” —Deepak Divan
“A systems-engineering approach towards managing the varied and often conflicting interests of the many stakeholders involved will be necessary to find a workable solution,” says
Chris Paredis, the BMW Endowed Chair in Automotive Systems Integration at Clemson University. The range of engineering-infrastructure improvements needed to support EVs, for instance, “will need to be coordinated at a national/international level beyond what can be achieved by individual companies,” he states.
If the nitty-gritty but hard-to-solve issues are glossed over or ignored, or if EV expectations are hyped beyond the market's capability to deliver, no one should be surprised by a backlash against EVs, making the transition more difficult.
What has not yet been proven, but is widely assumed, is that battery electric vehicles (BEVs) can rapidly replace the majority of the current 1.3 billion-plus light-duty ICE vehicles. The interrelated challenges involving EV engineering infrastructure, policy, and societal acceptance, however, will test how well this assumption holds.
Therefore, the successful transition to EVs at scale demands a “holistic approach,” emphasizes Georgia Tech’s Deepak Divan. “Everyone needs to stop thinking in silos. It is the adjacency interactions that are going to kill you.”
“We cannot foresee all the details needed to make the EV transition successful,” John Leslie King says. “While there’s a reason to believe we will get there, there’s less reason to believe we know the way. It is going to be hard.”
In the next article in the series, we will look at the complexities introduced by trading our dependence on oil for our dependence on batteries.
Source: spectrum.ieee.org
Collective Mental Time Travel Can Influence the Future (Wed, 09 Nov 2022)
The way people imagine the past and future of society can sway attitudes and behaviors. How might this be wielded for good?
Source: www.wired.com
Collisions with birds are a serious problem for commercial aircraft, costing the industry billions of dollars and killing thousands of animals every year. New research shows that a robotic imitation of a peregrine falcon could be an effective way to keep them out of flight paths.
Worldwide, so-called birdstrikes are estimated to cost the civil aviation industry almost US $1.4 billion annually. Nearby habitats are often deliberately made unattractive to birds, but airports also rely on a variety of deterrents designed to scare them away, such as loud pyrotechnics or speakers that play distress calls from common species.
However, the effectiveness of these approaches tends to decrease over time, as the birds get desensitized by repeated exposure, says Charlotte Hemelrijk, a professor on the faculty of science and engineering at the University of Groningen, in the Netherlands. Live hawks or blinding lasers are also sometimes used to disperse flocks, she says, but this is controversial as it can harm the animals, and keeping and training falcons is not cheap.
“The birds don’t distinguish [RobotFalcon] from a real falcon, it seems.” —Charlotte Hemelrijk, University of Groningen
In an effort to find a more practical and lasting solution, Hemelrijk and colleagues designed a robotic peregrine falcon that can be used to chase flocks away from airports. The device is the same size and shape as a real hawk, and its fiberglass and carbon-fiber body has been painted to mimic the markings of its real-life counterpart.
Rather than flapping like a bird, the RobotFalcon relies on two small battery-powered propellers on its wings, which allows it to travel at around 30 miles per hour for up to 15 minutes at a time. A human operator controls the machine remotely from a hawk’s-eye perspective via a camera perched above the robot’s head.
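The quoted speed and endurance bound how much ground a single sortie can cover, a simple check:

```python
# Figures from the text: ~30 mph cruise, up to 15 minutes per flight.
speed_mph = 30.0
flight_minutes = 15.0

# Distance the RobotFalcon can cover in one battery charge.
miles_per_sortie = speed_mph * flight_minutes / 60.0   # 7.5 miles
km_per_sortie = miles_per_sortie * 1.609344            # ≈ 12.1 km

print(miles_per_sortie, round(km_per_sortie, 1))  # → 7.5 12.1
```

That is comfortably more than the length of a typical runway environment, which helps explain why the 15-minute flight time is workable for airport bird control.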
To see how effective the RobotFalcon was at scaring away birds, the researchers tested it against a conventional quadcopter drone over three months of field testing, near the Dutch city of Workum. They also compared their results to 15 years of data collected by the Royal Netherlands Air Force that assessed the effectiveness of conventional deterrence methods such as pyrotechnics and distress calls.
In a paper published in the Journal of the Royal Society Interface, the team showed that the RobotFalcon cleared fields of birds faster and more effectively than the drone. It also kept birds away from fields longer than distress calls, the most effective of the conventional approaches.
There was no evidence of birds getting habituated to the RobotFalcon over three months of testing, says Hemelrijk, and the researchers also found that the birds exhibited behavior patterns associated with escaping from predators much more frequently with the robot than with the drone. “The way of reacting to the RobotFalcon is very similar to the real falcon,” says Hemelrijk. “The birds don’t distinguish it from a real falcon, it seems.”
Other attempts to use hawk-imitating robots to disperse birds have had less promising results, though. Morgan Drabik-Hamshare, a research wildlife biologist at the U.S. Department of Agriculture, and her colleagues published a paper in Scientific Reports last year that described how they pitted a robotic peregrine falcon with flapping wings against a quadcopter and a fixed-wing remote-controlled aircraft.
They found the robotic falcon was the least effective of the three at scaring away turkey vultures, with the quadcopter scaring the most birds off and the remote-controlled plane eliciting the quickest response. “Despite the predator silhouette, the vultures did not perceive the predator UAS [unmanned aircraft system] as a threat,” Drabik-Hamshare wrote in an email.
Zihao Wang, an associate lecturer at the University of Sydney, in Australia, who develops UAS for bird deterrence, says the RobotFalcon does seem to be effective at dispersing flocks. But he points out that its wingspan is nearly twice the diagonal length of the quadcopter it was compared with, which means it creates a much larger silhouette when viewed from the birds’ perspective. This means the birds could be reacting more to its size than its shape, and he would like to see the RobotFalcon compared with a similar size drone in the future.
The unique design also means the robot requires an experienced and specially trained operator, Wang adds, which could make it difficult to roll out widely. A potential solution could be to make the system autonomous, he says, but it’s unclear how easy this would be.
Hemelrijk says automating the RobotFalcon is probably not feasible, both due to strict regulations around the use of autonomous drones near airports as well as the sheer technical complexity. Their current operator is a falconer with significant experience in how hawks target their prey, she says, and creating an autonomous system that could recognize and target bird flocks in a similar way would be highly challenging.
But while the need for skilled operators is a limitation, Hemelrijk points out that most airports already have full-time staff dedicated to bird deterrence, who could be trained. And given the apparent lack of habituation and the ability to chase birds in a specific direction—so that they head away from runways—she thinks the robotic falcon could be a useful addition to their arsenal.
Source: spectrum.ieee.org
From virtual showrooms to cutting-edge tech, the all-electric CUPRA Born is showing what the next generation of business travel looks like
Looking at a new company car online and checking one out in a showroom have, up until now, been two very separate experiences – neither of which is ideal. Sitting at home in front of your computer screen will allow you to spec a vehicle. You might be able to give it a 360-degree spin if the manufacturer's website features all the bells and whistles, but you won't really get much of a feel for your potential new car; and you'll have to go digging through the rest of the website to find answers to any specific questions you may have. Visiting a showroom, on the other hand, will get you up close and personal to the vehicle, but you have to physically get to the dealership in the first place.
In a best-of-both worlds approach, CUPRA is combining the website and showroom experiences into one single process. In the market for a new company car, for example the Born all-electric vehicle? Then visit the new CUPRA Virtual Showroom and you’ll be able to get a live tour of the car online – through your computer or phone – with a product expert showing you around the vehicle’s exterior and interior, taking you through its numerous features and answering all the questions you can think of. No waiting around, no wasted time: click the link, set up an appointment and a CUPRA agent will send you a message, connect you to an audio and video session, and you’re ready to go.
You can direct the agent through the car as you wish, and sessions can be as brief or as detailed as you need, lasting from just a few minutes to an hour. It’s totally up to you. And the experience itself is impressive. Being able to guide the agent around the car, essentially via a video call, allows you to see what you want to see of the vehicle in clear, close-up detail, as well as witnessing the interior tech being put to use in real time. In the modern hybrid working landscape, where Zoom calls are now the norm, the CUPRA Virtual Showroom has successfully plugged itself into the zeitgeist.
“It’s pretty innovative,” says Martin Gray, CUPRA’s UK contract hire and leasing manager. “We’ve had great reactions from customers so far. It really works for the Born, as the car is so different from others in its class. Because of the way it looks, and because of its technology and the way the dashboard is set up, people really want to get a good look at it. And in a climate where supply of actual physical vehicles has become a real issue, this gives more people the opportunity to see the Born up close and personal.”
Source: www.theguardian.com
As climate change edges from crisis to emergency, the aviation sector looks set to miss its 2050 goal of net-zero emissions. In the five years preceding the pandemic, the top four U.S. airlines—American, Delta, Southwest, and United—saw a 15 percent increase in the use of jet fuel. Despite continual improvements in engine efficiencies, that number is projected to keep rising.
A glimmer of hope, however, comes from solar fuels. For the first time, scientists and engineers at the Swiss Federal Institute of Technology (ETH) in Zurich have reported a successful demonstration of an integrated fuel-production plant for solar kerosene. Using concentrated solar energy, they were able to produce kerosene from water vapor and carbon dioxide directly from air. Fuel thus produced is a drop-in alternative to fossil-derived fuels and can be used with existing storage and distribution infrastructures, and engines.
Fuels derived from synthesis gas (or syngas)—an intermediate product that is a specific mixture of carbon monoxide and hydrogen—are a known alternative to conventional, fossil-derived fuels. Liquid fuels are produced from syngas by Fischer-Tropsch (FT) synthesis, in which chemical reactions convert the carbon monoxide and hydrogen into hydrocarbons. The team of researchers at ETH found that a solar-driven thermochemical method to split water and carbon dioxide using a metal oxide redox cycle can produce renewable syngas. They demonstrated the process in a rooftop solar refinery at the ETH Machine Laboratory in 2019.
Reticulated porous structure made of ceria used in the solar reactor to thermochemically split CO2 and H2O and produce syngas, a specific mixture of H2 and CO. ETH Zurich
The current pilot-scale solar tower plant was set up at the IMDEA Energy Institute in Spain. It scales up the solar reactor of the 2019 experiment by a factor of 10, says Aldo Steinfeld, an engineering professor at ETH who led the study. The fuel plant brings together three subsystems—the solar tower concentrating facility, solar reactor, and gas-to-liquid unit.
First, a heliostat field made of mirrors that rotate to follow the sun concentrates solar irradiation into a reactor mounted on top of the tower. The reactor is a cavity receiver lined with reticulated porous ceramic structures made of ceria (or cerium(IV) oxide). Within the reactor, the concentrated sunlight creates a high-temperature environment of about 1,500 °C, hot enough to split carbon dioxide and water captured from the atmosphere and produce syngas. Finally, the syngas is processed to kerosene in the gas-to-liquid unit. A centralized control room operates the whole system.
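The chemistry at work in the reactor is the widely studied two-step ceria redox cycle; in simplified, nonstoichiometric notation (δ denotes the oxygen deficiency of the reduced ceria) it can be written as:

```latex
% Step 1: endothermic reduction at ~1{,}500\,^{\circ}\mathrm{C}, driven by concentrated sunlight
\mathrm{CeO_2 \;\longrightarrow\; CeO_{2-\delta} + \tfrac{\delta}{2}\,O_2}

% Step 2: oxidation at lower temperature, splitting water and CO2 into syngas
\mathrm{CeO_{2-\delta} + \delta\,H_2O \;\longrightarrow\; CeO_2 + \delta\,H_2}
\mathrm{CeO_{2-\delta} + \delta\,CO_2 \;\longrightarrow\; CeO_2 + \delta\,CO}
```

The ceria is regenerated at the end of each cycle, which is why the authors can report stable operation over multiple consecutive cycles.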
Fuel produced using this method closes the fuel carbon cycle as it only produces as much carbon dioxide as has gone into its manufacture. “The present pilot fuel plant is still a demonstration facility for research purposes,” says Steinfeld, “but it is a fully integrated plant and uses a solar-tower configuration at a scale that is relevant for industrial implementation.”
“The solar reactor produced syngas with selectivity, purity, and quality suitable for FT synthesis,” the authors noted in their paper. They also reported good material stability for multiple consecutive cycles. They observed a value of 4.1 percent solar-to-syngas energy efficiency, which Steinfeld says is a record value for thermochemical fuel production, even though better efficiencies are required to make the technology economically competitive.
A heliostat field concentrates solar radiation onto a solar reactor mounted on top of the solar tower. The solar reactor cosplits water and carbon dioxide and produces a mixture of molecular hydrogen and carbon monoxide, which in turn is processed to drop-in fuels such as kerosene. ETH Zurich
“The measured value of energy conversion efficiency was obtained without any implementation of heat recovery,” he says. The heat rejected during the redox cycle of the reactor accounted for more than 50 percent of the solar-energy input. “This fraction can be partially recovered via thermocline heat storage. Thermodynamic analyses indicate that sensible heat recovery could potentially boost the energy efficiency to values exceeding 20 percent.”
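The efficiency figures translate directly into solar-energy requirements per unit of fuel, which shows why heat recovery matters so much. A back-of-the-envelope sketch using only the numbers quoted above:

```python
# Reported solar-to-syngas efficiency of the pilot plant, without heat recovery.
eta_measured = 0.041

# Solar energy needed per unit of chemical energy in the syngas.
solar_per_fuel = 1.0 / eta_measured        # ≈ 24.4 kWh of sunlight per kWh of fuel

# Thermodynamic analyses cited in the text project efficiencies above 20
# percent with sensible-heat recovery.
eta_projected = 0.20
solar_per_fuel_projected = 1.0 / eta_projected  # 5.0 kWh per kWh of fuel

print(round(solar_per_fuel, 1), solar_per_fuel_projected)  # → 24.4 5.0
```

Cutting the solar input per unit of fuel by roughly a factor of five is what would move the economics toward competitiveness.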
To do so, more work is needed to optimize the ceramic structures lining the reactor, something the ETH team is actively working on, by looking at 3D-printed structures for improved volumetric radiative absorption. “In addition, alternative material compositions, that is, perovskites or aluminates, may yield improved redox capacity, and consequently higher specific fuel output per mass of redox material,” Steinfeld adds.
The next challenge for the researchers, he says, is the scale-up of their technology for higher solar-radiative power inputs, possibly using an array of solar cavity-receiver modules on top of the solar tower.
To bring solar kerosene into the market, Steinfeld envisages a quota-based system. "Airlines and airports would be required to have a minimum share of sustainable aviation fuels in the total volume of jet fuel that they put in their aircraft," he says. This is possible as solar kerosene can be mixed with fossil-based kerosene. This would start out small, as little as 1 or 2 percent, which would raise the total fuel costs at first, though minimally—adding "only a few euros to the cost of a typical flight," as Steinfeld puts it.
Meanwhile, rising quotas would lead to investment, and to falling costs, eventually replacing fossil-derived kerosene with solar kerosene. “By the time solar jet fuel reaches 10 to 15 percent of the total jet-fuel volume, we ought to see the costs for solar kerosene nearing those of fossil-derived kerosene,” he adds.
However, we may not have to wait too long for flights to operate solely on solar fuel. A commercial spin-off of Steinfeld’s laboratory, Synhelion, is working on commissioning the first industrial-scale solar fuel plant in 2023. The company has also collaborated with the airline SWISS to conduct a flight solely using its solar kerosene.
Source: spectrum.ieee.org
Quantum signals may possess a number of advantages over regular forms of communication, leading scientists to wonder if humanity was not alone in discovering such benefits. Now a new study suggests that, for hypothetical extraterrestrial civilizations, quantum transmissions using X-rays may be possible across interstellar distances.
Quantum communication relies on a quantum phenomenon known as entanglement. Essentially, two or more particles such as photons that get “linked” via entanglement can, in theory, influence each other instantly no matter how far apart they are.
Entanglement is essential to quantum teleportation, in which data can essentially disappear in one place and reappear someplace else. Since this information does not travel across the intervening space, there is no chance it will be lost along the way.
To accomplish quantum teleportation, one would first entangle two photons. Then, one of the photons—the one to be teleported—is kept at one location while the other is beamed to whatever destination is desired.
Next, the quantum state of the photon at the destination—which defines its key characteristics—is analyzed, an act that also destroys that state. Entanglement ensures that the destination photon proves identical to its partner. For all intents and purposes, the photon at the origin point “teleported” to the destination point—no physical matter moved, but the two photons are physically indistinguishable.
And to be clear, quantum teleportation cannot send information faster than the speed of light, because the destination photon must still be transmitted via conventional means.
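The protocol described above can be simulated exactly with small state vectors. The sketch below is a minimal NumPy model of standard quantum teleportation (not code from the study): a Bell pair is shared, the sender performs a Bell measurement, and the two classical measurement bits—which must travel conventionally, hence no faster-than-light signaling—tell the receiver which correction to apply.

```python
import numpy as np

# One-qubit gates
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def apply(gate, qubit, state, n=3):
    """Apply a 1-qubit gate to `qubit` of an n-qubit state vector."""
    ops = [I] * n
    ops[qubit] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

def cnot(control, target, state, n=3):
    """Apply a CNOT by permuting basis-state amplitudes."""
    out = np.zeros_like(state)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = int("".join(map(str, bits)), 2)
        out[j] = state[idx]
    return out

def teleport(a, b, rng=np.random.default_rng(0)):
    """Teleport the state (a, b) from qubit 0 to qubit 2 via a Bell pair on qubits 1 and 2."""
    state = np.kron(np.array([a, b], dtype=complex),
                    np.array([1, 0, 0, 0], dtype=complex))
    state = apply(H, 1, state)      # entangle qubits 1 and 2
    state = cnot(1, 2, state)       # into a Bell pair
    state = cnot(0, 1, state)       # sender's Bell measurement begins
    state = apply(H, 0, state)
    probs = np.abs(state) ** 2
    idx = rng.choice(8, p=probs / probs.sum())
    m0, m1 = (idx >> 2) & 1, (idx >> 1) & 1   # two classical bits
    # Collapse onto the measured outcome; qubit 2 remains
    kept = np.zeros(2, dtype=complex)
    for i in range(8):
        if (i >> 2) & 1 == m0 and (i >> 1) & 1 == m1:
            kept[i & 1] = state[i]
    kept /= np.linalg.norm(kept)
    # Receiver applies corrections based on the classically transmitted bits
    if m1:
        kept = X @ kept
    if m0:
        kept = Z @ kept
    return kept

print(np.round(teleport(0.6, 0.8), 6))  # qubit 2 now holds the original (0.6, 0.8) state
```

Whatever the random measurement outcome, the corrections recover the original amplitudes exactly, while the original qubit's state is destroyed by the measurement, just as the article describes.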
One weakness of quantum communication is that entanglement is fragile. Still, researchers have successfully transmitted entangled photons that remained stable or “coherent” enough for quantum teleportation across distances as great as 1,400 kilometers.
“If photons in Earth’s atmosphere don’t decohere to 100 km, then in interstellar space where the medium is much less dense than our atmosphere, photons won’t decohere up to even the size of the galaxy,” Berera says.
In the new study, the researchers investigated whether and how well quantum communication might survive interstellar distances. Quantum signals might face disruption from a number of factors, such as the gravitational pull of interstellar bodies, they note.
The scientists discovered the best quantum communication channels for interstellar messages are X-rays. Such frequencies are easier to focus and detect across interstellar distances. (NASA has tested deep-space X-ray communication with its XCOM experiment.) The researchers also found that the optical and microwave bands could enable communication across large distances as well, albeit less effectively than X-rays.
Although coherence might survive interstellar distances, Berera does note quantum signals might lose fidelity. “This means the quantum state is sustained, but it can have a phase shift, so although the quantum information is preserved in these states, it has been altered by the effect of gravity.” Therefore, it may “take some work at the receiving end to account for these phase shifts and be able to assess the information contained in the original state.”
Why might an interstellar civilization transmit quantum signals as opposed to regular ones? The researchers note that quantum communication may allow greater data compression and, in some cases, exponentially faster speeds than classical channels. Such a boost in efficiency might prove very useful for civilizations separated by interstellar distances.
“It could be that quantum communication is the main communication mode in an extraterrestrial's world, so they just apply what is at hand to send signals into the cosmos,” Berera says.
The scientists detailed their findings online 28 June in the journal Physical Review D.
Match ID: 18 Score: 5.00 source: spectrum.ieee.org age: 132 days qualifiers: 5.00 travel(|ing)
When the James Webb Space Telescope (JWST) reveals its first images on 12 July, they will be the by-product of carefully crafted mirrors and scientific instruments. But all of its data-collecting prowess would be moot without the spacecraft’s communications subsystem.
The Webb’s comms aren’t flashy. Rather, the data and communication systems are designed to be incredibly, unquestionably dependable and reliable. And while some aspects of them are relatively new—it’s the first mission to use Ka-band frequencies for such high data rates so far from Earth, for example—above all else, JWST’s comms provide the foundation upon which JWST’s scientific endeavors sit.
As previous articles in this series have noted, JWST is parked at Lagrange point L2. It’s a point of gravitational equilibrium located about 1.5 million kilometers beyond Earth on a straight line between the planet and the sun. It’s an ideal location for JWST to observe the universe without obstruction and with minimal orbital adjustments.
Being so far away from Earth, however, means that data has farther to travel to make it back in one piece. It also means the communications subsystem needs to be reliable, because the prospect of a repair mission being sent to address a problem is, for the near term at least, highly unlikely. Given the cost and time involved, says Michael Menzel, the mission systems engineer for JWST, “I would not encourage a rendezvous and servicing mission unless something went wildly wrong.”
According to Menzel, who has worked on JWST in some capacity for over 20 years, the plan has always been to use well-understood Ka-band frequencies for the bulky transmissions of scientific data. Specifically, JWST is transmitting data back to Earth on a 25.9-gigahertz channel at up to 28 megabits per second. The Ka-band is a portion of the broader K-band (another portion, the Ku-band, was also considered).
The Lagrange points are equilibrium locations where competing gravitational tugs on an object net out to zero. JWST is one of three craft currently occupying L2 (Shown here at an exaggerated distance from Earth). IEEE Spectrum
Both the data-collection and transmission rates of JWST dwarf those of the older Hubble Space Telescope. Compared to Hubble, which is still active and generates 1 to 2 gigabytes of data daily, JWST can produce up to 57 GB each day (although that amount depends on what observations are scheduled).
Menzel says he first saw the frequency selection proposals for JWST around 2000, when he was working at Northrop Grumman. He became the mission systems engineer in 2004. “I knew where the risks were in this mission. And I wanted to make sure that we didn’t get any new risks,” he says.
Ka-band frequencies can transmit more data than X-band (7 to 11.2 GHz) or S-band (2 to 4 GHz), common choices for craft in deep space. A high data rate is a necessity for the scientific work JWST will be undertaking. In addition, according to Carl Hansen, a flight systems engineer at the Space Telescope Science Institute (the science operations center for JWST), a comparable X-band antenna would be so large that the spacecraft would have trouble remaining steady for imaging.
Although the 25.9-GHz Ka-band frequency is the telescope’s workhorse communication channel, it also employs two channels in the S-band. One is the 2.09-GHz uplink that ferries future transmission and scientific observation schedules to the telescope at 16 kilobits per second. The other is the 2.27-GHz, 40-kb/s downlink over which the telescope transmits engineering data—including its operational status, systems health, and other information concerning the telescope’s day-to-day activities.
Any scientific data the JWST collects during its lifetime will need to be stored on board, because the spacecraft doesn’t maintain round-the-clock contact with Earth. Data gathered from its scientific instruments is stored within the spacecraft’s 68-GB solid-state drive (3 percent of which is reserved for engineering and telemetry data).
Alex Hunter, also a flight systems engineer at the Space Telescope Science Institute, says that by the end of JWST’s 10-year mission life, they expect to be down to about 60 GB because of deep-space radiation and wear and tear.
The onboard storage is enough to collect data for about 24 hours before it runs out of room. Well before that becomes an issue, JWST will have scheduled opportunities to beam that invaluable data to Earth.
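The article's storage and link numbers can be sanity-checked with a little arithmetic. The sketch below uses decimal units (1 GB = 10⁹ bytes) as an assumption; the result lands near the article's "about 24 hours" figure, with the difference attributable to unit conventions and operational margins:

```python
# Back-of-the-envelope check on JWST's storage and downlink figures from
# the article: 57 GB produced per day, a 68-GB SSD with 3 percent reserved
# for engineering data, and a 28-Mb/s Ka-band downlink.
# Assumes decimal units (1 GB = 1e9 bytes), which is an interpretation,
# not something the article states.

DAILY_GB = 57
SSD_GB = 68 * 0.97   # usable science capacity after the 3 percent reserve
DOWNLINK_MBPS = 28   # megabits per second on the 25.9-GHz channel

hours_to_fill = SSD_GB / DAILY_GB * 24
downlink_hours = DAILY_GB * 1e9 * 8 / (DOWNLINK_MBPS * 1e6) / 3600

print(f"Storage fills in roughly {hours_to_fill:.0f} hours at peak production")
print(f"One day of data takes about {downlink_hours:.1f} hours to downlink")
```

At the full 28 Mb/s, a day's worth of observations clears the drive in under five hours of contact time, which is why scheduled Deep Space Network windows suffice and round-the-clock contact isn't needed.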
Sandy Kwan, a DSN systems engineer, says that contact windows with spacecraft are scheduled 12 to 20 weeks in advance. JWST had a greater number of scheduled contact windows during its commissioning phase, as instruments were brought on line, checked, and calibrated. Most of that process required real-time communication with Earth.
All of the communications channels use the Reed-Solomon error-correction protocol—the same error-correction standard used in DVDs and Blu-ray discs as well as QR codes. The lower-data-rate S-band channels use binary phase-shift keying—modulation that conveys data by shifting the phase of a signal’s carrier wave. The Ka-band channel, however, uses quadrature phase-shift keying. Quadrature phase-shift keying can double a channel’s data rate, at the cost of more complicated transmitters and receivers.
JWST’s communications with Earth incorporate an acknowledgement protocol—only after the JWST gets confirmation that a file has been successfully received will it go ahead and delete its copy of the data to clear up space.
The communications subsystem was assembled along with the rest of the spacecraft bus by
Northrop Grumman, using off-the-shelf components sourced from multiple manufacturers.
JWST has had a long and often-delayed development, but its communications system has always been a bedrock for the rest of the project. Keeping at least one system dependable means it’s one less thing to worry about. Menzel can remember, for instance, ideas for laser-based optical systems that were invariably rejected. “I can count at least two times where I had been approached by people who wanted to experiment with optical communications,” says Menzel. “Each time they came to me, I sent them away with the old ‘Thank you, but I don’t need it. And I don’t want it.’”
Match ID: 19 Score: 5.00 source: spectrum.ieee.org age: 142 days qualifiers: 5.00 travel(|ing)
In the latest push for nuclear power in space, the Pentagon’s Defense Innovation Unit (DIU) awarded a contract in May to Seattle-based Ultra Safe Nuclear to advance its nuclear power and propulsion concepts. The company is making a soccer ball–size radioisotope battery it calls EmberCore. The DIU’s goal is to launch the technology into space for demonstration in 2027.
Ultra Safe Nuclear’s system is intended to be lightweight, scalable, and usable as both a propulsion source and a power source. It will be specifically designed to give small-to-medium-size military spacecraft the ability to maneuver nimbly in the space between Earth orbit and the moon. The DIU effort is part of the U.S. military’s recently announced plans to develop a surveillance network in cislunar space.
Besides speedy space maneuvers, the DIU wants to power sensors and communication systems without having to worry about solar panels pointing in the right direction or batteries having enough charge to work at night, says Adam Schilffarth, director of strategy at Ultra Safe Nuclear. “Right now, if you are trying to take radar imagery in Ukraine through cloudy skies,” he says, “current platforms can only take a very short image because they draw so much power.”
Radioisotope power sources are well suited for small, uncrewed spacecraft, adds Christopher Morrison, who is leading EmberCore’s development. Such sources rely on the radioactive decay of an element that produces energy, as opposed to nuclear fission, which involves splitting atomic nuclei in a controlled chain reaction to release energy. Heat produced by radioactive decay is converted into electricity using thermoelectric devices.
Radioisotopes have provided heat and electricity for spacecraft since 1961. The Curiosity and Perseverance rovers on Mars, and deep-space missions including Cassini, New Horizons, and Voyager all use radioisotope batteries that rely on the decay of plutonium-238, which is nonfissile—unlike plutonium-239, which is used in weapons and power reactors.
For EmberCore, Ultra Safe Nuclear has instead turned to medical isotopes such as cobalt-60 that are easier and cheaper to produce. The materials start out inert, and have to be charged with neutrons to become radioactive. The company encapsulates the material in a proprietary ceramic for safety.
Cobalt-60 has a half-life of five years (compared to plutonium-238’s 90 years), which is enough for the cislunar missions that the DOD and NASA are looking at, Morrison says. He says that EmberCore should be able to provide 10 times as much power as a plutonium-238 system, providing over 1 million kilowatt-hours of energy using just a few pounds of fuel. “This is a technology that is in many ways commercially viable and potentially more scalable than plutonium-238,” he says.
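The half-life trade-off can be made concrete with the exponential-decay law N(t) = N₀ · 2^(−t/T½). The sketch below uses the commonly cited physical half-lives (5.27 years for cobalt-60, 87.7 years for plutonium-238; the article rounds these to 5 and 90):

```python
# Decay sketch comparing cobalt-60 with plutonium-238. A radioisotope
# source's thermal output falls in proportion to the remaining activity,
# N(t) = N0 * 2**(-t / half_life).

def fraction_remaining(years, half_life_years):
    """Fraction of the original activity left after `years`."""
    return 2 ** (-years / half_life_years)

CO60_HALF_LIFE = 5.27   # years (article rounds to 5)
PU238_HALF_LIFE = 87.7  # years (article rounds to 90)

for t in (5, 10):
    co60 = fraction_remaining(t, CO60_HALF_LIFE)
    pu238 = fraction_remaining(t, PU238_HALF_LIFE)
    print(f"after {t} years: Co-60 at {co60:.0%}, Pu-238 at {pu238:.0%}")
```

This is the design tension Morrison describes: cobalt-60 packs far more power into a few pounds of fuel, but roughly half of it is gone after five years, which is fine for a cislunar mission and unacceptable for a decades-long deep-space probe.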
One downside of the medical isotopes is that they can produce high-energy X-rays in addition to heat. So Ultra Safe Nuclear wraps the fuel with a radiation-absorbing metal shield. But in the future, the EmberCore system could be designed for scientists to use the X-rays for experiments. “They buy this heater and get an X-ray source for free,” says Schilffarth. “We’ve talked with scientists who right now have to haul pieces of lunar or Martian regolith up to their sensor because the X-ray source is so weak. Now we’re talking about a spotlight that could shine down to do science from a distance.”
Ultra Safe Nuclear’s contract is one of two awarded by the DIU—which aims to speed up the deployment of commercial technology through military use—to develop nuclear power and propulsion for spacecraft. The other contract was awarded to Avalanche Energy, which is making a lunchbox-size fusion device it calls an Orbitron. The device will use electrostatic fields to trap high-speed ions in slowly changing orbits around a negatively charged cathode. Collisions between the ions can result in fusion reactions that produce energetic particles.
Both companies will use nuclear energy to power high-efficiency electric propulsion systems. Electric propulsion technologies such as ion thrusters, which use electromagnetic fields to accelerate ions and generate thrust, are more efficient than chemical rockets, which burn fuel. Solar panels typically power the ion thrusters that satellites use today to change their position and orientation. Schilffarth says that the higher power from EmberCore should enable a velocity change of 10 kilometers per second in orbit, more than today’s electric propulsion systems can deliver.
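The Tsiolkovsky rocket equation shows why 10 km/s of delta-v favors electric propulsion. The specific-impulse values below are typical assumed figures, not numbers from the article:

```python
# Rough rocket-equation check on a 10 km/s velocity change. The specific
# impulses are assumed, representative values (not from the article):
# a modern ion thruster versus a high-end chemical engine.
import math

G0 = 9.81  # standard gravity, m/s^2

def propellant_fraction(delta_v, isp_seconds):
    """Fraction of initial spacecraft mass that must be propellant,
    from the Tsiolkovsky rocket equation: dv = g0 * Isp * ln(m0/mf)."""
    return 1 - math.exp(-delta_v / (G0 * isp_seconds))

ION_ISP = 2000       # seconds, assumed ion-thruster specific impulse
CHEMICAL_ISP = 450   # seconds, assumed high-end chemical engine

print(f"ion thruster: {propellant_fraction(10_000, ION_ISP):.0%} of launch mass is propellant")
print(f"chemical:     {propellant_fraction(10_000, CHEMICAL_ISP):.0%} of launch mass is propellant")
```

Under these assumptions, an ion-propelled craft spends roughly 40 percent of its mass on propellant to achieve 10 km/s, while a chemical rocket would need about 90 percent, which is why a compact nuclear power source paired with electric thrusters is attractive for nimble cislunar maneuvering.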
Ultra Safe Nuclear is also one of three companies developing nuclear fission thermal propulsion systems for NASA and the Department of Energy. Meanwhile, the Defense Advanced Research Projects Agency (DARPA) is seeking companies to develop a fission-based nuclear thermal rocket engine, with demonstrations expected in 2026.
This article appears in the August 2022 print issue as “Spacecraft to Run on Radioactive Decay.”
Match ID: 20 Score: 5.00 source: spectrum.ieee.org age: 171 days qualifiers: 5.00 travel(|ing)