In 2010, politicians pledged to halt devastation of Earth’s wildlife. Since then, no progress has been made. And despite glimmers of hope, prospects look grim for next month’s top-level meeting in Canada
In 2010, politicians and scientists made a pledge to halt the devastating reductions in wildlife numbers that had been denuding the planet of its animals and sea creatures for the previous century. At that time, wild animal populations were declining by about 2.5% a year on average as habitat loss, invasive species, pollution, climate change and disease ravaged habitats and lives. Such losses must end within a decade, it was agreed.
Next month, conservationists and politicians will meet in Montreal for this year’s biodiversity summit where they will judge what progress has been made over the past 12 years. “It will be an easy assessment to make,” said Andrew Terry, the director of conservation at ZSL, the Zoological Society of London. “Absolutely no progress has been made. Populations have continued to decline at a rate of around 2.5% a year. We haven’t slowed the destruction in the slightest. Our planet’s biodiversity is now in desperate peril as a result.”
Head of Brazil’s electoral court rejects claim from outgoing president’s coalition that voting machines malfunctioned
The head of Brazil’s electoral court has rejected an attempt by outgoing president Jair Bolsonaro’s party to overturn the results of October’s run-off election, which he lost.
Alexandre de Moraes, a supreme court justice, also fined the parties in Bolsonaro’s coalition 22.9m reais ($4.3m) for what the court described as bad faith litigation.
Georgia Supreme Court reinstates six-week abortion ban Wed, 23 Nov 2022 15:59:18 EST The ban had been overturned one week earlier by a Fulton County judge who ruled it "unconstitutional."
Democrats press for assault weapons ban, other gun laws after new mass shootings Sun, 27 Nov 2022 18:03:21 EST “The idea we still allow semi-automatic weapons to be purchased is sick. Just sick,” President Biden said after recent mass shootings at an LGBTQ club in Colorado Springs and a Walmart in Chesapeake, Va.
Chinese leader will see widespread demonstrations against zero-Covid policy as threat to CCP’s authority
Just five weeks after being elected to a historic third term, President Xi Jinping suddenly faces cracks in the facade of unchallenged authority that he so successfully presented to the world at the 20th national congress of the Chinese Communist party.
Supreme Court clears way for Trump tax returns to go to Congress Wed, 23 Nov 2022 11:18:52 EST Lawmakers say they need Donald Trump’s tax returns from his time in office to help evaluate the effectiveness of annual presidential audits -- a premise Trump rejects.
Cowen analysts on Wednesday said the possibility of the first U.S. rail strike since 1991 is currently at about 30% after statements earlier this week from union members. "Channel checks suggest that customers are already pulling freight off the rails as strike risk rises," analyst Jason Seidl said in a research note. Back in September, when a potential work stoppage had loomed ahead of the midterm elections, Seidl projected a roughly 15% chance of a rail strike. While Congress appears motivated to intervene if a strike takes place, Seidl said he's seeing "clear stubbornness from both sides that is likely increasing animosity," that strike sentiment appears to be growing, and that shippers are taking action. The president of the Association of American Railroads said Monday, "The window continues to narrow as deadlines rapidly approach," and that the companies are ready to reach new agreements with unions. The association's members work at Warren Buffett's BNSF, Union Pacific Corp. and Norfolk Southern.
Biden has appointed many judges but hasn’t recast the bench like Trump Mon, 21 Nov 2022 10:46:39 EST By keeping their Senate majority, Democrats can keep confirming judges. But thanks to the GOP’s 2015-2016 blockade, the makeup of the courts hasn’t shifted as substantially.
Mice, windows, icons, and menus: these are the ingredients of computer interfaces designed to be easy to grasp, simplicity itself to use, and straightforward to describe. The mouse is a pointer. Windows divide up the screen. Icons symbolize application programs and data. Menus list choices of action.
But the development of today’s graphical user interface was anything but simple. It took some 30 years of effort by engineers and computer scientists in universities, government laboratories, and corporate research groups, piggybacking on each other’s work, trying new ideas, repeating each other’s mistakes.
This article was first published as “Of Mice and menus: designing the user-friendly interface.” It appeared in the September 1989 issue of IEEE Spectrum. A PDF version is available on IEEE Xplore. The photographs and diagrams appeared in the original print version.
Throughout the 1970s and early 1980s, many of the early concepts for windows, menus, icons, and mice were arduously researched at Xerox Corp.’s Palo Alto Research Center (PARC), Palo Alto, Calif. In 1973, PARC developed the prototype Alto, the first of two computers that would prove seminal in this area. More than 1200 Altos were built and tested. From the Alto’s concepts, starting in 1975, Xerox’s System Development Department then developed the Star and introduced it in 1981—the first such user-friendly machine sold to the public.
In 1984, the low-cost Macintosh from Apple Computer Inc., Cupertino, Calif., brought the friendly interface to thousands of personal computer users. During the next five years, the price of RAM chips fell enough to accommodate the huge memory demands of bit-mapped graphics, and the Mac was followed by dozens of similar interfaces for PCs and workstations of all kinds. By now, application programmers are becoming familiar with the idea of manipulating graphic objects.
The Mac’s success during the 1980s spurred Apple Computer to pursue legal action over ownership of many features of the graphical user interface. Suits now being litigated could assign those innovations not to the designers and their companies, but to those who first filed for legal protection on them.
The GUI started with Sketchpad
The grandfather of the graphical user interface was Sketchpad [see photograph]. Massachusetts Institute of Technology student Ivan E. Sutherland built it in 1962 as a Ph.D. thesis at MIT’s Lincoln Laboratory in Lexington, Mass. Sketchpad users could not only draw points, line segments, and circular arcs on a cathode ray tube (CRT) with a light pen—they could also assign constraints to, and relationships among, whatever they drew.
Arcs could have a specified diameter, lines could be horizontal or vertical, and figures could be built up from combinations of elements and shapes. Figures could be moved, copied, shrunk, expanded, and rotated, with their constraints (shown as onscreen icons) dynamically preserved. At a time when a CRT monitor was a novelty in itself, the idea that users could interactively create objects by drawing on a computer was revolutionary.
Moreover, to zoom in on objects, Sutherland wrote the first window-drawing program, which required him to come up with the first clipping algorithm. Clipping is a software routine that calculates which part of a graphic object is to be displayed and displays only that part on the screen. The program must calculate where a line is to be drawn, compare that position to the coordinates of the window in use, and prevent the display of any line segment whose coordinates fall outside the window.
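In modern terms, the calculation a clipping routine performs can be sketched as follows. This is a later, textbook formulation (a Cohen-Sutherland-style outcode clipper), not Sketchpad's actual algorithm, and the window coordinates are illustrative:

```python
# A Cohen-Sutherland-style clipper: classify each endpoint with an
# "outcode" recording which window edges it lies beyond, then trivially
# accept, trivially reject, or chop the segment at an edge and repeat.
# The window is given as (xmin, ymin, xmax, ymax).
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, win):
    xmin, ymin, xmax, ymax = win
    code = INSIDE
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def clip_segment(x0, y0, x1, y1, win):
    """Return the visible part of a segment, or None if fully outside."""
    xmin, ymin, xmax, ymax = win
    c0, c1 = outcode(x0, y0, win), outcode(x1, y1, win)
    while True:
        if not (c0 | c1):        # both endpoints inside: draw as-is
            return (x0, y0, x1, y1)
        if c0 & c1:              # both beyond the same edge: draw nothing
            return None
        c = c0 or c1             # pick an endpoint that is outside
        if c & TOP:
            x, y = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0), ymax
        elif c & BOTTOM:
            x, y = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0), ymin
        elif c & RIGHT:
            x, y = xmax, y0 + (y1 - y0) * (xmax - x0) / (x1 - x0)
        else:                    # LEFT
            x, y = xmin, y0 + (y1 - y0) * (xmin - x0) / (x1 - x0)
        if c == c0:
            x0, y0, c0 = x, y, outcode(x, y, win)
        else:
            x1, y1, c1 = x, y, outcode(x, y, win)
```

A segment entering the window from the left, for instance, is cut at the left edge and only the inside portion is returned for display.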
Though films of Sketchpad in operation were widely shown in the computer research community, Sutherland says today that there was little immediate fallout from the project. Running on MIT’s TX-2 mainframe, it demanded too much computing power to be practical for individual use. Many other engineers, however, see Sketchpad’s design and algorithms as a primary influence on an entire generation of research into user interfaces.
The origin of the computer mouse
The light pens used to select areas of the screen by interactive computer systems of the 1950s and 1960s—including Sketchpad—had drawbacks. To do the pointing, the user’s arm had to be lifted up from the table, and after a while that got tiring. Picking up the pen required fumbling around on the table or, if it had a holder, taking the time after making a selection to put it back.
Sensing an object with a light pen was straightforward: the computer displayed spots of light on the screen and interrogated the pen as to whether it sensed a spot, so the program always knew just what was being displayed. Locating the position of the pen on the screen required more sophisticated techniques—like displaying a cross pattern of nine points on the screen, then moving the cross until it centered on the light pen.
In 1964, Douglas Engelbart, a research project leader at SRI International in Menlo Park, Calif., tested all the commercially available pointing devices, from the still-popular light pen to a joystick and a Graphicon (a curve-tracing device that used a pen mounted on the arm of a potentiometer). But he felt the selection failed to cover the full spectrum of possible pointing devices, and somehow he should fill in the blanks.
Then he remembered a 1940s college class he had taken that covered the use of a planimeter to calculate area. (A planimeter has two arms, with a wheel on each. The wheels can roll only along their axes; when one of them rolls, the other must slide.)
If a potentiometer were attached to each wheel to monitor its rotation, he thought, a planimeter could be used as a pointing device. Engelbart explained his roughly sketched idea to engineer William English, who with the help of the SRI machine shop built what they quickly dubbed “the mouse.”
This first mouse was big because it used single-turn potentiometers: one rotation of the wheels had to be scaled to move a cursor from one side of the screen to the other. But it was simple to interface with the computer: the processor just read frequent samples of the potentiometer positioning signals through analog-to-digital converters.
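A rough sketch of that sampling scheme might look like the following; the 10-bit converter and the screen dimensions are assumptions for illustration, not specifications of the SRI hardware:

```python
# Hypothetical sketch of the first mouse's position readout: the
# processor samples each single-turn potentiometer through an
# analog-to-digital converter and scales one full turn to the full
# screen axis, so the reading directly gives the cursor position.
ADC_MAX = 1023                    # assumed 10-bit converter
SCREEN_W, SCREEN_H = 1024, 808    # illustrative screen size

def cursor_position(sample_x, sample_y):
    """Map raw ADC readings from the two potentiometers to a pixel."""
    x = sample_x * (SCREEN_W - 1) // ADC_MAX
    y = sample_y * (SCREEN_H - 1) // ADC_MAX
    return x, y
```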
The cursor moved by the mouse was easy to locate, since readings from the potentiometers determined the position of the cursor on the screen, unlike the light pen. But programmers for later windowing systems found that the software necessary to determine which object the mouse had selected was more complex than that for the light pen: they had to compare the mouse’s position with that of all the objects displayed onscreen.
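That comparison is easy to sketch; the display-list layout and names below are invented for illustration, not taken from any historical system:

```python
# Sketch of mouse hit-testing: search the display list front to back
# for the first object whose bounding box contains the cursor.
def pick(objects, mx, my):
    """Return the topmost object under the cursor, or None."""
    for obj in reversed(objects):          # last drawn = frontmost
        x, y, w, h = obj["bbox"]
        if x <= mx < x + w and y <= my < y + h:
            return obj["name"]
    return None

display_list = [
    {"name": "window", "bbox": (0, 0, 200, 150)},
    {"name": "icon",   "bbox": (50, 40, 16, 16)},   # drawn on top
]
```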
The computer mouse gets redesigned—and redesigned again
Engelbart’s group at SRI ran controlled experiments with mice and other pointing devices, and the mouse won hands down. People adapted to it quickly, it was easy to grab, and it stayed where they put it. Still, Engelbart wanted to tinker with it. After experimenting, his group had concluded that the proper ratio of cursor movement to mouse movement was about 2:1, but he wanted to try varying that ratio—decreasing it at slow speeds and raising it at fast speeds—to improve user control of fine movements and speed up larger movements. Some modern mouse-control software incorporates this idea, including that of the Macintosh.
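The variable-ratio idea can be sketched in a few lines. The 2:1 base gain comes from the text above; the fast gain and the speed threshold here are invented for illustration:

```python
# Sketch of a variable cursor/mouse ratio: a low gain for slow, fine
# movements and a higher gain for fast sweeps across the screen.
def cursor_delta(mouse_delta, slow_gain=2.0, fast_gain=4.0, threshold=10):
    """Scale one sampled mouse movement into a cursor movement."""
    gain = slow_gain if abs(mouse_delta) < threshold else fast_gain
    return mouse_delta * gain
```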
The mouse, still experimental at this stage, did not change until 1971. Several members of Engelbart’s group had moved to the newly established PARC, where many other researchers had seen the SRI mouse and the test report. They decided there was no need to repeat the tests; any experimental systems they designed would use mice.
Said English, “This was my second chance to build a mouse; it was obvious that it should be a lot smaller, and that it should be digital.” Chuck Thacker, then a member of the research staff, advised PARC to hire inventor Jack Hawley to build it.
Hawley decided the mouse should use shaft encoders, which measure position by a series of pulses, instead of potentiometers (both were covered in Engelbart’s 1970 patent), to eliminate the expensive analog-to-digital converters. The basic principle, of one wheel rolling while the other slid, was licensed from SRI.
The ball mouse was the “easiest patent I ever got. It took me five minutes to think of, half an hour to describe to the attorney, and I was done.” —Ron Rider
In 1972, the mouse changed again. Ron Rider, now vice president of systems architecture at PARC but then a new arrival, said he was using the wheel mouse while an engineer made excuses for its asymmetric operation (one wheel dragging while one turned). “I suggested that they turn a trackball upside down, make it small, and use it as a mouse instead,” Rider told IEEE Spectrum. This device came to be known as the ball mouse. “Easiest patent I ever got,” Rider said. “It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”
Defining terms

Bit map
The pixel pattern that makes up the graphic display on a computer screen.

Clicking
The motion of pressing a mouse button to initiate an action by software; some actions require double-clicking.

Graphical user interface (GUI)
The combination of windowing displays, menus, icons, and a mouse that is increasingly used on personal computers and workstations.

Icon
An onscreen drawing that represents programs or data.

Menu
A list of command options currently available to the computer user; some stay onscreen, while pop-up or pull-down menus are requested by the user.

Mouse
A device whose motion across a desktop or other surface causes an on-screen cursor to move commensurately; today’s mice move on a ball and have one, two, or three buttons.

Raster display
A cathode ray tube on which images are displayed as patterns of dots, scanned onto the screen sequentially in a predetermined pattern of lines.

Vector display
A cathode ray tube whose gun scans lines, or vectors, onto the screen phosphor.

Window
An area of a computer display, usually one of several, in which a particular program is executing.
In the PARC ball mouse design, the weight of the mouse is transferred to the ball by a swivel device and to one or two casters at the end of the mouse farthest from the wire “tail.” A prototype was built by Xerox’s Electronics Division in El Segundo, Calif., then redesigned by Hawley. The rolling ball turned two perpendicular shafts, with a drum on the end of each that was coated with alternating stripes of conductive and nonconductive material. As the drum turned, the stripes transmitted electrical impulses through metal wipers.
When Apple Computer decided in 1979 to design a mouse for its Lisa computer, the design mutated yet again. Instead of a metal ball held against the substrate by a swivel, Apple used a rubber ball whose traction depended on the friction of the rubber and the weight of the ball itself. Simple pads on the bottom of the case carried the weight, and optical scanners detected the motion of the internal wheels. The device had loose tolerances and few moving parts, so that it cost perhaps a quarter as much to build as previous ball mice.
How the computer mouse gained and lost buttons
The first, wooden, SRI mouse had only one button, to test the concept. The plastic batch of SRI mice had three side-by-side buttons, all there was room for, Engelbart said. The first PARC mouse had a column of three buttons, again because that best fit the mechanical design. Today, the Apple mouse has one button, while the rest have two or three. The issue is no longer space (a standard 6-by-10-cm mouse could now have dozens of buttons) but human factors, and the experts have strong opinions.
Said English, now director of internationalization at Sun Microsystems Inc., Mountain View, Calif.: “Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.” He sees two buttons as the minimum because two functions are basic to selecting an object: pointing to its start, then extending the motion to the end of the object.
William Verplank, a human factors specialist in the group that tested the graphical interface at Xerox from 1978 into the early 1980s, concurred. He told Spectrum that with three buttons, Alto users forgot which button did what. The group’s tests showed that one button was also confusing, because it required actions such as double-clicking to select and then open a file.
“We have agonizing videos of naive users struggling” with these problems, Verplank said. They concluded that for most users, two buttons (as used on the Star) are optimal, if a button means the same thing in every application. English experimented with one-button mice at PARC before concluding they were a bad idea.
“Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.” —William English
But many interface designers dislike multiple buttons, saying that double-clicking a single button to select an item is easier than remembering which button points and which extends. Larry Tesler, formerly a computer scientist at PARC, brought the one-button mouse to Apple, where he is now vice president of advanced technology. The company’s rationale is that, to attract novices to its computers, one button was as simple as it could get.
More than two million one-button Apple mice are now in use. The Xerox and Microsoft two-button mice are less common than either Apple’s ubiquitous one-button model or the three-button mice found on technical workstations. Dozens of companies manufacture mice today; most are slightly smaller than a pack of cigarettes, with minor variations in shape.
How windows first came to the computer screen
In 1962, Sketchpad could split its screen horizontally into two independent sections. One section could, for example, give a close-up view of the object in the other section. Researchers call Sketchpad the first example of tiled windows, which are laid out side by side. They differ from overlapping windows, which can be stacked on top of each other, or overlaid, obscuring all or part of the lower layers.
Windows were an obvious means of adding functionality to a small screen. In 1969, Engelbart equipped NLS (as the On-Line System he invented at SRI during the 1960s was known, to distinguish it from the Off-Line System known as FLS) with windows. They split the screen into multiple parts horizontally or vertically, and introduced cross-window editing with a mouse.
By 1972, led by researcher Alan Kay, the Smalltalk programming language group at Xerox PARC had implemented their version of windows. They were working with far different technology from Sutherland or Engelbart: by deciding that their images had to be displayed as dots on the screen, they led a move from vector to raster displays, to make it simple to map the assigned memory location of each of those spots. This was the bit map invented at PARC, and made viable during the 1980s by continual performance improvements in processor logic and memory speed.
Experimenting with bit-map manipulation, Smalltalk researcher Dan Ingalls developed the bit-block transfer procedure, known as BitBlt. The BitBlt software enabled application programs to mix and manipulate rectangular arrays of pixel values in on-screen or off-screen memory, or between the two, combining the pixel values and storing the result in the appropriate bit-map location.
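A toy version of the operation conveys the idea. Real BitBlt worked on packed words in the frame buffer; the lists of lists and the Python callable used here stand in purely for clarity:

```python
# Toy BitBlt: copy a w-by-h rectangle of pixel values from a source
# bit map into a destination, combining the pixels with an operation
# (plain copy by default; XOR, AND, and OR were other common choices).
def bitblt(src, sx, sy, dst, dx, dy, w, h, op=lambda s, d: s):
    for row in range(h):
        for col in range(w):
            dst[dy + row][dx + col] = op(src[sy + row][sx + col],
                                         dst[dy + row][dx + col])

src = [[1] * 4 for _ in range(4)]   # a 4x4 block of "on" pixels
dst = [[0] * 8 for _ in range(8)]   # an empty 8x8 destination
bitblt(src, 0, 0, dst, 2, 2, 4, 4)  # plain copy into position (2, 2)
```

Scrolling, resizing, and dragging all reduce to such rectangle moves, which is why one fast primitive mattered so much.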
BitBlt made it much easier to write programs to scroll a window (move an image through it), resize (enlarge or contract) it, and drag windows (move them from one location to another on screen). It led Kay to create overlapping windows. They were soon implemented by the Smalltalk group, but made clipping harder.
Some researchers question whether overlapping windows offer more benefits than tiled on the grounds that screens with overlapping windows become so messy the user gets lost.
In a tiling system, explained researcher Peter Deutsch, who worked with the Smalltalk group, the clipping borders are simply horizontal or vertical lines from one screen border to another, and software just tracks the location of those lines. But overlapping windows may appear anywhere on the screen, randomly obscuring bits and pieces of other windows, so that quite irregular regions must be clipped. Thus application software must constantly track which portions of their windows remain visible.
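The bookkeeping Deutsch describes reduces to rectangle subtraction: removing the part of a window covered by one occluder leaves at most four visible rectangles. A minimal sketch, with an invented (x, y, width, height) convention:

```python
# Rectangle subtraction for overlapping windows: split the visible
# remainder of `rect` into strips above, below, left of, and right of
# its overlap with the occluding rectangle `occ`.
def subtract(rect, occ):
    x, y, w, h = rect
    ox, oy, ow, oh = occ
    ix0, iy0 = max(x, ox), max(y, oy)                  # overlap corners
    ix1, iy1 = min(x + w, ox + ow), min(y + h, oy + oh)
    if ix0 >= ix1 or iy0 >= iy1:
        return [rect]                                  # no overlap: all visible
    out = []
    if iy0 > y:                                        # strip above the overlap
        out.append((x, y, w, iy0 - y))
    if iy1 < y + h:                                    # strip below
        out.append((x, iy1, w, y + h - iy1))
    if ix0 > x:                                        # strip to the left
        out.append((x, iy0, ix0 - x, iy1 - iy0))
    if ix1 < x + w:                                    # strip to the right
        out.append((ix1, iy0, x + w - ix1, iy1 - iy0))
    return out
```

Applying this repeatedly, once per overlapping window, yields the irregular visible region that the application must redraw.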
Some researchers still question whether overlapping windows offer more benefits than tiled, at least above a certain screen size, on the grounds that screens with overlapping windows become so messy the user gets lost. Others argue that overlapping windows more closely match users’ work patterns, since no one arranges the papers on their physical desktop in neat horizontal and vertical rows. Among software engineers, however, overlapping windows seem to have won for the user interface world.
So has the cut-and-paste editing model that Larry Tesler developed, first for the Gypsy text editor he wrote at PARC and later for Apple. Charles Irby—who worked on Xerox’s windows and is now vice president of development at Metaphor Computer Systems Inc., Mountain View, Calif.—noted, however, that cut-and-paste worked better for pure text-editing than for moving graphic objects from one application to another.
The origin of the computer menu bar
Menus—functions continuously listed onscreen that could be called into action with key combinations—were commonly used in defense computing by the 1960s. But it was only with the advent of BitBlt and windows that menus could be made to appear as needed and to disappear after use. Combined with a pointing device to indicate a user’s selection, they are now an integral part of the user-friendly interface: users no longer need to refer to manuals or memorize available options.
Instead, the choices can be called up at a moment’s notice whenever needed. And menu design has evolved. Some new systems use nested hierarchies of menus; others offer different menu versions—one with the most commonly used commands for novices, another with all available commands for the experienced user.
Among the first to test menus on demand was PARC researcher William Newman, in a program called Markup. Hard on his heels, the Smalltalk group built in pop-up menus that appeared on screen at the cursor site when the user pressed one of the mouse buttons.
Implementation was on the whole straightforward, recalled Deutsch. The one exception was determining whether the menu or the application should keep track of the information temporarily obscured by the menu. In the Smalltalk 76 version, the popup menu saved and restored the screen bits it overwrote. But in today’s multitasking systems, that would not work, because an application may change those bits without the menu’s knowledge. Such systems add another layer to the operating system: a display manager that tracks what is written where.
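The Smalltalk-76 tactic can be sketched as a save-and-restore over the menu’s rectangle. Representing the frame buffer as a dict of (x, y) pixels is an invention for illustration, not a real display format:

```python
# Save-and-restore pop-up, in the spirit of Smalltalk-76: record the
# bits the menu will cover, draw over them, and put them back when
# the menu is dismissed.
def pop_up(frame, rect, menu_value=9):
    x, y, w, h = rect
    saved = {(i, j): frame.get((i, j), 0)
             for i in range(x, x + w) for j in range(y, y + h)}
    for key in saved:                 # "draw" the menu over the area
        frame[key] = menu_value
    return saved

def dismiss(frame, saved):
    frame.update(saved)               # restore the obscured bits

frame = {(0, 0): 1, (1, 1): 2}
saved = pop_up(frame, (0, 0, 2, 2))
while_menu_up = frame[(0, 0)]         # menu pixels while it is showing
dismiss(frame, saved)
```

The scheme fails exactly as described in the text: if another task repaints the saved area while the menu is up, the restore writes back stale bits, which is why a display manager took over the job.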
The production Xerox Star, in 1981, featured a further advance: a menu bar, essentially a row of words indicating available menus that could be popped up for each window. Human factors engineer Verplank recalled that the bar was at first located at the bottom of its window. But the Star team found users were more likely to associate a bar with the window below it, so it was moved to the top of its window.
Apple simplified things in its Lisa and Macintosh with a single bar placed at the top of the screen. This menu bar relates only to the window in use: the menus could be ‘‘pulled down” from the bar, to appear below it. Designer William D. Atkinson received a patent (assigned to Apple Computer) in August 1984 for this innovation.
One new addition that most user interface pioneers consider an advantage is the tear-off menu, which the user can move to a convenient spot on the screen and “pin” there, always visible for ready access.
Many windowing interfaces now offer command-key or keyboard alternatives for many commands as well. This return to the earliest of user interfaces—key combinations—neatly supplements menus, providing ease of use for novices and the less experienced, and speed for those who can type faster than they can point to a menu and click on a selection.
How the computer “icon” got its name
Sketchpad had on-screen graphic objects that represented constraints (for example, a rule that lines be the same length), and the Flex machine built in 1967 at the University of Utah by students Alan Kay and Ed Cheadle had squares that represented programs and data (like today’s computer “folders”). Early work on icons was also done by Bell Northern Research, Ottawa, Canada, stemming from efforts to replace the recently legislated bilingual signs with graphic symbols.
But the concept of the computer “icon” was not formalized until 1975. David Canfield Smith, a computer science graduate student at Stanford University in California, began work on his Ph.D. thesis in 1973. His advisor was PARC’s Kay, who suggested that he look at using the graphics power of the experimental Alto not just to display text, but rather to help people program.
David Canfield Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents.
Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents: a Russian icon of a saint is holy and is to be venerated. Smith’s computer icons contained all the properties of the programs and data represented, and therefore could be linked or acted on as if they were the real thing.
After receiving his Ph.D. in 1975, Smith joined Xerox in 1976 to work on Star development. The first thing he did, he said, was to recast his concept of icons in office terms. “I looked around my office and saw papers, folders, file cabinets, a telephone, and bookshelves, and it was an easy translation to icons,” he said.
Xerox researchers developed, tested, and revised icons for the Star interface for three years before the first version was complete. At first they attempted to make the icons look like a detailed photographic rendering of the object, recalled Irby, who worked on testing and refining the Xerox windows. Trading off label space, legibility, and the number of icons that fit on the screen, they decided to constrain icons to a 1-inch (2.5-centimeter) square of 64 by 64 pixels, or 512 eight-bit bytes.
Then, Verplank recalls, they discovered that because of a background pattern based on two-pixel dots, the right-hand side of the icons appeared jagged. So they increased the width of the icons to 65 pixels, despite an outcry from programmers who liked the neat 16-bit breakdown. But the increase stuck, Verplank said, because they had already decided to store 72 bits per side to allow for white space around each icon.
After settling on a size for the icons, the Star developers tested four sets developed by two graphic designers and two software engineers. They discovered that, for example, resizing may cause problems. They shrank the icon for a person—a head and shoulders—in order to use several of them to represent a group, only to hear one test subject say the screen resolution made the reduced icon look like a cross above a tombstone. Computer graphics artist Norm Cox, now of Cox & Hall, Dallas, Texas, was finally hired to redesign the icons.
Icon designers today still wrestle with the need to make icons adaptable to the many different system configurations offered by computer makers. Artist Karen Elliott, who has designed icons for Microsoft, Apple, Hewlett-Packard Co., and others, noted that on different systems an icon may be displayed in different colors, several resolutions, and a variety of gray shades, and it may also be inverted (light and dark areas reversed).
In the past few years, another concern has been added to icon designers’ tasks: internationalization. Icons designed in the United States often lack space for translations into languages other than English. Elliott therefore tries to leave space for both the longer words and the vertical orientation of some languages.
The main rule is to make icons simple, clean, and easily recognizable. Discarded objects are placed in a trash can on the Macintosh. On the NeXT Computer System, from NeXT Inc., Palo Alto, Calif., the company formed by Apple cofounder Steven Jobs after he left Apple, they are dumped into a Black Hole. Elliott sees NeXT’s black hole as one of the best icons ever designed: “It is distinct; its roundness stands out from the other, square icons, and this is important on a crowded display. It fits my image of information being sucked away, and it makes it clear that dumping something is serious.”
English disagrees vehemently. The black hole “is fundamentally wrong,” he said. “You can dig paper out of a wastebasket, but you can’t dig it out of a black hole.” Another critic called the black hole familiar only to “computer nerds who read mostly science fiction and comics,” not to general users.
With the introduction of the Xerox Star in June 1981, the graphical user interface, as it is known today, arrived on the market. Though not a commercial triumph, the Star generated great interest among computer users, as the Alto before it had within the universe of computer designers.
Even before the Star was introduced, Jobs, then still at Apple, had visited Xerox PARC in November 1979 and asked the Smalltalk researchers dozens of questions about the Alto’s internal design. He later recruited Larry Tesler from Xerox to design the user interface of the Apple Lisa.
With the Lisa and then the Macintosh, introduced in January 1983 and January 1984 respectively, the graphical user interface reached the low-cost, high-volume computer market.
At almost $10,000, the Lisa was deemed too expensive for the office market. But aided by prizewinning advertising and its lower price, the Macintosh took the world by storm. Early Macs had only 128K bytes of RAM, which made them slow to respond because it was too little memory for heavy graphic manipulation. Also, the time needed for programmers to learn its Toolbox of graphics routines delayed application packages until well into 1985. But the Mac’s ease of use was indisputable, and it generated interest that spilled over into the MS-DOS world of IBM PCs and clones, as well as Unix-based workstations.
Who owns the graphical user interface?
The widespread acceptance of such interfaces, however, has led to bitter lawsuits to establish exactly who owns what. So far, none of several litigious companies has definitively established that it owns the software that implements windows, icons, or early versions of menus. But the suits continue.
Virtually all the companies that make and sell either wheel or ball mice paid license fees to SRI or to Xerox for their patents. Engelbart recalled that SRI patent attorneys inspected all the early work on the interface, but understood only hardware. After looking at developments like the implementation of windows, they told him that none of it was patentable.
At Xerox, the Star development team proposed 12 patents having to do with the user interface. The company’s patent committee rejected all but two on hardware—one on BitBlt, the other on the Star architecture. At the time, Charles Irby said, it was a good decision. Patenting required full disclosure, and no precedents then existed for winning software patent suits.
The most recent and most publicized suit was filed by Apple in March 1988 against both Microsoft and Hewlett-Packard Co., Palo Alto, Calif. Apple alleges that HP’s New Wave interface, which requires version 2.03 of Microsoft’s Windows program, embodies the copyrighted “audio visual computer display” of the Macintosh without permission; that the displays of Windows 2.03 are illegal copies of the Mac’s audiovisual works; and that Windows 2.03 also exceeds the rights granted in a November 1985 agreement in which Microsoft acknowledged that the displays in Windows 1.0 were derivatives of those in Apple’s Lisa and Mac.
In March 1989, U.S. District Judge William W. Schwarzer ruled Microsoft had exceeded the bounds of its license in creating Windows 2.03. Then in July 1989 Schwarzer ruled that all but 11 of the 260 items that Apple cited in its suit were, in fact, acceptable under the 1985 agreement. The larger issue—whether Apple’s copyrights are valid, and whether Microsoft and HP infringed on them—will not now be examined until 1990.
Among those 11 are overlapping windows and movable icons. According to Pamela Samuelson, a noted software intellectual property expert and visiting professor at Emory University Law School, Atlanta, Ga., many experts would regard both as functional features of an interface that cannot be copyrighted, rather than “expressions” of an idea protectable by copyright.
But lawyers for Apple—and for other companies that have filed lawsuits to protect the “look and feel’’ of their screen displays—maintain that if such protection is not granted, companies will lose the economic incentive to market technological innovations. How is Apple to protect its investment in developing the Lisa and Macintosh, they argue, if it cannot license its innovations to companies that want to take advantage of them?
If the Apple-Microsoft case does go to trial on the copyright issues, Samuelson said, the court may have to consider whether Apple can assert copyright protection for overlapping windows, an interface feature on which patents have also been granted. In April 1989, for example, Quarterdeck Office Systems Inc., Santa Monica, Calif., received a patent for a multiple windowing system in its Desq system software, introduced in 1984.
Adding fuel to the legal fire, Xerox said in May 1989 it would ask for license fees from companies that use the graphical user interface. But it is unclear whether Xerox has an adequate claim to either copyright or patent protection for the early graphical interface work done at PARC. Xerox did obtain design patents on later icons, noted human factors engineer Verplank. Meanwhile, both Metaphor and Sun Microsystems have negotiated licenses with Xerox for their own interfaces.
To Probe Further
The September 1989 IEEE Computer contains an article, “The Xerox ‘Star’: A Retrospective,” by Jeff Johnson et al., covering development of the Star. “Designing the Star User Interface” [PDF], by David C. Smith et al., appeared in the April 1982 issue of Byte.
The Sept. 12, 1989, PC Magazine contains six articles on graphical user interfaces for personal computers and workstations. The July 1989 Byte includes “A Guide to Graphical User Interfaces,” by Frank Hayes and Nick Baran, which describes 12 current interfaces for workstations and personal computers. “The Interface of Tomorrow, Today,” by Howard Rheingold, in the July 10, 1989, InfoWorld does the same. “The interface that launched a thousand imitations,” by Richard Rawles, in the March 21, 1989, MacWeek covers the Macintosh interface.
The human factors of user interface design are discussed in The Psychology of Everyday Things, by Donald A. Norman (Basic Books Inc., New York, 1988). The January 1989 IEEE Software contains several articles on methods, techniques, and tools for designing and implementing graphical interfaces. The Way Things Work, by David Macaulay (Houghton Mifflin Co., Boston, 1988), contains a detailed drawing of a ball mouse.
William Atkinson received patent no. 4,464,652 for the pulldown menu system on Aug. 8, 1984, and assigned it to Apple. Gary Pope received patent no. 4,823,108, for an improved system for displaying images in “windows” on a computer screen, on April 18, 1989, and assigned it to Quarterdeck Office Systems.
The wheel mouse patent, no. 3,541,541, “X-Y position indicator for a display system,” was issued to Douglas Engelbart on Nov. 17, 1970, and assigned to SRI International. The ball mouse patent, no. 3,835,464, was issued to Ronald Rider on Sept. 10, 1974, and assigned to Xerox.
NASA’s Artemis I mission launched in the predawn hours this morning, at 1:04 a.m. Eastern time, carrying with it the hopes of a space program aiming to land American astronauts back on the moon. The Orion spacecraft now on its way to the moon also carries a lot of CubeSat-size science. (As of press time, some satellites have even begun to tweet.)
And while the objective of Artemis I is to show that the launch system and spacecraft can make a trip to the moon and return safely to Earth, the mission is also a unique opportunity to send a whole spacecraft-load of science into deep space. In addition to the interior of the Orion capsule itself, there are enough nooks and crannies to handle a fair number of CubeSats, and NASA has packed as many experiments as it can into the mission. From radiation phantoms to solar sails to algae to a lunar surface payload, Artemis I has a lot going on.
Most of the variety of the science on Artemis I comes in the form of CubeSats, little satellites that are each the size of a large shoebox. The CubeSats are tucked snugly into berths inside the Orion stage adapter, which is the bit that connects the interim cryogenic propulsion stage to the ESA service module and Orion. Once the propulsion stage lifts Orion out of Earth orbit and pushes it toward the moon, the stage and adapter will separate from Orion, and the CubeSats will launch themselves.
Ten CubeSats rest inside the Orion stage adapter at NASA’s Kennedy Space Center. NASA/KSC
While the CubeSats look identical when packed up, each one is unique in both hardware and software, with different destinations and mission objectives. There are 10 in total (three weren’t ready in time for launch, which is why there are a few empty slots in the image above).
While the CubeSats head off to do their own thing, the inside of the Orion capsule will serve as the temporary home of a trio of mannequins. The first, a male-bodied version provided by NASA, is named Commander Moonikin Campos, after NASA electrical engineer Arturo Campos, who wrote the procedures that allowed the Apollo 13 command module to steal power from the lunar module’s batteries, one of many actions that saved the Apollo 13 crew.
Moonikin Campos prepares for placement in the Orion capsule. NASA
Moonikin Campos will spend the mission in the Orion commander’s seat, wearing an Orion crew survival system suit. Essentially itself a spacecraft, the suit is able to sustain its occupant for up to six days if necessary. Moonikin Campos’s job will be to pretend to be an astronaut, and sensors inside him will measure radiation, acceleration, and vibration to help NASA prepare to launch human astronauts in the next Artemis mission.
Helga and Zohar in place on the flight deck of the Orion spacecraft. NASA/DLR
Accompanying Moonikin Campos are two female-bodied mannequins, named Helga and Zohar, developed by the German Aerospace Center (DLR) along with the Israel Space Agency. These are more accurately called “anthropomorphic phantoms,” and their job is to provide a detailed recording of the radiation environment inside the capsule over the course of the mission. The phantoms are female because women have more radiation-sensitive tissue than men. Both Helga and Zohar have over 6,000 tiny radiation detectors placed throughout their artificial bodies, but Zohar will be wearing an AstroRad radiation protection vest to measure how effective it is.
NASA’s Biology Experiment-1 is transferred to the Orion team. NASA/KSC
The final science experiment to fly onboard Orion is NASA’s Biology Experiment-1. The experiment simply tests what time in deep space does to some specific kinds of biology, so all that has to happen is for Orion to successfully haul some packages of sample tubes around the moon and back. Samples include:
Plant seeds to characterize how spaceflight affects nutrient stores
Photosynthetic algae to identify genes that contribute to its survival in deep space
Aspergillus fungus to investigate radioprotective effects of melanin and DNA damage response
Yeast used as a model organism to identify genes that enable adaptations to conditions in both low Earth orbit and deep space
There is some concern that because of the extensive delays with the Artemis launch, the CubeSats have been sitting so long that their batteries may have run down. Some of the CubeSats could be recharged, but for others, recharging was judged to be so risky that they were left alone. Even for CubeSats that don’t start right up, though, it’s possible that after deployment, their solar panels will be able to get them going. But at this point, there’s still a lot of uncertainty, and the CubeSats’ earthbound science teams are now pinning their hopes on everything going well after launch.
For the rest of the science payloads, success mostly means Orion returning to Earth safe and sound, which will also be a success for the Artemis I mission as a whole. And assuming it does so, there will be a lot more science to come.
Match ID: 17 Score: 2.14 source: spectrum.ieee.org age: 11 days qualifiers: 2.14 judge
“Energy and information are two basic currencies of organic and social systems,” the economics Nobelist Herb Simon once observed. “A new technology that alters the terms on which one or the other of these is available to a system can work on it the most profound changes.”
Electric vehicles at scale alter the terms of both basic currencies concurrently. Reliable, secure supplies of minerals and software are core elements for EVs, which represent a “shift from a fuel-intensive to a material-intensive energy system,” according to a report by the International Energy Agency (IEA). For example, the mineral requirements for an EV’s batteries and electric motors are six times those of an internal-combustion-engine (ICE) vehicle, which can increase the average weight of an EV by 340 kilograms (750 pounds). For something like the Ford Lightning, the weight can be more than twice that amount.
EVs also create a shift from an electromechanical-intensive to an information-intensive vehicle. EVs offer a virtual clean slate from which to accelerate the design of safe, software-defined vehicles, with computing and supporting electronics being the prime enabler of a vehicle’s features, functions, and value. Software also allows for the decoupling of the internal mechanical connections needed in an ICE vehicle, permitting an EV to be controlled remotely or autonomously. An added benefit is that the loss of the ICE power train not only reduces the components a vehicle requires but also frees up space for increased passenger comfort and storage.
The effects of Simon’s profound changes are readily apparent, forcing a 120-year-old industry to fundamentally reinvent itself. EVs require automakers to design new manufacturing processes and build plants to make both EVs and their batteries. Ramping up the battery supply chain is the automakers’ current “most challenging topic,” according to VW chief financial officer Arno Antlitz.
Furthermore, adds Kristin Dziczek, a policy analyst with the Federal Reserve Bank of Chicago, there are scores of new global EV competitors actively seeking to replace the legacy automakers. The “simplicity” of EVs in comparison with ICE vehicles allows these disruptors to compete virtually from scratch with legacy automakers, not only in the car market itself but for the material and labor inputs as well.
Batteries and the supply-chain challenge
Another critical question is whether all the planned battery-plant output will support expected EV production demands. For instance, the United States will require 8 million EV batteries annually by 2030 if its target to make EVs half of all new-vehicle sales is met, with that number rising each year after. As IEA executive director Fatih Birol observes, “Today, the data shows a looming mismatch between the world’s strengthened climate ambitions and the availability of critical minerals that are essential to realizing those ambitions.”
This mismatch worries automakers.
GM, Ford, Tesla, and others have moved to secure batteries through 2025, but it could be tricky after that. Rivian Automotive chief executive RJ Scaringe was recently quoted in the Wall Street Journal as saying that “90 to 95 percent of the (battery) supply chain does not exist,” and that the current semiconductor chip shortage is “a small appetizer to what we are about to feel on battery cells over the next two decades.”
The competition for securing raw materials, along with increased consumer demand, has caused EV prices to spike. Ford has raised the price of the Lightning by $6,000 to $8,500, and CEO Jim Farley bluntly states that in regard to material shortages in the foreseeable future, “I don’t think we should be confident in any other outcomes than an increase in prices.”
Stiff competition for engineering talent
One critical area of resource competition is over the limited supply of software and systems engineers with the mechatronics and robotics expertise needed for EVs. Major automakers have moved aggressively to bring more software and systems-engineering expertise on board, rather than have it reside at their suppliers, as they have traditionally done. Automakers feel that if they're not in control of the software, they're not in control of their product.
Even for the large auto suppliers, the transition to EVs will not be an easy road. For instance, automakers are demanding these suppliers absorb more cost cuts because automakers are finding EVs so expensive to build. Not only do automakers want to bring cutting-edge software expertise in-house, they want greater inside expertise in critical EV supply-chain components, especially batteries.
The underlying reason for the worry: Supplying sufficient raw materials to existing and planned battery plants as well as to the manufacturers of other renewable energy sources and military systems—who are competing for the same materials—has several complications to overcome. Among them is the need for more mines to provide the metals required, which have spiked in price as demand has increased. For example, while demand for lithium is growing rapidly, investment in mines has significantly lagged the investment that has been aimed toward EVs and battery plants. It can take five or more years to get a lithium mine up and going, but operations can start only after it has secured the required permits, a process that itself can take years.
Mining the raw materials, of course, assumes that there is sufficient refining capability to process them, which, outside of China, is limited. This is especially true in the United States, which, according to a Biden Administration special supply-chain investigative report, has “limited raw material production capacity and virtually no processing capacity.” Consequently, the report states, the United States “exports the limited raw materials produced today to foreign markets.” For example, output from the only nickel mine in the United States, the Eagle mine in Michigan, is sent to Canada for smelting.
Another solution may be recycling both EV batteries as well as the waste and rejects from battery manufacturing, which can run between 5 and 10 percent of production. Effective recycling of EV batteries “has the potential to reduce primary demand compared to total demand in 2040, by approximately 25 percent for lithium, 35 percent for cobalt and nickel, and 55 percent for copper,” according to a report by the University of Technology Sydney’s Institute for Sustainable Futures.
Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon.
“Armageddon is big and noisy and stupid and shameless, and it’s going to be huge at the box office,” wrote Jay Carr of the Boston Globe.
Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live.
DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kilograms, hitting at 22,000 km/h, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs.
“Maybe once a century or so, there’ll be an asteroid sizeable enough that we’d like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of planetary defense officer at NASA.
“If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.”
So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone.
The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—near-Earth objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium.
The DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIACube cubesat will fly in formation to take images of the impact. Johns Hopkins APL/NASA
But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test.
NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’s speed by perhaps a few centimeters per second.
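That target can be sanity-checked with conservation of momentum. Below is a rough back-of-the-envelope sketch; the spacecraft figures come from this article, but Dimorphos’s mass (about 5 billion kilograms, inferred from its roughly 160-meter size and a typical rubble-pile density) is an assumption, and the ejecta “beta” factor is treated as a free parameter:

```python
# Back-of-the-envelope estimate of DART's push on Dimorphos.
# Spacecraft figures are from the article; Dimorphos's mass is an
# assumption (~5e9 kg, from its ~160 m size and rubble-pile density).

m_dart = 600.0                    # kg, spacecraft mass at impact
v_dart = 22_000 / 3.6             # impact speed: 22,000 km/h -> ~6,111 m/s
m_dimorphos = 5e9                 # kg (assumed)

# Perfectly inelastic hit (beta = 1): the asteroid absorbs all momentum.
dv = m_dart * v_dart / m_dimorphos    # change in speed, m/s

# Ejecta blasted backward off the surface adds recoil; the "beta" factor
# scaling that extra push is one of the unknowns DART aims to pin down.
for beta in (1, 2, 4):
    print(f"beta = {beta}: delta-v ~ {beta * dv * 100:.2f} cm/s")
```

With beta = 1 the nudge is under a tenth of a centimeter per second; ejecta recoil can multiply that several-fold, which is part of why measuring the actual deflection matters.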
The DART spacecraft, a 1-meter cube with two long solar panels, is elegantly simple, equipped with a telescope called DRACO, hydrazine maneuvering thrusters, a xenon-fueled ion engine and a navigation system called SMART Nav. It was launched by a SpaceX rocket in November. About 4 hours and 90,000 km before the hoped-for impact, SMART Nav will take over control of the spacecraft, using optical images from the telescope. Didymos, the larger object, should be a point of light by then; Dimorphos, the intended target, will probably not appear as more than one pixel until about 50 minutes before impact. DART will send one image per second back to Earth, but the spacecraft is autonomous; signals from the ground, 38 light-seconds away, would be useless for steering as the ship races in.
The DART spacecraft separated from its SpaceX Falcon 9 launch vehicle, 55 minutes after liftoff from Vandenberg Space Force Base, in California, 24 November 2021. In this image from the rocket, the spacecraft had not yet unfurled its solar panels. NASA
What’s more, nobody knows the shape or consistency of little Dimorphos. Is it a solid boulder or a loose cluster of rubble? Is it smooth or craggy, round or elongated? “We’re trying to hit the center,” says Evan Smith, the deputy mission systems engineer at the Johns Hopkins Applied Physics Laboratory, which is running DART. “We don’t want to overcorrect for some mountain or crater on one side that’s throwing an odd shadow or something.”
So on final approach, DART will cover 800 km without any steering. Thruster firings could blur the last images of Dimorphos’s surface, which scientists want to study. Impact should be imaged from about 50 km away by an Italian-made minisatellite, called LICIACube, which DART released two weeks ago.
“In the minutes following impact, I know everybody is going to be high-fiving on the engineering side,” said Tom Statler, DART’s program scientist at NASA, “but I’m going to be imagining all the cool stuff that is actually going on on the asteroid, with a crater being dug and ejecta being blasted off.”
There is, of course, a possibility that DART will miss, in which case there should be enough fuel on board to allow engineers to go after a backup target. But an advantage of the Didymos-Dimorphos pair is that it should help in calculating how much effect the impact had. Telescopes on Earth (plus the Hubble and Webb space telescopes) may struggle to measure infinitesimal changes in the orbit of Dimorphos around the sun; it should be easier to see how much its orbit around Didymos is affected. The simplest measurement may be of the changing brightness of the double asteroid, as Dimorphos moves in front of or behind its partner, perhaps more quickly or slowly than it did before impact.
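The reason the Didymos-centered orbit is so much easier to measure comes down to Kepler’s third law: even a millimeter-per-second change in orbital speed shifts the roughly 12-hour period by minutes, which mutual-event timing can catch. A rough sketch with assumed pre-impact values (orbital radius of about 1.19 km and a Didymos mass of about 5.4 × 10^11 kg, both published estimates rather than figures from this article):

```python
import math

# How a tiny retrograde delta-v becomes a measurable period change.
# Assumed pre-impact values (published estimates, not from this article):
G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
M_didymos = 5.4e11          # kg, approximate mass of Didymos
r = 1190.0                  # m, approximate Dimorphos orbital radius

v = math.sqrt(G * M_didymos / r)    # circular orbital speed (~0.17 m/s)
T = 2 * math.pi * r / v             # orbital period (~12 hours)

# For a small tangential delta-v on a circular orbit, dT/T ~ 3 * dv / v.
dv = 1e-3                           # a 1 mm/s slowdown, for illustration
dT_minutes = 3 * (dv / v) * T / 60

print(f"period ~ {T / 3600:.1f} h")
print(f"a 1 mm/s change shifts the period by ~ {dT_minutes:.0f} minutes")
```

A shift of minutes against a half-day period is well within reach of ground-based light-curve timing, whereas the same delta-v is invisible in the pair’s orbit around the sun.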
“We are moving an asteroid,” said Statler. “We are changing the motion of a natural celestial body in space. Humanity’s never done that before.”
Match ID: 20 Score: 1.43 source: spectrum.ieee.org age: 65 days qualifiers: 1.43 congress
Each contender is taking a different approach to space-based cellular service. The Apple offering uses the existing satellite bandwidth Globalstar once used for messaging offerings, but without the need for a satellite-specific handset. The AST project and another company, Lynk Global, would use a dedicated network of satellites with larger-than-normal antennas to produce a 4G, 5G, and someday 6G cellular signal compatible with any existing 4G-compatible phone (as detailed in other recent IEEE Spectrum coverage of space-based 5G offerings). Assuming regulatory approval is forthcoming, the technology would work first in equatorial regions and then across more of the planet as these providers expand their satellite constellations. T-Mobile and Starlink’s offering would work in the former PCS band in the United States. SpaceX, like AST and Lynk, would need to negotiate access to spectrum on a country-by-country basis.
Apple’s competitors are unlikely to see commercial operations before 2024.
The T-Mobile–Starlink announcement is “in some ways an endorsement” of AST and Lynk’s proposition, and “in other ways a great threat,” says telecommunications consultant Tim Farrar of Tim Farrar Associates in Menlo Park, Calif. AST and Lynk have so far told investors they expect their national mobile network operator partners to charge per use or per day, but T-Mobile announced that it plans to include satellite messaging in the 1,900-megahertz range in its existing services. Apple said its Emergency SOS via Satellite service would be free for the first two years for U.S. and Canadian iPhone 14 buyers, but did not say what it would cost after that. For now, the Globalstar satellites it is using cannot offer the kind of broadband bandwidth AST has promised, but Globalstar has reported to investors orders for new satellites that might offer new capabilities, including new gateways.
Even under the best conditions—a clear view of the sky—users will need 15 seconds to send a message via Apple’s service. They will also have to follow onscreen guidance to keep the device pointed at the satellites they are using. Light foliage can cause the same message to take more than a minute to send. Ashley Williams, a satellite engineer at Apple who recorded the service’s announcement, also mentioned a data-compression algorithm and a series of rescue-related suggested auto-replies intended to minimize the amount of data that users would need to send during a rescue.
Meanwhile, AST SpaceMobile says it aims to launch an experimental satellite Saturday, 10 September, to test its cellular broadband offering.
Last month’s T-Mobile-SpaceX announcement “helped the world focus attention on the huge market opportunity for SpaceMobile, the only planned space-based cellular broadband network. BlueWalker 3, which has a 693 sq ft array, is scheduled for launch within weeks!” tweeted AST SpaceMobile CEO Abel Avellan on 25 August. The size of the array matters because AST SpaceMobile has so far indicated in its applications for experimental satellite licenses that it intends to use lower radio frequencies (700–900 MHz) with less propagation loss but that require antennas much larger than conventional satellites carry.
So far government agencies have issued licenses for thousands of low-Earth-orbiting satellites, which have the biggest impact on astronomers. Even with the constellations starting to form, satellite-cellular telecommunications companies are still open to big regulatory risks. “Regulators have not decided on the power limits from space, what concerns there are about interference, especially across national borders. There’s a whole bunch of regulatory issues that simply haven’t been thought about to date,” Farrar says.
Update 5 Sept.: For now, NASA’s giant Artemis I remains on the ground after two launch attempts scrubbed by a hydrogen leak and a balky engine sensor. Mission managers say Artemis will fly when everything's ready—but haven't yet specified whether that might be in late September or in mid-October.
“When you look at the rocket, it looks almost retro,” said Bill Nelson, the administrator of NASA. “Looks like we’re looking back toward the Saturn V. But it’s a totally different, new, highly sophisticated—more sophisticated—rocket, and spacecraft.”
Artemis, powered by the Space Launch System rocket, is America’s first attempt to send astronauts to the moon since Apollo 17 in 1972, and technology has taken giant leaps since then. On Artemis I, the first test flight, mission managers say they are taking the SLS, with its uncrewed Orion spacecraft up top, and “stressing it beyond what it is designed for”—the better to ensure safe flights when astronauts make their first landings, currently targeted to begin with Artemis III in 2025.
But Nelson is right: The rocket is retro in many ways, borrowing heavily from the space shuttles America flew for 30 years, and from the Apollo-Saturn V.
Much of Artemis’s hardware is refurbished: Its four main engines, and parts of its two strap-on boosters, all flew before on shuttle missions. The rocket’s apricot color comes from spray-on insulation much like the foam on the shuttle’s external tank. And the large maneuvering engine in Orion’s service module is actually 40 years old—used on 19 space shuttle flights between 1984 and 1992.
Perhaps more important, the project inherits basic engineering from half a century of spaceflight. Just look at Orion’s crew capsule—a truncated cone, somewhat larger than the Apollo Command Module but conceptually very similar.
Old, of course, does not mean bad. NASA says there is no need to reinvent things engineers got right the first time.
“There are certain fundamental aspects of deep-space exploration that are really independent of money,” says Jim Geffre, Orion vehicle-integration manager at the Johnson Space Center in Houston. “The laws of physics haven’t changed since the 1960s. And capsule shapes happen to be really good for coming back into the atmosphere at Mach 32.”
Roger Launius, who served as NASA’s chief historian from 1990 to 2002 and as a curator at the Smithsonian Institution from then until 2017, tells of a conversation he had with John Casani, a veteran NASA engineer who managed the Voyager, Galileo, and Cassini probes to the outer planets.
“I have a name for missions that use too much new technology,” he recalls Casani saying. “Failures.”
The Artemis I flight is slated for about six weeks. (Apollo 11 lasted eight days.) The ship roughly follows Apollo’s path to the moon’s vicinity, but then puts itself in what NASA calls a distant retrograde orbit. It swoops within 110 kilometers of the lunar surface for a gravity assist, then heads 64,000 km out—taking more than a month but using less fuel than it would in closer orbits. Finally, it comes home, reentering the Earth’s atmosphere at 11 km per second, slowing itself with a heatshield and parachutes, and splashing down in the Pacific not far from San Diego.
“That extra time in space,” says Geffre, “allows us to operate the systems, give more time in deep space, and all those things that stress it, like radiation and micrometeoroids, thermal environments.”
There are, of course, newer technologies on board. Orion is controlled by two vehicle-management computers, each composed of two flight computer modules (FCMs) to handle guidance, navigation, propulsion, communications, and other systems. The flight control system, Geffre points out, is quad-redundant; if at any point one of the four FCMs disagrees with the others, it will take itself offline and, in a 22-second process, reset itself to make sure its outputs are consistent with the others’. If all four FCMs fail, there is a fifth, entirely separate computer running different code to get the spacecraft home.
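The voting behavior described above can be illustrated with a toy majority vote. This is purely a conceptual sketch: the module names and the single integer output standing in for real command streams are invented, and Orion’s actual fault-management logic is far more involved.

```python
# Toy illustration of output voting among four redundant flight computer
# modules (FCMs). Module names and integer outputs are invented; Orion's
# real fault management is far more involved than this sketch.

from collections import Counter

def vote(outputs):
    """Return the majority output and the modules that disagree with it."""
    majority, _ = Counter(outputs.values()).most_common(1)[0]
    dissenters = [name for name, out in outputs.items() if out != majority]
    return majority, dissenters

# One control cycle: FCM-3 disagrees, so it would take itself offline
# and begin its reset while the other three keep flying the vehicle.
outputs = {"FCM-1": 42, "FCM-2": 42, "FCM-3": 41, "FCM-4": 42}
command, offline = vote(outputs)
print(f"commanded value: {command}; resetting: {offline}")
```

The appeal of output voting is that it catches a radiation-induced bit flip in any one module without needing to diagnose the cause; the dissenting module simply resyncs with the majority.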
Guidance and navigation, too, have advanced since the sextant used on Apollo. Orion uses a star tracker to determine its attitude, imaging stars and comparing them to an onboard database. And an optical navigation camera shoots Earth and the moon so that guidance software can determine their distance and position and keep the spacecraft on course. NASA says it’s there as backup, able to get Orion to a safe splashdown even if all communication with Earth has been lost.
But even those systems aren’t entirely new. Geffre points out that the guidance system’s architecture is derived from the Boeing 787. Computing power in deep space is limited by cosmic radiation, which can corrupt the output of microprocessors beyond the protection of Earth’s atmosphere and magnetic field.
Beyond that is the inevitable issue of cost. Artemis is a giant project, years behind schedule, started long before NASA began to buy other launches from companies like SpaceX and Rocket Lab. NASA’s inspector general, Paul Martin, testified to Congress in March that the first four Artemis missions would cost US $4.1 billion each—“a price tag that strikes us as unsustainable.”
Launius, for one, rejects the argument that government is inherently wasteful. “Yes, NASA’s had problems in managing programs in the past. Who hasn’t?” he says. He points out that Blue Origin and SpaceX have had plenty of setbacks of their own—they’re just not obliged to be public about them. “I could go on and on. It’s not a government thing per se and it’s not a NASA thing per se.”
So why return to the moon with—please forgive the pun—such a retro rocket? Partly, say those who watch Artemis closely, because it’s become too big to fail, with so much American money and brainpower invested in it. Partly because it turns NASA’s astronauts outward again, exploring instead of maintaining a space station. Partly because new perspectives could come of it. And partly because China and Russia have ambitions in space that threaten America’s.
“Apollo was a demonstration of technological virtuosity—to the whole world,” says Launius. “And the whole world knew then, as they know today, that the future belongs to the civilization that can master science and technology.”
Update 7 Sept.: Artemis I has been on launchpad 39B, not 39A as previously reported, at Kennedy Space Center.
Match ID: 22 Score: 1.43 source: spectrum.ieee.org age: 91 days qualifiers: 1.43 congress
NASA Administrator Statement on Agency Authorization Bill Thu, 28 Jul 2022 15:22 EDT NASA Administrator Bill Nelson released this statement Thursday following approval by the U.S. Congress for the NASA Authorization Act of 2022, which is part of the Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022. Match ID: 23 Score: 1.43 source: www.nasa.gov age: 122 days qualifiers: 1.43 congress
TransPennine Express uses ‘outrageous’ loophole in which services cancelled a day ahead do not appear in statistics
One of the north of England’s main railway companies is taking advantage of an “outrageous” legal loophole that allows it to vastly under-report cancellations, it has emerged.
Figures obtained by the Guardian show that during the October half-term holiday, TransPennine Express (TPE) cancelled 30% of all trains, and at least 20% each subsequent week until 20 November. Most of those services were cancelled in full, but some started or ended at different stations from those advertised on the current May 2022 timetable.
Continue reading... Match ID: 1 Score: 35.00 source: www.theguardian.com age: 0 days qualifiers: 35.00 travel(|ing)
The excellent leads lift this fitfully handsome adaptation of DH Lawrence’s forbidden classic
There’s enough of a spark between Emma Corrin, playing Lady Constance Chatterley, and Jack O’Connell, as smouldering gamekeeper Oliver Mellors, to fuel a sizeable chunk of the national grid. Which is why it’s surprising that the many breathlessly urgent sex scenes in this handsome adaptation of DH Lawrence’s novel seem a little underpowered – a combination of a weirdly unappealing blueish tone to the grade and the agitated camerawork loses some of the erotic tension. It’s a pity, because elsewhere the film is impressive: there’s a feverish wildness to Corrin’s performance, while O’Connell unleashes the full force of his considerable charisma.
The Magnum photographer’s image of a family in Sicily recalls Fellini and Visconti in its romantic depiction of everyday Italian life
Bruno Barbey chanced upon this family defying gravity on their dad’s scooter in Palermo in 1963. The French-Moroccan photographer had been travelling in Italy for a couple of years by then, restless for exactly this kind of image, with its seductive mix of humour and authenticity. Has there ever been a better articulation of contrasting roles in the patriarchal family? Father sitting comfortably in his jacket and cap and smiling for the camera, while behind him his possibly pregnant wife sees trouble ahead, as she and their three kids and their big checked bag compete for precarious discomfort.
Barbey, then 22, had gone to Italy to try to find pictures that captured “a national spirit” as the country sought to rediscover the dolce vita in cities still recovering from war. He travelled in an old VW van and in Palermo in particular he located scenes that might have been choreographed for the working-class heroes of the Italian neorealist films, the self-absorbed dreamers of Fellini and Visconti (The Leopard, the latter’s Hollywood epic set in Sicily was released in the same year). Barbey’s camera with its wide angle lens picked up the detail of vigorous crowd scenes among street children and barflies and religious processions. His book, The Italians, now republished, is a time capsule of that already disappearing black-and-white world of priests and mafiosi and nightclub girls and nuns.
The Black Bull Inn, 44 Main Street, Sedbergh LA10 5BL (015396 20264, theblackbullsedbergh.co.uk). Snacks £4.50-£6.50, sandwiches £6.95-£14.95, starters £9.95-£10.9, mains £18.50-£27.95, desserts £7.50-£8.50, wines from £28
It would be easy to misread the Black Bull at Sedbergh, located in that part of the Yorkshire Dales which offers a lofty wave to the Lake District. On a weekday lunchtime, the dining rooms fill quickly with parents in expensive waxed outerwear, grabbing lunch with their kids from the eponymous boarding school that dominates the town. A parade of burgers and sandwiches, precision stabbed with cocktail sticks, alongside soups with doorstep slabs of bread, troop out of the kitchen. And a pint please for the pink-cheeked, broad-chested chap with the Range Rover outside.
Continue reading... Match ID: 4 Score: 35.00 source: www.theguardian.com age: 0 days qualifiers: 35.00 travel(|ing)
Richard Drax reported to have visited Caribbean island for meeting on next steps, including plans for former sugar plantation
The government of Barbados is considering plans to make a wealthy Conservative MP the first individual to pay reparations for his ancestor’s pivotal role in slavery.
The Observer understands that Richard Drax, MP for South Dorset, recently travelled to the Caribbean island for a private meeting with the country’s prime minister, Mia Mottley. A report is now before Mottley’s cabinet laying out the next steps, which include legal action in the event that no agreement is reached with Drax.
Continue reading... Match ID: 5 Score: 35.00 source: www.theguardian.com age: 1 day qualifiers: 35.00 travel(|ing)
The Italian photographer was in San Francisco’s Chinatown when she came across this grand ivory building
Arianna Genghini’s first stop on her family road trip through four US states was San Francisco. While they went on to travel through Utah, Nevada and Arizona in a rented minivan, it was the California city’s expansive Chinatown that captured the Italian photographer’s eye most powerfully.
“I was exploring with my sister Sofia, and we spotted the Dragon Gate at the entrance to the district. It’s one of the largest Chinese communities outside China, just like a little city inside a bigger one. Stepping inside, I fell in love,” she says.
Continue reading... Match ID: 7 Score: 35.00 source: www.theguardian.com age: 1 day qualifiers: 35.00 travel(|ing)
Home Office argues people trafficked to Syria were exposed to extreme violence which poses ‘almighty problem’
People trafficked to Syria and radicalised remain threats to national security as they may be desensitised after exposure to extreme violence, the Home Office has argued, in contesting Shamima Begum’s appeal against the removal of her British citizenship.
Begum was 15 when she travelled from her home in Bethnal Green, east London, through Turkey and into territory controlled by Islamic State (IS). After she was found, nine months pregnant in a Syrian refugee camp in February 2019, the then home secretary, Sajid Javid, revoked her British citizenship on national security grounds.
Continue reading... Match ID: 8 Score: 30.00 source: www.theguardian.com age: 3 days qualifiers: 30.00 travel(|ing)
He began his career in 1980 as a management trainee at the National Dairy Development Board, in Anand, India. A year later he joined Milma, a state government marketing cooperative for the dairy industry, in Thiruvananthapuram, as a manager of planning and systems. After 15 years with Milma, he joined IBM in Tokyo as a manager of technology services.
In 2000 he helped found InApp, a company in Palo Alto, Calif., that provides software development services. He served as its CEO and executive chairman until he died.
Raja was the 2011–2012 chair of the IEEE Humanitarian Activities Committee. He wanted to find a way to mobilize engineers to apply their expertise to develop sustainable solutions that help their local community. To achieve the goal, in 2011 he founded IEEE SIGHT. Today there are more than 150 SIGHT groups in 50 countries that are working on projects such as sustainable irrigation and photovoltaic systems.
For the past two years, Raja chaired the IEEE Admission and Advancement Review Panel, which approves applications for new members and elevations to higher membership grades.
He was a member of the International Centre for Free and Open Source Software’s advisory board. The organization was established by the government of Kerala, India, to facilitate the development and distribution of free, open-source software. Raja also served on the board of directors at Bedroc, an IT staffing and support firm in Nashville.
Terry was a computer engineer at Hewlett-Packard in Fort Collins, Colo., for 18 years.
He joined HP in 1978 as a software developer, and he chaired the Portable Operating System Interface (POSIX) working group. POSIX is a family of standards specified by the IEEE Computer Society for maintaining compatibility among operating systems. While there, he also developed software for the Motorola 68000 microprocessor.
Terry left HP in 1997 to join Softway Solutions, also in Fort Collins, where he developed tools for Interix, a Unix subsystem of the Windows NT operating system. After Microsoft acquired Softway in 1999, he stayed on as a senior software development engineer at its Seattle location. There he worked on static analysis, a method of computer-program debugging that is done by examining the code without executing the program. He also helped to create SAL, a Microsoft source-code annotation language, which was developed to make code design easier to understand and analyze.
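Static analysis, as described here, can be shown in miniature: walk a program's syntax tree and flag risky patterns without ever executing the code. The following is a toy Python sketch of the idea, unrelated to Microsoft's actual tooling or to SAL; the function name and the chosen rule are invented for illustration.

```python
import ast

def find_eval_calls(source):
    """Flag calls to eval() by walking the parsed syntax tree.

    The analyzed code is never run; this is static analysis in
    its simplest form. Returns the line numbers of findings.
    """
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings
```

Industrial tools apply thousands of such rules, plus data-flow reasoning, but the principle of examining code without executing it is the same.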
Terry retired in 2014. He loved science fiction, boating, cooking, and spending time with his family, according to his daughter, Kristin.
He earned a bachelor’s degree in electrical engineering in 1970 and a Ph.D. in computer science in 1978, both from the University of Washington in Seattle.
Signal processing engineer
Life senior member, 70; died 25 August
Sandham applied his signal processing expertise to a wide variety of disciplines including medical imaging, biomedical data analysis, and geophysics.
He began his career in 1974 as a physicist at the University of Glasgow. While working there, he pursued a Ph.D. in geophysics. He earned his degree in 1981 at the University of Birmingham in England. He then joined the British National Oil Corp. (now Britoil) as a geophysicist.
In 1986 he left to join the University of Strathclyde, in Glasgow, as a lecturer in the signal processing department. During his time at the university, he published more than 200 journal papers and five books that addressed blood glucose measurement, electrocardiography data analysis and compression, medical ultrasound, MRI segmentation, prosthetic limb fitting, and sleep apnea detection.
Sandham left the university in 2003 and founded Scotsig, a signal processing consulting and research business, also in Glasgow.
Sandham earned his bachelor’s degree in electrical engineering in 1974 from the University of Glasgow.
Stephen M. Brustoski
Life member, 69; died 6 January
For 40 years, Brustoski worked as a loss-prevention engineer for insurance company FM Global. He retired from the company, which was headquartered in Johnston, R.I., in 2014.
He was an elder at his church, CrossPoint Alliance, in Akron, Ohio, where he oversaw administrative work and led Bible studies and prayer meetings. He was an assistant scoutmaster for 12 years, and he enjoyed hiking and traveling the world with his family, according to his wife, Sharon.
Brustoski earned a bachelor’s degree in electrical engineering in 1973 from the University of Akron.
President and CEO of Essex Corp.
Life senior member, 96; died 7 May 2020
As president and CEO of Essex Corp., in Columbia, Md., Letaw handled the development and commercialization of optoelectronic and signal processing solutions for defense, intelligence, and commercial customers. He retired in 1995.
He had served in World War II as an aviation engineer for the U.S. Army. After he was discharged, he earned a bachelor’s degree in chemistry, then a master’s degree and Ph.D., all from the University of Florida in Gainesville, in 1949, 1951, and 1952.
After he graduated, he became a postdoctoral assistant at the University of Illinois at Urbana-Champaign. He left to become a researcher at Raytheon Technologies, an aerospace and defense manufacturer, in Wayland, Mass.
We’d like to find out the reasons that prevent people in the UK from working as much as they’d like – whether it’s childcare, health issues, housing or travel
We’re keen to hear from people in the UK who would like to work or work more than they currently do and find out what prevents them from doing so.
Whether it is your health, childcare, travel or being unable to find housing, or anything else that stands in the way, we’d like to hear from you. It doesn’t matter whether you’re actively looking for work or not.
Continue reading... Match ID: 10 Score: 20.00 source: www.theguardian.com age: 5 days qualifiers: 20.00 travel(|ing)
Kleiman plans to give a presentation next year about the programmers as part of the IEEE Industry Hub Initiative’s Impact Speaker series. The initiative aims to introduce industry professionals and academics to IEEE and its offerings.
Planning for the event, which is scheduled to be held in Silicon Valley, is underway. Details are to be announced before the end of the year.
The Institute spoke with Kleiman, who teaches Internet technology and governance for lawyers at American University, in Washington, D.C., about her mission to publicize the programmers’ contributions. The interview has been condensed and edited for clarity.
Kathy Kleiman delves into the ENIAC programmers’ lives and the pioneering work they did in her book Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer. Kathy Kleiman
What inspired you to film the documentary?
Kathy Kleiman: The ENIAC was a secret project of the U.S. Army during World War II. It was the first general-purpose, programmable, all-electronic computer—the key to the development of our smartphones, laptops, and tablets today. The ENIAC was a highly experimental computer, with 18,000 vacuum tubes, and some of the leading technologists at the time didn’t think it would work, but it did.
Six months after the war ended, the Army decided to reveal the existence of ENIAC and heavily publicize it. To do so, in February 1946 the Army took a lot of beautiful, formal photos of the computer and the team of engineers that developed it. I found these pictures while researching women in computer science as an undergraduate at Harvard. At the time, I knew of only two women in computer science: Ada Lovelace and then U.S. Navy Capt. Grace Hopper. [Lovelace was the first computer programmer; Hopper co-developed COBOL, one of the earliest standardized computer languages.] But I was sure there were more women programmers throughout history, so I went looking for them and found the images taken of the ENIAC.
The pictures fascinated me because they had both men and women in them. Some of the photos had just women in front of the computer, but they weren’t named in any of the photos’ captions. I tracked them down after I found their identities, and four of six original ENIAC programmers responded. They were in their late 70s at the time, and over the course of many years they told me about their work during World War II and how they were recruited by the U.S. Army to be “human computers.”
J. Presper Eckert and John Mauchly promised the U.S. Army that the ENIAC could calculate artillery trajectories in seconds rather than the hours it took to do the calculations by hand. But after they built the 2.5-meter-tall by 24-meter-long computer, they couldn’t get it to work. Out of approximately 100 human computers working for the U.S. Army during World War II, six women were chosen to write a program for the computer to solve the differential equations of ballistic trajectories. It was hard because the program was complex, memory was very limited, and the direct programming interface that connected the programmers to the ENIAC was hard to use. But the women succeeded, and the trajectory program was a great success. Yet Bartik, McNulty, Meltzer, Snyder, Spence, and Teitelbaum’s contributions to the technology were never recognized. Leading technologists and the public never knew of their work.
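The step-by-step arithmetic behind those trajectory calculations, done by hand by the human computers and then encoded for the ENIAC, can be suggested by a simple numerical integration. This is an illustrative sketch under simplifying assumptions (no air drag, plain Euler stepping); real firing tables modeled drag and many other effects.

```python
import math

def trajectory_range(v0, angle_deg, dt=0.001, g=9.81):
    """Integrate a drag-free shell trajectory in small time steps.

    v0 in m/s, launch angle in degrees; returns horizontal distance
    at impact, in meters. Illustrative only.
    """
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while y >= 0.0:         # step until the shell returns to the ground
        x += vx * dt
        y += vy * dt
        vy -= g * dt        # gravity reduces vertical speed each step
    return x
```

Thousands of such steps per trajectory, multiplied by hundreds of trajectories per firing table, is why a calculation that took hours by hand was worth automating.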
I was inspired by their story and wanted to share it. I raised funds, researched and recorded 20 hours of broadcast-quality oral histories with the ENIAC programmers—which eventually became the documentary. It allows others to see the women telling their story.
“If we open the doors to history, I think it would make it a lot easier to recruit the wonderful people we are trying to urge to enter engineering, computer science, and related fields.”
Why was the accomplishment of the six women important?
Kleiman: The ENIAC is considered by many to have launched the information age.
We generally think of women leaving the factory and farm jobs they held during World War II and giving them back to the men, but after ENIAC was completed, the six women continued to work for the U.S. Army. They helped world-class mathematicians program the ENIAC to complete “hundred-year problems” [problems that would take 100 years to solve by hand]. They also helped teach the next generation of ENIAC programmers, and some went on to create the foundations of modern programming.
What influenced you to continue telling the ENIAC programmers’ story in your book?
Kleiman: After my documentary premiered at the Seattle International Film Festival, young women from tech companies who were in the audience came up to me to share why the programmers’ story excited them. They were thrilled to learn that women were an integral part of the history of early computer programming, and were inspired by their stories. Young men also came up to me and shared stories of their grandmothers and great-aunts who programmed computers in the 1960s and ’70s and inspired them to explore careers in computer science.
I met more women and men like the ones in Seattle all over the world, so it seemed like a good idea to tell the full story along with its historical context and background information about the lives of the ENIAC programmers, specifically what happened to them after the computer was completed.
What did you find most rewarding about sharing their story?
Kleiman: It was wonderful and rewarding to get to know the ENIAC programmers. They were incredible, wonderful, warm, brilliant, and exceptional people. Talking to the people who created the programming was inspiring and helped me to see that I could work at the cutting edge too. I entered Internet law as one of the first attorneys in the field because of them.
What I enjoy most is that the women’s experiences inspire young people today just as they inspired me when I was an undergraduate.
Clockwise from top left: Jean Bartik, Kathleen Antonelli, Betty Holberton, Ruth Teitelbaum, Marlyn Meltzer, Frances Spence. Credits, clockwise from top left: The Bartik Family; Bill Mauchly; Priscilla Holberton; Teitelbaum Family; Meltzer Family; Spence Family
Is it important to highlight the contributions made throughout history by women in STEM?
Kleiman: [Actor] Geena Davis founded the Geena Davis Institute on Gender in Media, which works collaboratively with the entertainment industry to dramatically increase the presence of female characters in media. It’s based on the philosophy of “you can’t be what you can’t see.”
That philosophy is both right and wrong. I think you can be what you can’t see, and certainly every pioneer who has ever broken a racial, ethnic, religious, or gender barrier has done so. However, it’s certainly much easier to enter a field if there are role models who look like you. To that end, many computer scientists today are trying to diversify the field. Yet I know from my work in Internet policy and my recent travels across the country for my book tour that many students still feel locked out because of old stereotypes in computing and engineering. By sharing strong stories of pioneers in the fields who are women and people of color, I hope we can open the doors to computing and engineering. I hope the history, and herstory, that is shared makes it much easier to recruit young people to join engineering, computer science, and related fields.
Are you planning on writing more books or producing another documentary?
Kleiman: I would like to continue the story of the ENIAC programmers and write about what happened to them after the war ended. I hope that my next book will delve into the 1950s and uncover more about the history of the Universal Automatic Computer, the first modern commercial computer series, and the diverse group of people who built and programmed it.
Match ID: 11 Score: 15.00 source: spectrum.ieee.org age: 6 days qualifiers: 15.00 travel(|ing)
The vacuum-tube triode wasn’t quite 20 years old when physicists began trying to create its successor, and the stakes were huge. Not only had the triode made long-distance telephony and movie sound possible, it was driving the entire enterprise of commercial radio, an industry worth more than a billion dollars in 1929. But vacuum tubes were power-hungry and fragile. If a more rugged, reliable, and efficient alternative to the triode could be found, the rewards would be immense.
The goal was a three-terminal device made out of semiconductors that would accept a low-current signal into an input terminal and use it to control the flow of a larger current flowing between two other terminals, thereby amplifying the original signal. The underlying principle of such a device would be something called the field effect—the ability of electric fields to modulate the electrical conductivity of semiconductor materials. The field effect was already well known in those days, thanks to diodes and related research on semiconductors.
In the cutaway photo of a point-contact, two thin conductors are visible; these connect to the points that make contact with a tiny slab of germanium. One of these points is the emitter and the other is the collector. A third contact, the base, is attached to the reverse side of the germanium. AT&T ARCHIVES AND HISTORY CENTER
But building such a device had proved an insurmountable challenge to some of the world’s top physicists for more than two decades. Patents for transistor-like devices had been filed starting in 1925, but the first recorded instance of a working transistor was the legendary point-contact device built at AT&T Bell Telephone Laboratories in the fall of 1947.
Though the point-contact transistor was the most important invention of the 20th century, there exists, surprisingly, no clear, complete, and authoritative account of how the thing actually worked. Modern, more robust junction and planar transistors rely on the physics in the bulk of a semiconductor, rather than the surface effects exploited in the first transistor. And relatively little attention has been paid to this gap in scholarship.
It was an ungainly looking assemblage of germanium, plastic, and gold foil, all topped by a squiggly spring. Its inventors were a soft-spoken Midwestern theoretician, John Bardeen, and a voluble and “somewhat volatile” experimentalist, Walter Brattain. Both were working under William Shockley, a relationship that would later prove contentious. In November 1947, Bardeen and Brattain were stymied by a simple problem. In the germanium semiconductor they were using, a surface layer of electrons seemed to be blocking an applied electric field, preventing it from penetrating the semiconductor and modulating the flow of current. No modulation, no signal amplification.
Sometime late in 1947 they hit on a solution. It featured two pieces of barely separated gold foil gently pushed by that squiggly spring into the surface of a small slab of germanium.
Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate. Indeed, the current edition of that bible of undergraduate EEs, The Art of Electronics by Horowitz and Hill, makes no mention of the point-contact transistor at all, glossing over its existence by erroneously stating that the junction transistor was a “Nobel Prize-winning invention in 1947.” But the transistor that was invented in 1947 was the point-contact; the junction transistor was invented by Shockley in 1948.
So it seems appropriate somehow that the most comprehensive explanation of the point-contact transistor is contained within John Bardeen’s lecture for that Nobel Prize, in 1956. Even so, reading it gives you the sense that a few fine details probably eluded even the inventors themselves. “A lot of people were confused by the point-contact transistor,” says Thomas Misa, former director of the Charles Babbage Institute for the History of Science and Technology, at the University of Minnesota.
A year after Bardeen’s lecture, R. D. Middlebrook, a professor of electrical engineering at Caltech who would go on to do pioneering work in power electronics, wrote: “Because of the three-dimensional nature of the device, theoretical analysis is difficult and the internal operation is, in fact, not yet completely understood.”
Nevertheless, and with the benefit of 75 years of semiconductor theory, here we go. The point-contact transistor was built around a thumb-size slab of n-type germanium, which has an excess of negatively charged electrons. This slab was treated to produce a very thin surface layer that was p-type, meaning it had an excess of positive charges. These positive charges are known as holes. They are actually localized deficiencies of electrons that move among the atoms of the semiconductor very much as a real particle would. An electrically grounded electrode was attached to the bottom of this slab, creating the base of the transistor. The two strips of gold foil touching the surface formed two more electrodes, known as the emitter and the collector.
That’s the setup. In operation, a small positive voltage—just a fraction of a volt—is applied to the emitter, while a much larger negative voltage—4 to 40 volts—is applied to the collector, all with reference to the grounded base. The interface between the p-type layer and the n-type slab created a junction just like the one found in a diode: Essentially, the junction is a barrier that allows current to flow easily in only one direction, toward lower voltage. So current could flow from the positive emitter across the barrier, while no current could flow across that barrier into the collector.
The Western Electric Type-2 point-contact transistor was the first transistor to be manufactured in large quantities, in 1951, at Western Electric’s plant in Allentown, Pa. By 1960, when this photo was taken, the plant had switched to producing junction transistors. AT&T ARCHIVES AND HISTORY CENTER
Now, let’s look at what happens down among the atoms. First, we’ll disconnect the collector and see what happens around the emitter without it. The emitter injects positive charges—holes—into the p-type layer, and they begin moving toward the base. But they don’t make a beeline toward it. The thin layer forces them to spread out laterally for some distance before passing through the barrier into the n-type slab. Think about slowly pouring a small amount of fine powder onto the surface of water. The powder eventually sinks, but first it spreads out in a rough circle.
Now we connect the collector. Even though it can’t draw current by itself through the barrier of the p-n junction, its large negative voltage and pointed shape do result in a concentrated electric field that penetrates the germanium. Because the collector is so close to the emitter, and is also negatively charged, it begins sucking up many of the holes that are spreading out from the emitter. This charge flow results in a concentration of holes near the p-n barrier underneath the collector. This concentration effectively lowers the “height” of the barrier that would otherwise prevent current from flowing between the collector and the base. With the barrier lowered, current starts flowing from the base into the collector—much more current than what the emitter is putting into the transistor.
The amount of current depends on the height of the barrier. Small decreases or increases in the emitter’s voltage cause the barrier to fluctuate up and down, respectively. Thus very small changes in the emitter current control very large changes at the collector, so voilà! Amplification. (EEs will notice that the functions of base and emitter are reversed compared with those in later transistors, where the base, not the emitter, controls the response of the transistor.)
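A simplified small-signal view, drawn from standard textbook reasoning rather than from the original account, shows why this arrangement amplifies power even when the current gain is modest:

```latex
% Current gain between emitter and collector:
\alpha \;=\; \frac{\Delta I_C}{\Delta I_E}
% The forward-biased emitter presents a low input resistance r_e,
% while the reverse-biased collector works into a much larger load
% resistance R_L, so the power gain
G_P \;\approx\; \alpha^{2}\,\frac{R_L}{r_e} \;\gg\; 1
% even when \alpha is only modestly greater than 1, as it was in
% early point-contact units.
```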
Ungainly and fragile though it was, it was a semiconductor amplifier, and its progeny would change the world. And its inventors knew it. The fateful day was 16 December 1947, when Brattain hit on the idea of using a plastic triangle belted by a strip of gold foil, with that tiny slit separating the emitter and collector contacts. This configuration gave reliable power gain, and the duo knew then that they had succeeded. In his carpool home that night, Brattain told his companions he’d just done “the most important experiment that I’d ever do in my life” and swore them to secrecy. The taciturn Bardeen, too, couldn’t resist sharing the news. As his wife, Jane, prepared dinner that night, he reportedly said, simply, “We discovered something today.” With their children scampering around the kitchen, she responded, “That’s nice, dear.”
It was a transistor, at last, but it was pretty rickety. The inventors later hit on the idea of electrically forming the collector by passing large currents through it during the transistor’s manufacturing. This technique enabled them to get somewhat larger current flows that weren’t so tightly confined within the surface layer. The electrical forming was a bit hit-or-miss, though. “They would just throw out the ones that didn’t work,” Misa notes.
The Bell Labs group wasn’t alone in its successful pursuit of a transistor. In Aulnay-sous-Bois, a suburb northeast of Paris, two German physicists, Herbert Mataré and Heinrich Welker, were also trying to build a three-terminal semiconductor amplifier. Working for a French subsidiary of Westinghouse, they were following up on very intriguing observations Mataré had made while developing germanium and silicon rectifiers for the German military in 1944. The two succeeded in creating a reliable point-contact transistor in June 1948.
They were astounded, a week or so later, when Bell Labs finally revealed the news of its own transistor, at a press conference on 30 June 1948. Though they were developed completely independently, and in secret, the two devices were more or less identical.
Here the story of the transistor takes a weird turn, breathtaking in its brilliance and also disturbing in its details. Bardeen’s and Brattain’s boss,
William Shockley, was furious that his name was not included with Bardeen’s and Brattain’s on the original patent application for the transistor. He was convinced that Bardeen and Brattain had merely spun his theories about using fields in semiconductors into their working device, and had failed to give him sufficient credit. Yet in 1945, Shockley had built a transistor based on those very theories, and it hadn’t worked.
In 1953, RCA engineer Gerald Herzog led a team that designed and built the first "all-transistor" television (although, yes, it had a cathode-ray tube). The team used point-contact transistors produced by RCA under a license from Bell Labs. TRANSISTOR MUSEUM JERRY HERZOG ORAL HISTORY
At the end of December, barely two weeks after the initial success of the point-contact transistor, Shockley traveled to Chicago for the annual meeting of the American Physical Society. On New Year’s Eve, holed up in his hotel room and fueled by a potent mix of jealousy and indignation, he began designing a transistor of his own. In three days he scribbled
some 30 pages of notes. By the end of the month, he had the basic design for what would become known as the bipolar junction transistor, or BJT, which would eventually supersede the point-contact transistor and reign as the dominant transistor until the late 1970s.
With insights gleaned from the Bell Labs work, RCA began developing its own point-contact transistors in 1948. The group included the seven shown here—four of which were used in RCA's experimental, 22-transistor television set built in 1953. These four were the TA153 [top row, second from left], the TA165 [top, far right], the TA156 [bottom row, middle] and the TA172 [bottom, right]. TRANSISTOR MUSEUM JONATHAN HOPPE COLLECTION
The BJT was based on Shockley’s conviction that charges could, and should, flow through the bulk semiconductors rather than through a thin layer on their surface. The
device consisted of three semiconductor layers, like a sandwich: an emitter, a base in the middle, and a collector. They were alternately doped, so there were two versions: n-type/p-type/n-type, called “NPN,” and p-type/n-type/p-type, called “PNP.”
The BJT relies on essentially the same principles as the point-contact, but it uses two
p-n junctions instead of one. When used as an amplifier, a positive voltage applied to the base allows a small current to flow between it and the emitter, which in turn controls a large current between the collector and emitter.
Consider an NPN device. The base is
p-type, so it has excess holes. But it is very thin and lightly doped, so there are relatively few holes. A tiny fraction of the electrons flowing in combines with these holes and is removed from circulation, while the vast majority (more than 97 percent) of electrons keep flowing through the thin base and into the collector, setting up a strong current flow.
But those few electrons that do combine with holes must be drained from the base in order to maintain the
p-type nature of the base and the strong flow of current through it. That removal of the “trapped” electrons is accomplished by a relatively small flow of current through the base. That trickle of current enables the much stronger flow of current into the collector, and then out of the collector and into the collector circuit. So, in effect, the small base current is controlling the larger collector circuit.
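The arithmetic behind that "small base current controls the large collector current" claim can be sketched in a few lines. The 97 percent figure from the article implies a common-emitter current gain; the specific numbers below (alpha = 0.98, a 0.1 mA base current) are illustrative assumptions, not values from the text.

```python
# Illustrative sketch of BJT current gain, not measured values.
# alpha: fraction of emitter electrons that cross the base into the
# collector (the article cites "more than 97 percent").
alpha = 0.98
# beta: common-emitter current gain, i.e. collector current per unit
# of base current. The few electrons lost to holes in the base are
# what the base current must replenish.
beta = alpha / (1 - alpha)

def collector_current(base_current_ma):
    """Collector current (mA) sustained by a much smaller base current."""
    return beta * base_current_ma

# A 0.1 mA trickle into the base sustains ~4.9 mA of collector current.
print(round(beta))                      # gain of ~49
print(round(collector_current(0.1), 1))
```

The closer alpha gets to 1 (a thinner, more lightly doped base), the larger the gain, which is why the base layer's geometry mattered so much.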
Electric fields come into play, but they do not modulate the current flow, which the early theoreticians thought would have to happen for such a device to function. Here’s the gist: Both of the
p-n junctions in a BJT are straddled by depletion regions, in which electrons and holes combine and there are relatively few mobile charge carriers. Voltage applied across the junctions sets up electric fields at each, which push charges across those regions. These fields enable electrons to flow all the way from the emitter, across the base, and into the collector.
In the BJT, “the applied electric fields affect the carrier density, but because that effect is exponential, it only takes a little bit to create a lot of diffusion current,” explains Ioannis “John” Kymissis, chair of the department of electrical engineering at Columbia University.
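Kymissis's point about exponential sensitivity can be illustrated with the textbook diode-law relation between base-emitter voltage and collector current. The saturation current and voltages below are typical illustrative values, not figures from the article.

```python
import math

# Textbook relation I_C ≈ I_S * exp(V_BE / V_T); the constants here are
# illustrative order-of-magnitude values, not from the article.
V_T = 0.02585   # thermal voltage at room temperature, volts
I_S = 1e-14     # saturation current, amps

def collector_current(v_be):
    """Collector current (A) for a given base-emitter voltage (V)."""
    return I_S * math.exp(v_be / V_T)

# An extra 60 mV of base-emitter voltage multiplies the current ~10x:
# "a little bit" of field change creating "a lot of diffusion current."
gain = collector_current(0.66) / collector_current(0.60)
print(round(gain))  # ~10
```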
The very first transistors were a type known as point contact, because they relied on metal contacts touching the surface of a semiconductor. They ramped up output current—labeled “Collector current” in the top diagram—by using an applied voltage to overcome a barrier to charge flow. Small changes to the input, or “emitter,” current modulate this barrier, thus controlling the output current.
The bipolar junction transistor accomplishes amplification using much the same principles but with two semiconductor interfaces, or junctions, rather than one. As with the point-contact transistor, an applied voltage overcomes a barrier and enables current flow that is modulated by a smaller input current. In particular, the semiconductor junctions are straddled by depletion regions, across which the charge carriers diffuse under the influence of an electric field. Chris Philpot
The BJT was more rugged and reliable than the point-contact transistor, and those features primed it for greatness. But it took a while for that to become obvious. The BJT was the technology used to make integrated circuits, from the first ones in the early 1960s all the way until the late 1970s, when metal-oxide-semiconductor field-effect transistors (MOSFETs) took over. In fact, it was these field-effect transistors, first the junction field-effect transistor and then MOSFETs, that finally realized the decades-old dream of a three-terminal semiconductor device whose operation was based on the field effect—Shockley’s original ambition.
Such a glorious future could scarcely be imagined in the early 1950s, when AT&T and others were struggling to come up with practical and efficient ways to manufacture the new BJTs. Shockley himself went on to literally put the silicon into Silicon Valley. He moved to Palo Alto and in 1956 founded a company that led the switch from germanium to silicon as the electronic semiconductor of choice. Employees from his company would go on to found Fairchild Semiconductor, and then Intel.
Later in his life, after losing his company because of his terrible management, he became a professor at Stanford and began promulgating ungrounded and unhinged theories about race, genetics, and intelligence. In 1951 Bardeen left Bell Labs to become a professor at the University of Illinois at Urbana-Champaign, where he won a second Nobel Prize for physics, for a theory of superconductivity. (He is the only person to have won two Nobel Prizes in physics.) Brattain stayed at Bell Labs until 1967, when he joined the faculty at Whitman College, in Walla Walla, Wash.
Shockley died a largely friendless pariah in 1989. But his transistor would change the world, though it was still not clear as late as 1953 that the BJT would be the future. In an interview that year,
Donald G. Fink, who would go on to help establish the IEEE a decade later, mused, “Is it a pimpled adolescent, now awkward, but promising future vigor? Or has it arrived at maturity, full of languor, surrounded by disappointments?”
It was the former, and all of our lives are so much the better because of it.
This article appears in the December 2022 print issue as “The First Transistor and How it Worked.”
Match ID: 12 Score: 10.00 source: spectrum.ieee.org age: 7 days qualifiers: 10.00 travel(|ing)
‘Twas the day before launch and all across the globe, people await liftoff for Artemis I with hope.
NASA’s Space Launch System (SLS) rocket and the Orion spacecraft with its European Service Module, is seen here on Launch Pad 39B at NASA's Kennedy Space Center in Florida, USA, on 12 November.
After much anticipation, NASA launch authorities have given the GO for the first opportunity for launch: tomorrow, 16 November with a two-hour launch window starting at 07:04 CET (06:04 GMT, 1:04 local time).
Artemis I is the first mission in a large programme to send astronauts around and on the Moon sustainably. This uncrewed first launch will see the Orion spacecraft travel to the Moon, enter an elongated orbit around our satellite and then return to Earth, powered by the European-built service module that supplies electricity, propulsion, fuel, water and air as well as keeping the spacecraft operating at the right temperature.
The European Service Modules are made from components supplied by over 20 companies in ten ESA Member States and the USA. As the first European Service Module sits atop the SLS rocket on the launchpad, the second is only 8 km away, being integrated with the Orion crew capsule for the first crewed mission – Artemis II. The third and fourth European Service Modules – which will power astronauts to a Moon landing – are in production in Bremen, Germany.
With a 16 November launch, the three-week Artemis I mission would end on 11 December with a splashdown in the Pacific Ocean. The European Service Module detaches from the Orion Crew Module before splashdown and burns up harmlessly in the atmosphere, its job complete after taking Orion to the Moon and back safely.
Backup Artemis I launch dates include 19 November. Check ESA’s Orion blog for updates and more details. Watch the launch live on ESA Web TV from 15 Nov, 20:30 GMT (21:30 CET) when the rocket fuelling starts, and from 16 November 00:00 GMT/01:00 CET for the launch coverage.
Match ID: 14 Score: 5.00 source: www.esa.int age: 12 days qualifiers: 5.00 travel(|ing)
From the outside, there is little to tell a basic Ford XL ICE
F-150 from the electric Ford PRO F-150 Lightning. Exterior changes could pass for a typical model-year refresh. While there are LED headlight and rear-light improvements along with a more streamlined profile, the Lightning’s cargo box is identical to that of an ICE F-150, complete with tailgate access steps and a jobsite ruler. The Lightning’s interior also has a familiar feel.
But when you pop the Lightning’s hood, you find that the internal combustion engine has gone missing. In its place is a
front trunk (“frunk”), while concealed beneath is the new skateboard frame with its dual electric motors (one for each axle) and a big 98-kilowatt-hour standard (or 131-kWh extended-range) battery pack. The combination permits the Lightning to travel 230 miles (370 kilometers) without recharging and go from 0 to 60 miles per hour in 4.5 seconds, making it the fastest F-150 available despite its considerably greater weight.
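As a rough check on the figures above, the standard pack size and quoted range imply the truck's energy consumption per mile. This is a back-of-envelope estimate from the article's numbers, not an official Ford specification.

```python
# Back-of-envelope energy-use estimate from the article's figures.
standard_pack_kwh = 98   # standard battery pack capacity
range_miles = 230        # quoted range on a full charge

kwh_per_mile = standard_pack_kwh / range_miles
print(round(kwh_per_mile, 2))  # ~0.43 kWh per mile
```

For comparison's sake, that is several times the per-mile consumption of a small EV sedan, which is what a heavy pickup's aerodynamics and mass would lead one to expect.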
Invisible, too, are the Lightning’s sophisticated computing and software systems. The 2016 ICE F-150 reportedly had about
150 million lines of code. The Lightning’s software suite may even be larger than its ICE counterpart (Ford will not confirm this). The Lightning replaces the Ford F-150 ICE-related software in the electronic control units (ECUs) with new “intelligent” software and systems that control the main motors, manage the battery system, and provide charging information to the driver.
The EV Transition Explained
This is the first in a series of articles presenting just some of the technological and social challenges in moving from vehicles with internal-combustion engines to electric vehicles. These must be addressed at scale before EVs can happen. Each challenge entails a multitude of interacting systems, subsystems, sub-subsystems, and so on. In reviewing each article, readers should bear in mind Nobel Prize–winning physicist Richard Feynman’s admonition: “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.”
Ford says the Lightning’s software will identify nearby public charging stations and tell drivers when to recharge. To increase the accuracy of the range calculation, the software will draw upon similar operational data communicated from other Lightning owners, which Ford will dynamically capture, analyze, and feed back to the truck.
For executives, however, Lightning’s software is not only a big consumer draw but also among the biggest threats to its success. Ford CEO Jim Farley
told the New York Times that software bugs worry him most. To mitigate the risk, Ford has incorporated an over-the-air (OTA) software-update capability for both bug fixes and feature upgrades. Yet with an incorrect setting in the Lightning’s tire-pressure monitoring system requiring a software fix only a few weeks after its initial delivery, and with some new Ford Mustang Mach-Es recalled because of misconfigured software caused by a “service update or as an over-the-air update,” Farley’s worries probably won’t be soothed for some time.
The F-150 Lightning's front trunk (also known as a frunk) helps this light-duty electric pickup haul even more.
However, long-term success is not guaranteed. “Ford is walking a tightrope, trying at the same time to convince everyone that EVs are the same as ICE vehicles yet different,” says
University of Michigan professor emeritus John Leslie King, who has long studied the auto industry. Ford and other automakers will need to convince tens of millions of customers to switch to EVs to meet the Biden Administration’s decarbonization goals of 50 percent new auto sales being non-ICE vehicles by 2030.
King points out that neither Ford nor other automakers can forever act like EVs are merely interchangeable with—but more ecofriendly than—their ICE counterparts. As EVs proliferate at scale, they operate in a vastly different technological, political, and social ecosystem than ICE vehicles. The core technologies and requisite expertise, supply-chain dependencies, and political alliances are different. The expectations of and about EV owners, and their agreement to change their lifestyles, also differ significantly.
Indeed, the challenges posed by the transition from ICE vehicles to EVs at scale are significantly larger in scope and more complex than the policymakers setting the regulatory timeline appreciate. The systems-engineering task alone is enormous, with countless interdependencies outside policymakers’ control, and it rests on optimistic assumptions about promising technologies and wished-for changes in human behavior. The risk of getting it wrong, and of the negative environmental and economic consequences that would follow, is high. In this series, we will break down the myriad infrastructure, policy, and social challenges involved, drawing on discussions with numerous industry insiders and industry watchers. Let’s take a look at some of the elemental challenges blocking the road ahead for EVs.
The soft car
For Ford and the other automakers that have shaped the ICE vehicle ecosystem for more than a century, ultimate success is beyond the reach of the traditional political, financial, and technological levers they once controlled.
Renault chief executive Luca de Meo, for example, is quoted in the Financial Times as saying that automakers must recognize that “the game has changed,” and they will “have to play by new rules” dictated by the likes of mining and energy companies.
One reason for the new rules, observes professor
Deepak Divan, the director of the Center for Distributed Energy at Georgia Tech, is that the EV transition is “a subset of the energy transition” away from fossil fuels. On the other hand, futurist Peter Schwartz contends that the entire electric system is part of the EV supply chain. These alternative framings highlight the strong codependencies involved. Consequently, automakers will be competing not only against other EV manufacturers but also against the numerous players in the energy transition aiming to grab the same scarce resources and talent.
“Ford is walking a tightrope, trying at the same time to convince everyone that EVs are the same as ICE vehicles yet different.” —John Leslie King
EVs represent a new class of cyberphysical systems that unify the physical with information technology, allowing them to sense, process, act, and communicate in real time within a large transportation ecosystem, as I have
noted in detail elsewhere. While computing in ICE vehicles typically optimizes a car’s performance at the time of sale, EV-based cyberphysical systems are designed to evolve as they are updated and upgraded, postponing their obsolescence.
“As an automotive company, we’ve been trained to put vehicles out when they’re perfect,” Ford’s Farley told the
New York Times. “But with software, you can change it with over-the-air updates.” This allows new features to be introduced in existing models instead of waiting for next year’s model to appear. Farley sees Ford spending much less effort on changing vehicles’ physical properties and devoting more to upgrading their software capabilities in the future.
Systems engineering for holistic solutions
EV success at scale depends as much, if not more, on political decisions as on technical ones. Government decision-makers in the United States at both the state and federal level, for instance, have
created EV market incentives and set increasingly aggressive dates to sunset ICE vehicle sales, regardless of whether the technological infrastructure needed to support EVs at scale actually exists. While passing public policy can set a direction, it does not guarantee that engineering results will be available when needed.
“A systems-engineering approach towards managing the varied and often conflicting interests of the many stakeholders involved will be necessary to find a workable solution.” —Chris Paredis
With an estimated $1.2 trillion committed so far through 2030 toward decarbonizing the planet, automakers are understandably wary not only of the fast reconfiguration of the auto industry but also of the concurrent changes required in the energy, telecom, mining, recycling, and transportation industries that must succeed for their investments to pay off.
The EV transition is part of an unprecedented, planetary-wide, cyberphysical systems-engineering project with massive potential benefits as well as costs. Considering the sheer magnitude, interconnectedness, and uncertainties presented by the concurrent technological, political, and social changes necessary, the EV transition will undoubtedly be messy.
This chart from the
Global EV Outlook 2021, IEA, Paris shows 2020 EV sales in the first column; in the second column, projected sales under current climate-mitigation policies; in the third column, projected sales under accelerated climate-mitigation policies.
How many stumbles and how long the transition will take depend on whether the multitude of challenges involved are fully recognized and realistically addressed.
“Everyone needs to stop thinking in silos. It is the adjacency interactions that are going to kill you.” —Deepak Divan
“A systems-engineering approach towards managing the varied and often conflicting interests of the many stakeholders involved will be necessary to find a workable solution,” says
Chris Paredis, the BMW Endowed Chair in Automotive Systems Integration at Clemson University. The range of engineering-infrastructure improvements needed to support EVs, for instance, “will need to be coordinated at a national/international level beyond what can be achieved by individual companies,” he states.
If the nitty gritty but hard-to-solve issues are glossed over or ignored, or if EV expectations are
hyped beyond the market’s capability to deliver, no one should be surprised by a backlash against EVs, making the transition more difficult.
What has not yet been proven, but is widely assumed, is that battery electric vehicles (BEVs) can rapidly replace the majority of the
current 1.3 billion-plus light-duty ICE vehicles. The interrelated challenges involving EV engineering infrastructure, policy, and societal acceptance, however, will test how well this assumption holds true.
Therefore, the successful transition to EVs at scale demands a “holistic approach,” emphasizes Georgia Tech’s Deepak Divan. “Everyone needs to stop thinking in silos. It is the adjacency interactions that are going to kill you.”
“We cannot foresee all the details needed to make the EV transition successful,” John Leslie King says. “While there’s a reason to believe we will get there, there’s less reason to believe we know the way. It is going to be hard.”
In the next article in the series, we will look at the complexities introduced by trading our dependence on oil for our dependence on batteries.
Match ID: 15 Score: 5.00 source: spectrum.ieee.org age: 14 days qualifiers: 5.00 travel(|ing)
Collective Mental Time Travel Can Influence the Future Wed, 09 Nov 2022 13:00:00 +0000 The way people imagine the past and future of society can sway attitudes and behaviors. How might this be wielded for good? Match ID: 16 Score: 5.00 source: www.wired.com age: 18 days qualifiers: 5.00 travel(|ing)
Collisions with birds are a serious problem for commercial aircraft, costing the industry billions of dollars and killing thousands of animals every year. New research shows that a robotic imitation of a peregrine falcon could be an effective way to keep them out of flight paths.
Worldwide, so-called birdstrikes are estimated to cost the civil aviation industry almost US $1.4 billion annually. Nearby habitats are often deliberately made unattractive to birds, but airports also rely on a variety of deterrents designed to scare them away, such as loud pyrotechnics or speakers that play distress calls from common species.
However, the effectiveness of these approaches tends to decrease over time, as the birds get desensitized by repeated exposure, says Charlotte Hemelrijk, a professor on the faculty of science and engineering at the University of Groningen, in the Netherlands. Live hawks or blinding lasers are also sometimes used to disperse flocks, she says, but this is controversial as it can harm the animals, and keeping and training falcons is not cheap.
“The birds don’t distinguish [RobotFalcon] from a real falcon, it seems.” —Charlotte Hemelrijk, University of Groningen
In an effort to find a more practical and lasting solution, Hemelrijk and colleagues designed a robotic peregrine falcon that can be used to chase flocks away from airports. The device is the same size and shape as a real hawk, and its fiberglass and carbon-fiber body has been painted to mimic the markings of its real-life counterpart.
Rather than flapping like a bird, the RobotFalcon relies on two small battery-powered propellers on its wings, which allows it to travel at around 30 miles per hour for up to 15 minutes at a time. A human operator controls the machine remotely from a hawk’s-eye perspective via a camera perched above the robot’s head.
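Taken together, those two figures bound how far the RobotFalcon can fly per sortie. This is a simple back-of-envelope calculation from the stated specs, not a number reported by the researchers.

```python
# Distance bound per flight from the RobotFalcon's stated specs.
speed_mph = 30          # cruise speed, miles per hour
endurance_hours = 15 / 60  # 15 minutes of battery endurance

max_distance_miles = speed_mph * endurance_hours
print(max_distance_miles)  # 7.5 miles per sortie
```

In practice the useful patrol radius is smaller, since the operator must also fly the robot back for recovery within the same battery budget.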
To see how effective the RobotFalcon was at scaring away birds, the researchers tested it against a conventional quadcopter drone over three months of field testing, near the Dutch city of Workum. They also compared their results to 15 years of data collected by the Royal Netherlands Air Force that assessed the effectiveness of conventional deterrence methods such as pyrotechnics and distress calls.
In a paper published in the Journal of the Royal Society Interface, the team showed that the RobotFalcon cleared fields of birds faster and more effectively than the drone. It also kept birds away from fields longer than distress calls, the most effective of the conventional approaches.
There was no evidence of birds getting habituated to the RobotFalcon over three months of testing, says Hemelrijk, and the researchers also found that the birds exhibited behavior patterns associated with escaping from predators much more frequently with the robot than with the drone. “The way of reacting to the RobotFalcon is very similar to the real falcon,” says Hemelrijk. “The birds don’t distinguish it from a real falcon, it seems.”
Other attempts to use hawk-imitating robots to disperse birds have had less promising results, though. Morgan Drabik-Hamshare, a research wildlife biologist at the DoA, and her colleagues published a paper in Scientific Reports last year that described how they pitted a robotic peregrine falcon with flapping wings against a quadcopter and a fixed-wing remote-controlled aircraft.
They found the robotic falcon was the least effective of the three at scaring away turkey vultures, with the quadcopter scaring the most birds off and the remote-controlled plane eliciting the quickest response. “Despite the predator silhouette, the vultures did not perceive the predator UAS [unmanned aircraft system] as a threat,” Drabik-Hamshare wrote in an email.
Zihao Wang, an associate lecturer at the University of Sydney, in Australia, who develops UAS for bird deterrence, says the RobotFalcon does seem to be effective at dispersing flocks. But he points out that its wingspan is nearly twice the diagonal length of the quadcopter it was compared with, which means it creates a much larger silhouette when viewed from the birds’ perspective. This means the birds could be reacting more to its size than its shape, and he would like to see the RobotFalcon compared with a similar size drone in the future.
The unique design also means the robot requires an experienced and specially trained operator, Wang adds, which could make it difficult to roll out widely. A potential solution could be to make the system autonomous, he says, but it’s unclear how easy this would be.
Hemelrijk says automating the RobotFalcon is probably not feasible, both due to strict regulations around the use of autonomous drones near airports as well as the sheer technical complexity. Their current operator is a falconer with significant experience in how hawks target their prey, she says, and creating an autonomous system that could recognize and target bird flocks in a similar way would be highly challenging.
But while the need for skilled operators is a limitation, Hemelrijk points out that most airports already have full-time staff dedicated to bird deterrence, who could be trained. And given the apparent lack of habituation and the ability to chase birds in a specific direction—so that they head away from runways—she thinks the robotic falcon could be a useful addition to their arsenal.
Match ID: 17 Score: 5.00 source: spectrum.ieee.org age: 21 days qualifiers: 5.00 travel(|ing)
From virtual showrooms to cutting-edge tech, the all-electric CUPRA Born is showing what the next generation of business travel looks like
Looking at a new company car online and checking one out in a showroom have, up until now, been two very separate experiences – neither of which are ideal. Sitting at home in front of your computer screen will allow you to spec a vehicle. You might be able to give it a 360-degree spin if the manufacturer’s website features all the bells and whistles, but you won’t really get much of a feel for your potential new car; and you’ll have to go digging through the rest of the website to find answers to any specific questions you may have. Visiting a showroom, on the other hand, will get you up close and personal to the vehicle, but you have to physically get to the dealership in the first place.
In a best-of-both worlds approach, CUPRA is combining the website and showroom experiences into one single process. In the market for a new company car, for example the Born all-electric vehicle? Then visit the new CUPRA Virtual Showroom and you’ll be able to get a live tour of the car online – through your computer or phone – with a product expert showing you around the vehicle’s exterior and interior, taking you through its numerous features and answering all the questions you can think of. No waiting around, no wasted time: click the link, set up an appointment and a CUPRA agent will send you a message, connect you to an audio and video session, and you’re ready to go.
You can direct the agent through the car as you wish, and sessions can be as brief or as detailed as you need, lasting from just a few minutes to an hour. It’s totally up to you. And the experience itself is impressive. Being able to guide the agent around the car, essentially via a video call, allows you to see what you want to see of the vehicle in clear, close-up detail, as well as witnessing the interior tech being put to use in real time. In the modern hybrid working landscape, where Zoom calls are now the norm, the CUPRA Virtual Showroom has successfully plugged itself into the zeitgeist.
“It’s pretty innovative,” says Martin Gray, CUPRA’s UK contract hire and leasing manager. “We’ve had great reactions from customers so far. It really works for the Born, as the car is so different from others in its class. Because of the way it looks, and because of its technology and the way the dashboard is set up, people really want to get a good look at it. And in a climate where supply of actual physical vehicles has become a real issue, this gives more people the opportunity to see the Born up close and personal.”
Continue reading... Match ID: 18 Score: 5.00 source: www.theguardian.com age: 37 days qualifiers: 5.00 travel(|ing)
As climate change edges from crisis to emergency, the aviation sector looks set to miss its 2050 goal of net-zero emissions. In the five years preceding the pandemic, the top four U.S. airlines—American, Delta, Southwest, and United—saw a 15 percent increase in the use of jet fuel. Despite continual improvements in engine efficiencies, that number is projected to keep rising.
A glimmer of hope, however, comes from solar fuels. For the first time, scientists and engineers at the Swiss Federal Institute of Technology (ETH) in Zurich have reported a successful demonstration of an integrated fuel-production plant for solar kerosene. Using concentrated solar energy, they were able to produce kerosene from water vapor and carbon dioxide directly from air. Fuel thus produced is a drop-in alternative to fossil-derived fuels and can be used with existing storage and distribution infrastructures, and engines.
Fuels derived from synthesis gas (or syngas)—an intermediate product that is a specific mixture of carbon monoxide and hydrogen—are a known alternative to conventional, fossil-derived fuels. Syngas is converted to liquid fuel by Fischer-Tropsch (FT) synthesis, in which chemical reactions turn carbon monoxide and hydrogen into hydrocarbons. The team of researchers at ETH found that a solar-driven thermochemical method to split water and carbon dioxide using a metal oxide redox cycle can produce renewable syngas. They demonstrated the process in a rooftop solar refinery at the ETH Machine Laboratory in 2019.
Reticulated porous structure made of ceria used in the solar reactor to thermochemically split CO2 and H2O and produce syngas, a specific mixture of H2 and CO. ETH Zurich
The current pilot-scale solar tower plant was set up at the IMDEA Energy Institute in Spain. It scales up the solar reactor of the 2019 experiment by a factor of 10, says Aldo Steinfeld, an engineering professor at ETH who led the study. The fuel plant brings together three subsystems—the solar tower concentrating facility, solar reactor, and gas-to-liquid unit.
First, a heliostat field made of mirrors that rotate to follow the sun concentrates solar irradiation into a reactor mounted on top of the tower. The reactor is a cavity receiver lined with reticulated porous ceramic structures made of ceria (or cerium(IV) oxide). Within the reactor, the concentrated sunlight creates a high-temperature environment of about 1,500 °C which is hot enough to split captured carbon dioxide and water from the atmosphere to produce syngas. Finally, the syngas is processed to kerosene in the gas-to-liquid unit. A centralized control room operates the whole system.
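The ceria chemistry at work in the reactor follows the standard two-step redox cycle; the general form is shown below, where δ denotes the oxygen non-stoichiometry (the paper's exact operating value of δ is not given here).

```latex
% Step 1 — thermal reduction (concentrated sunlight drives off oxygen):
\mathrm{CeO_2 \;\longrightarrow\; CeO_{2-\delta} + \tfrac{\delta}{2}\,O_2}

% Step 2 — oxidation (splitting H2O and CO2 to yield syngas):
\mathrm{CeO_{2-\delta} + \delta\,H_2O \;\longrightarrow\; CeO_2 + \delta\,H_2}
\mathrm{CeO_{2-\delta} + \delta\,CO_2 \;\longrightarrow\; CeO_2 + \delta\,CO}
```

The reduced ceria is regenerated in step 2, so the oxide is recycled while the net inputs are sunlight, water, and carbon dioxide, and the net outputs are H2, CO, and O2.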
Fuel produced using this method closes the fuel carbon cycle as it only produces as much carbon dioxide as has gone into its manufacture. “The present pilot fuel plant is still a demonstration facility for research purposes,” says Steinfeld, “but it is a fully integrated plant and uses a solar-tower configuration at a scale that is relevant for industrial implementation.”
“The solar reactor produced syngas with selectivity, purity, and quality suitable for FT synthesis,” the authors noted in their paper. They also reported good material stability for multiple consecutive cycles. They observed a value of 4.1 percent solar-to-syngas energy efficiency, which Steinfeld says is a record value for thermochemical fuel production, even though better efficiencies are required to make the technology economically competitive.
A heliostat field concentrates solar radiation onto a solar reactor mounted on top of the solar tower. The solar reactor cosplits water and carbon dioxide and produces a mixture of molecular hydrogen and carbon monoxide, which in turn is processed to drop-in fuels such as kerosene. ETH Zurich
“The measured value of energy conversion efficiency was obtained without any implementation of heat recovery,” he says. The heat rejected during the redox cycle of the reactor accounted for more than 50 percent of the solar-energy input. “This fraction can be partially recovered via thermocline heat storage. Thermodynamic analyses indicate that sensible heat recovery could potentially boost the energy efficiency to values exceeding 20 percent.”
To do so, more work is needed to optimize the ceramic structures lining the reactor, something the ETH team is actively working on, by looking at 3D-printed structures for improved volumetric radiative absorption. “In addition, alternative material compositions, that is, perovskites or aluminates, may yield improved redox capacity, and consequently higher specific fuel output per mass of redox material,” Steinfeld adds.
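As a rough illustration of how heat recovery lifts efficiency, the first-order arithmetic can be sketched in a few lines of Python. The recovery fraction below is an illustrative assumption, and this deliberately simple model understates the gains projected by the paper's full thermodynamic analysis:

```python
def solar_to_syngas_efficiency(base_eta, rejected_frac, recovered_frac):
    """First-order estimate of efficiency with partial heat recovery.

    base_eta: measured efficiency with no heat recovery (e.g., 0.041)
    rejected_frac: fraction of solar input lost as rejected heat
    recovered_frac: fraction of that rejected heat returned via storage
    """
    # Recovered heat reduces the net solar input needed per unit of fuel.
    net_input = 1.0 - rejected_frac * recovered_frac
    return base_eta / net_input

print(round(solar_to_syngas_efficiency(0.041, 0.5, 0.0), 3))  # 0.041, no recovery
print(round(solar_to_syngas_efficiency(0.041, 0.5, 0.8), 3))  # 0.068, with 80% recovery
```

Even generous recovery of the rejected heat only raises this first-order estimate to a few percent; reaching the projected 20-plus percent also depends on the reactor and material improvements described above.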
The next challenge for the researchers, he says, is the scale-up of their technology for higher solar-radiative power inputs, possibly using an array of solar cavity-receiver modules on top of the solar tower.
To bring solar kerosene into the market, Steinfeld envisages a quota-based system. “Airlines and airports would be required to have a minimum share of sustainable aviation fuels in the total volume of jet fuel that they put in their aircraft,” he says. This is possible as solar kerosene can be mixed with fossil-based kerosene. This would start out small, as little as 1 or 2 percent, which would raise the total fuel costs at first, though minimally—adding “only a few euros to the cost of a typical flight,” as Steinfeld puts it.
Meanwhile, rising quotas would lead to investment, and to falling costs, eventually replacing fossil-derived kerosene with solar kerosene. “By the time solar jet fuel reaches 10 to 15 percent of the total jet-fuel volume, we ought to see the costs for solar kerosene nearing those of fossil-derived kerosene,” he adds.
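The economics of such a quota reduce to simple blending arithmetic. In this sketch the per-litre prices are placeholders, not figures from Steinfeld or the study:

```python
def blended_price(quota, solar_price, fossil_price):
    """Average jet-fuel price with a mandated solar-kerosene share (quota)."""
    return quota * solar_price + (1.0 - quota) * fossil_price

# Placeholder prices in EUR per litre: solar kerosene starts far above fossil.
fossil, solar = 0.8, 4.0
for quota in (0.01, 0.02, 0.10):
    print(f"{quota:.0%} quota -> {blended_price(quota, solar, fossil):.3f} EUR/L")
```

Even with solar kerosene several times the price of fossil kerosene, a 1 or 2 percent quota moves the blended price by only a few percent, which is why the early per-flight cost impact stays small.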
However, we may not have to wait too long for flights to operate solely on solar fuel. A commercial spin-off of Steinfeld’s laboratory, Synhelion, is working on commissioning the first industrial-scale solar fuel plant in 2023. The company has also collaborated with the airline SWISS to conduct a flight solely using its solar kerosene.
Match ID: 19 Score: 5.00 source: spectrum.ieee.org age: 116 days qualifiers: 5.00 travel(|ing)
Quantum signals may possess a number of advantages over regular forms of communication, leading scientists to wonder if humanity was not alone in discovering such benefits. Now a new study suggests that, for hypothetical extraterrestrial civilizations, quantum transmissions using X-rays may be possible across interstellar distances.
Quantum communication relies on a quantum phenomenon known as entanglement. Essentially, two or more particles such as photons that get “linked” via entanglement can, in theory, influence each other instantly no matter how far apart they are.
Entanglement is essential to quantum teleportation, in which data can essentially disappear in one place and reappear someplace else. Because this information does not travel across the intervening space, there is no chance it will be lost along the way.
To accomplish quantum teleportation, one would first entangle two photons. Then, one of the photons—the one to be teleported—is kept at one location while the other is beamed to whatever destination is desired.
Next, the quantum state of the photon at the destination—which defines its key characteristics—is analyzed, an act that also destroys that state. Entanglement will lead the destination photon to prove identical to its partner. For all intents and purposes, the photon at the origin point has “teleported” to the destination point—no physical matter moved, but the two photons are physically indistinguishable.
And to be clear, quantum teleportation cannot send information faster than the speed of light, because the destination photon must still be transmitted via conventional means.
One weakness of quantum communication is that entanglement is fragile. Still, researchers have successfully transmitted entangled photons that remained stable or “coherent” enough for quantum teleportation across distances as great as 1,400 kilometers.
“If photons in Earth’s atmosphere don’t decohere to 100 km, then in interstellar space, where the medium is much less dense than our atmosphere, photons won’t decohere up to even the size of the galaxy,” says Arjun Berera, a physicist at the University of Edinburgh who coauthored the study.
In the new study, the researchers investigated whether and how well quantum communication might survive interstellar distances. Quantum signals might face disruption from a number of factors, such as the gravitational pull of interstellar bodies, they note.
The scientists discovered the best quantum communication channels for interstellar messages are X-rays. Such frequencies are easier to focus and detect across interstellar distances. (NASA has tested deep-space X-ray communication with its
XCOM experiment.) The researchers also found that the optical and microwave bands could enable communication across large distances as well, albeit less effectively than X-rays.
Although coherence might survive interstellar distances, Berera does note quantum signals might lose fidelity. “This means the quantum state is sustained, but it can have a phase shift, so although the quantum information is preserved in these states, it has been altered by the effect of gravity.” Therefore, it may “take some work at the receiving end to account for these phase shifts and be able to assess the information contained in the original state.”
Why might an interstellar civilization transmit quantum signals as opposed to regular ones? The researchers note that quantum communication may allow greater data compression and, in some cases, exponentially faster speeds than classical channels. Such a boost in efficiency might prove very useful for civilizations separated by interstellar distances.
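One concrete example of that compression advantage is superdense coding, in which a single transmitted qubit, combined with a pre-shared entangled partner, delivers two classical bits. A minimal NumPy simulation of the ideal, noiseless protocol (a textbook illustration, not taken from the paper) looks like this:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
# CNOT with the first qubit as control
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def superdense_send(b1, b2):
    """Encode two classical bits on one qubit of a shared Bell pair."""
    # Shared Bell pair (|00> + |11>)/sqrt(2)
    bell = CNOT @ (np.kron(H, I2) @ np.array([1, 0, 0, 0], dtype=complex))
    # Alice applies Z^b1 X^b2 to her qubit, then sends that one qubit to Bob.
    enc = np.linalg.matrix_power(Z, b1) @ np.linalg.matrix_power(X, b2)
    state = np.kron(enc, I2) @ bell
    # Bob's Bell-basis measurement: CNOT, then H on the first qubit.
    state = np.kron(H, I2) @ (CNOT @ state)
    idx = int(np.argmax(np.abs(state) ** 2))
    return idx >> 1, idx & 1

# Every two-bit message is recovered from a single transmitted qubit.
for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert superdense_send(*bits) == bits
```

Classically, one transmitted bit can carry at most one bit of information; here the pre-shared entanglement doubles the channel's capacity per transmitted particle.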
“It could be that quantum communication is the main communication mode in an extraterrestrial's world, so they just apply what is at hand to send signals into the cosmos,” Berera says.
The scientists detailed their findings online 28 June in the journal Physical Review D.
Match ID: 20 Score: 5.00 source: spectrum.ieee.org age: 132 days qualifiers: 5.00 travel(|ing)
When the James Webb Space Telescope (JWST) reveals its first images on 12 July, they will be the by-product of carefully crafted mirrors and scientific instruments. But all of its data-collecting prowess would be moot without the spacecraft’s communications subsystem.
The Webb’s comms aren’t flashy. Rather, the data and communication systems are designed to be incredibly, unquestionably dependable and reliable. And while some aspects of them are relatively new—it’s the first mission to use Ka-band frequencies for such high data rates so far from Earth, for example—above all else, JWST’s comms provide the foundation upon which JWST’s scientific endeavors sit.
As previous articles in this series have noted, JWST is parked at
Lagrange point L2. It’s a point of gravitational equilibrium located about 1.5 million kilometers beyond Earth on a straight line between the planet and the sun. It’s an ideal location for JWST to observe the universe without obstruction and with minimal orbital adjustments.
Being so far away from Earth, however, means that data has farther to travel to make it back in one piece. It also means the communications subsystem needs to be reliable, because the prospect of a repair mission being sent to address a problem is, for the near term at least, highly unlikely. Given the cost and time involved, says
Michael Menzel, the mission systems engineer for JWST, “I would not encourage a rendezvous and servicing mission unless something went wildly wrong.”
According to Menzel, who has worked on JWST in some capacity for over 20 years, the plan has always been to use well-understood Ka-band frequencies for the bulky transmissions of scientific data. Specifically, JWST is transmitting data back to Earth on a 25.9-gigahertz channel at up to 28 megabits per second. The Ka-band is a portion of the broader K-band (another portion, the Ku-band, was also considered).
The Lagrange points are equilibrium locations where competing gravitational tugs on an object net out to zero. JWST is one of three craft currently occupying L2 (shown here at an exaggerated distance from Earth). IEEE Spectrum
Both the data-collection and transmission rates of JWST dwarf those of the older
Hubble Space Telescope. Compared to Hubble, which is still active and generates 1 to 2 gigabytes of data daily, JWST can produce up to 57 GB each day (although that amount is dependent on what observations are scheduled).
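Those two figures imply roughly how much daily contact time the Ka-band link needs. A back-of-the-envelope check in Python, assuming the full 28 Mb/s is sustained for the whole pass:

```python
def downlink_hours(daily_bytes, link_bits_per_sec):
    """Hours of contact needed to clear one day's science data."""
    return daily_bytes * 8 / link_bits_per_sec / 3600

# 57 GB per day over the 25.9-GHz, 28-Mb/s Ka-band channel
print(round(downlink_hours(57e9, 28e6), 1))  # 4.5 hours
```

A few hours of scheduled Deep Space Network contact per day is therefore enough to keep the onboard recorder from filling up.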
Menzel says he first saw the frequency selection proposals for JWST around 2000, when he was working at
Northrop Grumman. He became the mission systems engineer in 2004. “I knew where the risks were in this mission. And I wanted to make sure that we didn’t get any new risks,” he says.
Ka-band frequencies can transmit more data than X-band (7 to 11.2 GHz) or S-band (2 to 4 GHz), common choices for craft in deep space. A high data rate is a necessity for the scientific work JWST will be undertaking. In addition, according to Carl Hansen, a flight systems engineer at the Space Telescope Science Institute (the science operations center for JWST), a comparable X-band antenna would be so large that the spacecraft would have trouble remaining steady for imaging.
Although the 25.9-GHz Ka-band frequency is the telescope’s workhorse communication channel, it also employs two channels in the S-band. One is the 2.09-GHz uplink that ferries future transmission and scientific observation schedules to the telescope at 16 kilobits per second. The other is the 2.27-GHz, 40-kb/s downlink over which the telescope transmits engineering data—including its operational status, systems health, and other information concerning the telescope’s day-to-day activities.
Any scientific data the JWST collects during its lifetime will need to be stored on board, because the spacecraft doesn’t maintain round-the-clock contact with Earth. Data gathered from its scientific instruments, once collected, is stored within the spacecraft’s 68-GB solid-state drive (3 percent is reserved for engineering and telemetry data).
Alex Hunter, also a flight systems engineer at the Space Telescope Science Institute, says that by the end of JWST’s 10-year mission life, they expect to be down to about 60 GB because of deep-space radiation and wear and tear.
The onboard storage is enough to collect data for about 24 hours before it runs out of room. Well before that becomes an issue, JWST will have scheduled opportunities to beam that invaluable data to Earth.
Sandy Kwan, a DSN systems engineer, says that contact windows with spacecraft are scheduled 12 to 20 weeks in advance. JWST had a greater number of scheduled contact windows during its commissioning phase, as instruments were brought on line, checked, and calibrated. Most of that process required real-time communication with Earth.
All of the communications channels use the Reed-Solomon error-correction protocol—the same error-correction standard used in DVDs and Blu-ray discs as well as QR codes. The lower-data-rate S-band channels use binary phase-shift keying, which modulates the phase of the signal’s carrier wave. The Ka-band channel, however, uses quadrature phase-shift keying, which can double a channel’s data rate, at the cost of more complicated transmitters and receivers.
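The doubling comes from QPSK carrying two bits per symbol where BPSK carries one. A minimal sketch of the two mappings (the Gray-coded constellation below is a generic textbook layout, not JWST's actual signal design):

```python
import numpy as np

# Gray-coded QPSK: each pair of bits selects one of four carrier phases.
QPSK = {
    (0, 0): np.exp(1j * np.pi / 4),
    (0, 1): np.exp(1j * 3 * np.pi / 4),
    (1, 1): np.exp(1j * 5 * np.pi / 4),
    (1, 0): np.exp(1j * 7 * np.pi / 4),
}
# BPSK: each single bit selects one of two phases.
BPSK = {0: 1 + 0j, 1: -1 + 0j}

bits = [1, 0, 0, 1, 1, 1, 0, 0]
qpsk_symbols = [QPSK[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]
bpsk_symbols = [BPSK[b] for b in bits]

# Same bits, half the symbols: QPSK doubles throughput at a fixed symbol rate.
assert len(qpsk_symbols) == len(bpsk_symbols) // 2
```

At a fixed symbol rate, the QPSK stream clears the same bit sequence in half as many symbols, which is exactly the factor-of-two rate advantage mentioned above.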
JWST’s communications with Earth incorporate an acknowledgement protocol—only after the JWST gets confirmation that a file has been successfully received will it go ahead and delete its copy of the data to clear up space.
The communications subsystem was assembled along with the rest of the spacecraft bus by
Northrop Grumman, using off-the-shelf components sourced from multiple manufacturers.
JWST has had a long and often-delayed development, but its communications system has always been a bedrock for the rest of the project. Keeping at least one system dependable means it’s one less thing to worry about. Menzel can remember, for instance, ideas for laser-based optical systems that were invariably rejected. “I can count at least two times where I had been approached by people who wanted to experiment with optical communications,” says Menzel. “Each time they came to me, I sent them away with the old ‘Thank you, but I don’t need it. And I don’t want it.’”
Match ID: 21 Score: 5.00 source: spectrum.ieee.org age: 142 days qualifiers: 5.00 travel(|ing)
In the latest push for nuclear power in space, the Pentagon’s Defense Innovation Unit (DIU) awarded a contract in May to Seattle-based Ultra Safe Nuclear to advance its nuclear power and propulsion concepts. The company is making a soccer ball–size radioisotope battery it calls EmberCore. The DIU’s goal is to launch the technology into space for demonstration in 2027.
Ultra Safe Nuclear’s system is intended to be lightweight, scalable, and usable as both a propulsion source and a power source. It will be specifically designed to give small-to-medium-size military spacecraft the ability to maneuver nimbly in the space between Earth orbit and the moon. The DIU effort is part of the U.S. military’s recently announced plans to develop a surveillance network in cislunar space.
Besides speedy space maneuvers, the DIU wants to power sensors and communication systems without having to worry about solar panels pointing in the right direction or batteries having enough charge to work at night, says Adam Schilffarth, director of strategy at Ultra Safe Nuclear. “Right now, if you are trying to take radar imagery in Ukraine through cloudy skies,” he says, “current platforms can only take a very short image because they draw so much power.”
Radioisotope power sources are well suited for small, uncrewed spacecraft, adds Christopher Morrison, who is leading EmberCore’s development. Such sources rely on the radioactive decay of an element that produces energy, as opposed to nuclear fission, which involves splitting atomic nuclei in a controlled chain reaction to release energy. Heat produced by radioactive decay is converted into electricity using thermoelectric devices.
Radioisotopes have provided heat and electricity for spacecraft since 1961. The Curiosity and Perseverance rovers on Mars, and deep-space missions including Cassini, New Horizons, and Voyager all use radioisotope batteries that rely on the decay of plutonium-238, which is nonfissile—unlike plutonium-239, which is used in weapons and power reactors.
For EmberCore, Ultra Safe Nuclear has instead turned to medical isotopes such as cobalt-60 that are easier and cheaper to produce. The materials start out inert, and have to be charged with neutrons to become radioactive. The company encapsulates the material in a proprietary ceramic for safety.
Cobalt-60 has a half-life of five years (compared to plutonium-238’s 90 years), which is enough for the cislunar missions that the DOD and NASA are looking at, Morrison says. He says that EmberCore should be able to provide 10 times as much power as a plutonium-238 system, providing over 1 million kilowatt-hours of energy using just a few pounds of fuel. “This is a technology that is in many ways commercially viable and potentially more scalable than plutonium-238,” he says.
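The trade-off between the two isotopes follows directly from exponential decay. A quick sketch, using the half-lives quoted above, compares how much of the initial heat output survives a five-year mission:

```python
def power_fraction(elapsed_years, half_life_years):
    """Fraction of a radioisotope's initial heat output remaining."""
    return 0.5 ** (elapsed_years / half_life_years)

# After a 5-year cislunar mission:
print(round(power_fraction(5, 5), 3))   # cobalt-60 (5-year half-life): 0.5
print(round(power_fraction(5, 90), 3))  # plutonium-238 (~90-year half-life): 0.962
```

A cobalt-60 core loses half its output over such a mission while plutonium-238 barely fades, so the shorter-lived isotope only makes sense when its much higher initial power density and lower cost outweigh the faster decay.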
One downside of the medical isotopes is that they can produce high-energy X-rays in addition to heat. So Ultra Safe Nuclear wraps the fuel with a radiation-absorbing metal shield. But in the future, the EmberCore system could be designed for scientists to use the X-rays for experiments. “They buy this heater and get an X-ray source for free,” says Schilffarth. “We’ve talked with scientists who right now have to haul pieces of lunar or Martian regolith up to their sensor because the X-ray source is so weak. Now we’re talking about a spotlight that could shine down to do science from a distance.”
Ultra Safe Nuclear’s contract is one of two awarded by the DIU—which aims to speed up the deployment of commercial technology through military use—to develop nuclear power and propulsion for spacecraft. The other contract was awarded to Avalanche Energy, which is making a lunchbox-size fusion device it calls an Orbitron. The device will use electrostatic fields to trap high-speed ions in slowly changing orbits around a negatively charged cathode. Collisions between the ions can result in fusion reactions that produce energetic particles.
Both companies will use nuclear energy to power high-efficiency electric propulsion systems. Electric propulsion technologies such as ion thrusters, which use electromagnetic fields to accelerate ions and generate thrust, are more efficient than chemical rockets, which burn fuel. Solar panels typically power the ion thrusters that satellites use today to change their position and orientation. Schilffarth says that the higher power from EmberCore should enable a velocity change of 10 kilometers per second in orbit, more than today’s electric propulsion systems can deliver.
Ultra Safe Nuclear is also one of three companies developing nuclear fission thermal propulsion systems for NASA and the Department of Energy. Meanwhile, the Defense Advanced Research Projects Agency (DARPA) is seeking companies to develop a fission-based nuclear thermal rocket engine, with demonstrations expected in 2026.
This article appears in the August 2022 print issue as “Spacecraft to Run on Radioactive Decay.”
Match ID: 22 Score: 5.00 source: spectrum.ieee.org age: 171 days qualifiers: 5.00 travel(|ing)
Filter efficiency 97.013 (23 matches/770 results)
ABOUT THE PROJECT
RSS Rabbit links users to publicly available RSS entries. Vet every link before clicking! The creators accept no responsibility for the contents of these entries.
We're not prepared to take user feedback yet. Check back soon!