RSS Rabbit
News that matters, fast.
Good luck, have news.
Happy scrolling!

Categories



Date/Time of Last Update: Mon Nov 28 06:00:32 2022 UTC




********** UNIVERSITY **********
return to top



How the Graphical User Interface Was Invented
Sun, 20 Nov 2022 20:00:00 +0000


Mice, windows, icons, and menus: these are the ingredients of computer interfaces designed to be easy to grasp, simplicity itself to use, and straightforward to describe. The mouse is a pointer. Windows divide up the screen. Icons symbolize application programs and data. Menus list choices of action.

But the development of today’s graphical user interface was anything but simple. It took some 30 years of effort by engineers and computer scientists in universities, government laboratories, and corporate research groups, piggybacking on each other’s work, trying new ideas, repeating each other’s mistakes.


This article was first published as “Of Mice and Menus: Designing the User-Friendly Interface.” It appeared in the September 1989 issue of IEEE Spectrum. A PDF version is available on IEEE Xplore. The photographs and diagrams appeared in the original print version.


Throughout the 1970s and early 1980s, many of the early concepts for windows, menus, icons, and mice were arduously researched at Xerox Corp.’s Palo Alto Research Center (PARC), Palo Alto, Calif. In 1973, PARC developed the prototype Alto, the first of two computers that would prove seminal in this area. More than 1200 Altos were built and tested. From the Alto’s concepts, starting in 1975, Xerox’s System Development Department then developed the Star and introduced it in 1981—the first such user-friendly machine sold to the public.

In 1984, the low-cost Macintosh from Apple Computer Inc., Cupertino, Calif., brought the friendly interface to thousands of personal computer users. During the next five years, the price of RAM chips fell enough to accommodate the huge memory demands of bit-mapped graphics, and the Mac was followed by dozens of similar interfaces for PCs and workstations of all kinds. By now, application programmers are becoming familiar with the idea of manipulating graphic objects.

The Mac’s success during the 1980s spurred Apple Computer to pursue legal action over ownership of many features of the graphical user interface. Suits now being litigated could assign those innovations not to the designers and their companies, but to those who first filed for legal protection on them.

The GUI started with Sketchpad


The grandfather of the graphical user interface was Sketchpad [see photograph]. Massachusetts Institute of Technology student Ivan E. Sutherland built it in 1962 as a Ph.D. thesis at MIT’s Lincoln Laboratory in Lexington, Mass. Sketchpad users could not only draw points, line segments, and circular arcs on a cathode ray tube (CRT) with a light pen—they could also assign constraints to, and relationships among, whatever they drew.

Arcs could have a specified diameter, lines could be horizontal or vertical, and figures could be built up from combinations of elements and shapes. Figures could be moved, copied, shrunk, expanded, and rotated, with their constraints (shown as onscreen icons) dynamically preserved. At a time when a CRT monitor was a novelty in itself, the idea that users could interactively create objects by drawing on a computer was revolutionary.


[Image: Man sitting in front of a round cathode ray display with a white square and triangle on a black background]

Moreover, to zoom in on objects, Sutherland wrote the first window-drawing program, which required him to come up with the first clipping algorithm. Clipping is a software routine that calculates which part of a graphic object is to be displayed and displays only that part on the screen. The program must calculate where a line is to be drawn, compare that position to the coordinates of the window in use, and prevent the display of any line segment whose coordinates fall outside the window.
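
Clipping is still done in essentially this way. The routine below is a minimal sketch in Python (not Sutherland’s code) that clips one line segment to a rectangular window with the Liang-Barsky parametric method, returning only the visible portion:

    def clip_segment(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
        """Clip the segment (x0,y0)-(x1,y1) to the window [xmin,xmax] x [ymin,ymax].

        Returns the visible sub-segment as a pair of points, or None if the
        segment lies entirely outside the window (Liang-Barsky method).
        """
        dx, dy = x1 - x0, y1 - y0
        t_enter, t_exit = 0.0, 1.0
        # Each (p, q) pair tests the segment against one window edge.
        for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                     (-dy, y0 - ymin), (dy, ymax - y0)):
            if p == 0:
                if q < 0:
                    return None          # parallel to this edge and outside it
            else:
                t = q / p
                if p < 0:
                    t_enter = max(t_enter, t)   # entering the window across this edge
                else:
                    t_exit = min(t_exit, t)     # leaving the window across this edge
        if t_enter > t_exit:
            return None                  # no part of the segment is inside
        return ((x0 + t_enter * dx, y0 + t_enter * dy),
                (x0 + t_exit * dx, y0 + t_exit * dy))

    # A segment crossing the window is trimmed to its visible part:
    print(clip_segment(-5, 5, 15, 5, 0, 0, 10, 10))   # ((0.0, 5.0), (10.0, 5.0))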

Though films of Sketchpad in operation were widely shown in the computer research community, Sutherland says today that there was little immediate fallout from the project. Running on MIT’s TX-2 mainframe, it demanded too much computing power to be practical for individual use. Many other engineers, however, see Sketchpad’s design and algorithms as a primary influence on an entire generation of research into user interfaces.

The origin of the computer mouse


The light pens that interactive computer systems of the 1950s and 1960s—including Sketchpad—used to select areas of the screen had drawbacks. To point, the user had to lift an arm from the table, and after a while that got tiring. Picking up the pen meant fumbling around on the table or, if the pen had a holder, taking the time after each selection to put it back.

Sensing an object with a light pen was straightforward: the computer displayed spots of light on the screen and interrogated the pen as to whether it sensed a spot, so the program always knew just what was being displayed. Locating the position of the pen on the screen required more sophisticated techniques—like displaying a cross pattern of nine points on the screen, then moving the cross until it centered on the light pen.
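
As a rough illustration of that tracking technique, the loop below recenters a nine-point cross on whichever probe points the pen reports seeing; the pen_sees(x, y) call is a purely hypothetical stand-in for the display-and-interrogate step done by the hardware.

    # The nine probe points form a cross around the current estimate.
    CROSS = [(0, 0), (-2, 0), (-1, 0), (1, 0), (2, 0),
             (0, -2), (0, -1), (0, 1), (0, 2)]

    def track_pen(x, y, pen_sees, step=4, max_iters=100):
        """Recenter a nine-point cross on the probes the light pen reports seeing.

        pen_sees(px, py) is a hypothetical stand-in for displaying a spot at
        (px, py) and asking the pen hardware whether it detected it.
        """
        for _ in range(max_iters):
            seen = [(x + dx * step, y + dy * step)
                    for dx, dy in CROSS if pen_sees(x + dx * step, y + dy * step)]
            if not seen:
                return None              # pen lost; a real system falls back to a search
            nx = sum(px for px, _ in seen) / len(seen)
            ny = sum(py for _, py in seen) / len(seen)
            if (nx, ny) == (x, y):       # the estimate has stopped moving
                return (x, y)
            x, y = nx, ny
        return (x, y)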

In 1964, Douglas Engelbart, a research project leader at SRI International in Menlo Park, Calif., tested all the commercially available pointing devices, from the still-popular light pen to a joystick and a Graphicon (a curve-tracing device that used a pen mounted on the arm of a potentiometer). But he felt the selection failed to cover the full spectrum of possible pointing devices, and that he should somehow fill in the blanks.

Then he remembered a 1940s college class he had taken that covered the use of a planimeter to calculate area. (A planimeter has two arms, with a wheel on each. The wheels can roll only along their axes; when one of them rolls, the other must slide.)

If a potentiometer were attached to each wheel to monitor its rotation, he thought, a planimeter could be used as a pointing device. Engelbart explained his roughly sketched idea to engineer William English, who with the help of the SRI machine shop built what they quickly dubbed “the mouse.”



This first mouse was big because it used single-turn potentiometers: one rotation of the wheels had to be scaled to move a cursor from one side of the screen to the other. But it was simple to interface with the computer: the processor just read frequent samples of the potentiometer positioning signals through analog-to-digital converters.
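
In code, that interface amounts to little more than a scaling step. The sketch below is illustrative only: read_adc_x and read_adc_y are hypothetical stand-ins for sampling the two analog-to-digital converters, and the screen dimensions are made up.

    SCREEN_W, SCREEN_H = 1024, 800     # display size in pixels (illustrative)
    ADC_MAX = 4095                     # full-scale reading = one full wheel turn

    def cursor_position(read_adc_x, read_adc_y):
        """Map two potentiometer readings directly to a cursor position.

        read_adc_x and read_adc_y are hypothetical sampling functions that
        return the current converter value (0..ADC_MAX) for each wheel.
        """
        x = read_adc_x() * (SCREEN_W - 1) // ADC_MAX
        y = read_adc_y() * (SCREEN_H - 1) // ADC_MAX
        return x, y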

The cursor moved by the mouse was easy to locate, since readings from the potentiometer determined the position of the cursor on the screen, unlike the light pen. But programmers for later windowing systems found that the software necessary to determine which object the mouse had selected was more complex than that for the light pen: they had to compare the mouse’s position with that of all the objects displayed onscreen.
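
A minimal hit-test sketch shows why the work shifted to the software: given only a cursor coordinate, the program must check it against the bounds of every displayed object. The object attributes assumed here are illustrative.

    def hit_test(mx, my, objects):
        """Return the displayed objects whose bounding boxes contain (mx, my).

        Each object is assumed to have x, y, width and height attributes;
        a real window system would usually keep only the topmost hit.
        """
        return [obj for obj in objects
                if obj.x <= mx < obj.x + obj.width
                and obj.y <= my < obj.y + obj.height]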

The computer mouse gets redesigned—and redesigned again

Engelbart’s group at SRI ran controlled experiments with mice and other pointing devices, and the mouse won hands down. People adapted to it quickly, it was easy to grab, and it stayed where they put it. Still, Engelbart wanted to tinker with it. After experimenting, his group had concluded that the proper ratio of cursor movement to mouse movement was about 2:1, but he wanted to try varying that ratio—decreasing it at slow speeds and raising it at fast speeds—to improve user control of fine movements and speed up larger movements. Some modern mouse-control software incorporates this idea, including that of the Macintosh.
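
That variable ratio is what is now called pointer acceleration. A sketch of the idea follows, with thresholds and gains invented purely for illustration; they are not Engelbart’s or Apple’s values.

    def cursor_delta(mouse_dx, mouse_dy, dt):
        """Scale a raw mouse movement into a cursor movement.

        Slow movements get a low gain for fine positioning; fast movements get
        a higher gain so large motions need less desk space. The thresholds and
        gains are invented for illustration only.
        """
        speed = (mouse_dx ** 2 + mouse_dy ** 2) ** 0.5 / dt   # counts per second
        if speed < 100:
            gain = 1.0        # fine positioning
        elif speed < 400:
            gain = 2.0        # roughly the 2:1 baseline ratio
        else:
            gain = 4.0        # fast sweeps across the screen
        return mouse_dx * gain, mouse_dy * gain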

The mouse, still experimental at this stage, did not change until 1971. Several members of Engelbart’s group had moved to the newly established PARC, where many other researchers had seen the SRI mouse and the test report. They decided there was no need to repeat the tests; any experimental systems they designed would use mice.

Said English, “This was my second chance to build a mouse; it was obvious that it should be a lot smaller, and that it should be digital.” Chuck Thacker, then a member of the research staff, advised PARC to hire inventor Jack Hawley to build it.

Hawley decided the mouse should use shaft encoders, which measure position by a series of pulses, instead of potentiometers (both were covered in Engelbart’s 1970 patent), to eliminate the expensive analog-to-digital converters. The basic principle, of one wheel rolling while the other slid, was licensed from SRI.
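
A shaft encoder turns the interface into a counting problem rather than a voltage-measuring one. The sketch below shows the generic quadrature-decoding technique for two out-of-phase encoder signals; it is not Hawley’s specific design.

    # Two encoder sensors (A and B) produce square waves 90 degrees out of phase.
    # The order in which they change gives the direction; each valid transition
    # is one count. Keys are (old A, old B, new A, new B).
    TRANSITION = {
        (0, 0, 0, 1): +1, (0, 1, 1, 1): +1, (1, 1, 1, 0): +1, (1, 0, 0, 0): +1,
        (0, 0, 1, 0): -1, (1, 0, 1, 1): -1, (1, 1, 0, 1): -1, (0, 1, 0, 0): -1,
    }

    def count_pulses(samples):
        """Turn a sequence of (A, B) samples into a signed position count."""
        position = 0
        prev = samples[0]
        for cur in samples[1:]:
            position += TRANSITION.get(prev + cur, 0)
            prev = cur
        return position

    # One full quadrature cycle in the forward direction is four counts:
    print(count_pulses([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))   # 4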

The ball mouse was the “easiest patent I ever got. It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”
—Ron Rider

In 1972, the mouse changed again. Ron Rider, now vice president of systems architecture at PARC but then a new arrival, said he was using the wheel mouse while an engineer made excuses for its asymmetric operation (one wheel dragging while one turned). “I suggested that they turn a trackball upside down, make it small, and use it as a mouse instead,” Rider told IEEE Spectrum. This device came to be known as the ball mouse. “Easiest patent I ever got,” Rider said. “It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”

Defining terms


Bit map

The pixel pattern that makes up the graphic display on a computer screen.

Clicking

The motion of pressing a mouse button to initiate an action by software; some actions require double-clicking.

Graphical user interface (GUI)

The combination of windowing displays, menus, icons, and a mouse that is increasingly used on personal computers and workstations.

Icon

An onscreen drawing that represents programs or data.

Menu

A list of command options currently available to the computer user; some stay onscreen, while pop-up or pull-down menus are requested by the user.

Mouse

A device whose motion across a desktop or other surface causes an on-screen cursor to move commensurately; today’s mice move on a ball and have one, two, or three buttons.

Raster display

A cathode ray tube on which images are displayed as patterns of dots, scanned onto the screen sequentially in a predetermined pattern of lines.

Vector display

A cathode ray tube whose gun scans lines, or vectors, onto the screen phosphor.

Window

An area of a computer display, usually one of several, in which a particular program is executing.


In the PARC ball mouse design, the weight of the mouse is transferred to the ball by a swivel device and to one or two casters at the end of the mouse farthest from the wire “tail.” A prototype was built by Xerox’s Electronics Division in El Segundo, Calif., then redesigned by Hawley. The rolling ball turned two perpendicular shafts, with a drum on the end of each that was coated with alternating stripes of conductive and nonconductive material. As the drum turned, the stripes transmitted electrical impulses through metal wipers.

When Apple Computer decided in 1979 to design a mouse for its Lisa computer, the design mutated yet again. Instead of a metal ball held against the substrate by a swivel, Apple used a rubber ball whose traction depended on the friction of the rubber and the weight of the ball itself. Simple pads on the bottom of the case carried the weight, and optical scanners detected the motion of the internal wheels. The device had loose tolerances and few moving parts, so that it cost perhaps a quarter as much to build as previous ball mice.

How the computer mouse gained and lost buttons

The first, wooden, SRI mouse had only one button, to test the concept. The plastic batch of SRI mice had three side-by-side buttons—all there was room for, Engelbart said. The first PARC mouse had a column of three buttons—again, because that best fit the mechanical design. Today, the Apple mouse has one button, while the rest have two or three. The issue is no longer space—a standard 6-by-10-cm mouse could now have dozens of buttons—but human factors, and the experts have strong opinions.

Said English, now director of internationalization at Sun Microsystems Inc., Mountain View, Calif.: “Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.” He sees two buttons as the minimum because two functions are basic to selecting an object: pointing to its start, then extending the motion to the end of the object.

William Verplank, a human factors specialist in the group that tested the graphical interface at Xerox from 1978 into the early 1980s, concurred. He told Spectrum that with three buttons, Alto users forgot which button did what. The group’s tests showed that one button was also confusing, because it required actions such as double-clicking to select and then open a file.

“We have agonizing videos of naive users struggling” with these problems, Verplank said. They concluded that for most users, two buttons (as used on the Star) are optimal, if a button means the same thing in every application. English experimented with one-button mice at PARC before concluding they were a bad idea.


“Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.”
—William English


[Image: A computer monitor with a chunky white keyboard sitting on a desk]

But many interface designers dislike multiple buttons, saying that double-clicking a single button to select an item is easier than remembering which button points and which extends. Larry Tesler, formerly a computer scientist at PARC, brought the one-button mouse to Apple, where he is now vice president of advanced technology. The company’s rationale was that, to attract novices to its computers, one button was as simple as the mouse could get.

More than two million one-button Apple mice are now in use. The Xerox and Microsoft two-button mice are less common than either Apple’s ubiquitous one-button model or the three-button mice found on technical workstations. Dozens of companies manufacture mice today; most are slightly smaller than a pack of cigarettes, with minor variations in shape.

How windows first came to the computer screen


In 1962, Sketchpad could split its screen horizontally into two independent sections. One section could, for example, give a close-up view of the object in the other section. Researchers call Sketchpad the first example of tiled windows, which are laid out side by side. They differ from overlapping windows, which can be stacked on top of each other, or overlaid, obscuring all or part of the lower layers.

Windows were an obvious means of adding functionality to a small screen. In 1969, Engelbart equipped NLS (as the On-Line System he invented at SRI during the 1960s was known, to distinguish it from the Off-Line System known as FLS) with windows. They split the screen into multiple parts horizontally or vertically, and introduced cross-window editing with a mouse.

By 1972, led by researcher Alan Kay, the Smalltalk programming language group at Xerox PARC had implemented their version of windows. They were working with far different technology from Sutherland or Engelbart: by deciding that their images had to be displayed as dots on the screen, they led a move from vector to raster displays, to make it simple to map the assigned memory location of each of those spots. This was the bit map invented at PARC, and made viable during the 1980s by continual performance improvements in processor logic and memory speed.

Experimenting with bit-map manipulation, Smalltalk researcher Dan Ingalls developed the bit-block transfer procedure, known as BitBlt. The BitBlt software enabled application programs to mix and manipulate rectangular arrays of pixel values in on-screen or off-screen memory, or between the two, combining the pixel values and storing the result in the appropriate bit-map location.
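
A stripped-down sketch of what a BitBlt-style routine does, with bitmaps represented as lists of lists of pixel values; real implementations work a machine word at a time and offer many more combination rules than the three shown here.

    def bitblt(src, sx, sy, dst, dx, dy, w, h, op="copy"):
        """Combine a w-by-h rectangle of src into dst (both lists of lists of ints).

        op selects how source and destination pixels are combined; only three
        of the many possible combination rules are sketched here.
        """
        combine = {
            "copy": lambda s, d: s,       # replace destination with source
            "or":   lambda s, d: s | d,   # paint source on top of destination
            "xor":  lambda s, d: s ^ d,   # invert destination under source
        }[op]
        for row in range(h):
            for col in range(w):
                dst[dy + row][dx + col] = combine(src[sy + row][sx + col],
                                                  dst[dy + row][dx + col])

    # Scrolling a window up one line is just a BitBlt of the window onto itself,
    # shifted: bitblt(screen, x, y + 1, screen, x, y, width, height - 1).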

BitBlt made it much easier to write programs to scroll a window (move an image through it), resize (enlarge or contract) it, and drag windows (move them from one location to another on screen). It led Kay to create overlapping windows. They were soon implemented by the Smalltalk group, but made clipping harder.


In a tiling system, explained researcher Peter Deutsch, who worked with the Smalltalk group, the clipping borders are simply horizontal or vertical lines from one screen border to another, and software just tracks the location of those lines. But overlapping windows may appear anywhere on the screen, randomly obscuring bits and pieces of other windows, so that quite irregular regions must be clipped. Thus application software must constantly track which portions of their windows remain visible.
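
Deutsch’s point can be made concrete with a small sketch: computing the visible pieces of one window by subtracting the rectangles of the windows stacked above it. Each subtraction can split a rectangle into as many as four smaller ones, which is how the irregular regions arise. This is an illustrative calculation, not PARC’s code.

    def subtract(rect, hole):
        """Split rect into the pieces not covered by hole.

        Rectangles are (x, y, w, h) tuples; returns a list of 0 to 4 rectangles.
        """
        x, y, w, h = rect
        hx, hy, hw, hh = hole
        ix0, iy0 = max(x, hx), max(y, hy)
        ix1, iy1 = min(x + w, hx + hw), min(y + h, hy + hh)
        if ix0 >= ix1 or iy0 >= iy1:
            return [rect]                          # no overlap, nothing is cut away
        pieces = []
        if iy0 > y:                                # strip above the hole
            pieces.append((x, y, w, iy0 - y))
        if iy1 < y + h:                            # strip below the hole
            pieces.append((x, iy1, w, y + h - iy1))
        if ix0 > x:                                # strip to the left of the hole
            pieces.append((x, iy0, ix0 - x, iy1 - iy0))
        if ix1 < x + w:                            # strip to the right of the hole
            pieces.append((ix1, iy0, x + w - ix1, iy1 - iy0))
        return pieces

    def visible_region(window, windows_above):
        """Visible parts of window after removing every window stacked above it."""
        visible = [window]
        for above in windows_above:
            visible = [piece for rect in visible for piece in subtract(rect, above)]
        return visible

    # A 100-by-100 window with another window covering one corner is left with an
    # L-shaped visible region made of two rectangles:
    print(visible_region((0, 0, 100, 100), [(50, 50, 100, 100)]))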

Some researchers still question whether overlapping windows offer more benefits than tiled, at least above a certain screen size, on the grounds that screens with overlapping windows become so messy the user gets lost. Others argue that overlapping windows more closely match users’ work patterns, since no one arranges the papers on their physical desktop in neat horizontal and vertical rows. Among software engineers, however, overlapping windows seem to have won for the user interface world.

So has the cut-and-paste editing model that Larry Tesler developed, first for the Gypsy text editor he wrote at PARC and later for Apple. Charles Irby—who worked on Xerox’s windows and is now vice president of development at Metaphor Computer Systems Inc., Mountain View, Calif.—noted, however, that cut-and-paste worked better for pure text-editing than for moving graphic objects from one application to another.

The origin of the computer menu bar


Menus—functions continuously listed onscreen that could be called into action with key combinations—were commonly used in defense computing by the 1960s. But it was only with the advent of BitBlt and windows that menus could be made to appear as needed and to disappear after use. Combined with a pointing device to indicate a user’s selection, they are now an integral part of the user-friendly interface: users no longer need to refer to manuals or memorize available options.

Instead, the choices can be called up at a moment’s notice whenever needed. And menu design has evolved. Some new systems use nested hierarchies of menus; others offer different menu versions—one with the most commonly used commands for novices, another with all available commands for the experienced user.

Among the first to test menus on demand was PARC researcher William Newman, in a program called Markup. Hard on his heels, the Smalltalk group built in pop-up menus that appeared on screen at the cursor site when the user pressed one of the mouse buttons.

Implementation was on the whole straightforward, recalled Deutsch. The one exception was determining whether the menu or the application should keep track of the information temporarily obscured by the menu. In the Smalltalk 76 version, the popup menu saved and restored the screen bits it overwrote. But in today’s multitasking systems, that would not work, because an application may change those bits without the menu’s knowledge. Such systems add another layer to the operating system: a display manager that tracks what is written where.
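
The Smalltalk 76 approach Deutsch describes amounts to the save-under sketch below: copy the screen bits the menu will cover, draw the menu, and copy the saved bits back when the menu closes. As he notes, this breaks down once another process can change those bits behind the menu’s back.

    def show_popup_menu(screen, menu_bitmap, x, y, run_menu):
        """Save-under technique for a pop-up menu (a sketch, not the Smalltalk code).

        screen and menu_bitmap are lists of lists of pixel values; run_menu is a
        callback that interacts with the user while the menu is visible.
        """
        h, w = len(menu_bitmap), len(menu_bitmap[0])
        # 1. Save the screen bits the menu is about to overwrite.
        saved = [row[x:x + w] for row in screen[y:y + h]]
        # 2. Draw the menu over them.
        for r in range(h):
            screen[y + r][x:x + w] = menu_bitmap[r]
        choice = run_menu()
        # 3. Restore the saved bits when the menu is dismissed.
        for r in range(h):
            screen[y + r][x:x + w] = saved[r]
        return choice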

The production Xerox Star, in 1981, featured a further advance: a menu bar, essentially a row of words indicating available menus that could be popped up for each window. Human factors engineer Verplank recalled that the bar was at first located at the bottom of its window. But the Star team found users were more likely to associate a bar with the window below it, so it was moved to the top of its window.

Apple simplified things in its Lisa and Macintosh with a single bar placed at the top of the screen. This menu bar relates only to the window in use: the menus could be “pulled down” from the bar, to appear below it. Designer William D. Atkinson received a patent (assigned to Apple Computer) in August 1984 for this innovation.

One new addition that most user interface pioneers consider an advantage is the tear-off menu, which the user can move to a convenient spot on the screen and “pin” there, always visible for ready access.

Many windowing interfaces now offer command-key or keyboard alternatives for many commands as well. This return to the earliest of user interfaces—key combinations—neatly supplements menus, providing ease of use for novices and the less experienced, and speed for those who can type faster than they can point to a menu and click on a selection.

How the computer “icon” got its name


Sketchpad had on-screen graphic objects that represented constraints (for example, a rule that lines be the same length), and the Flex machine built in 1967 at the University of Utah by students Alan Kay and Ed Cheadle had squares that represented programs and data (like today’s computer “folders”). Early work on icons was also done by Bell Northern Research, Ottawa, Canada, stemming from efforts to replace the recently legislated bilingual signs with graphic symbols.

But the concept of the computer “icon” was not formalized until 1975. David Canfield Smith, a computer science graduate student at Stanford University in California, began work on his Ph.D. thesis in 1973. His advisor was PARC’s Kay, who suggested that he look at using the graphics power of the experimental Alto not just to display text, but rather to help people program.


Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents: a Russian icon of a saint is holy and is to be venerated. Smith’s computer icons contained all the properties of the programs and data represented, and therefore could be linked or acted on as if they were the real thing.

After receiving his Ph.D. in 1975, Smith joined Xerox in 1976 to work on Star development. The first thing he did, he said, was to recast his concept of icons in office terms. “I looked around my office and saw papers, folders, file cabinets, a telephone, and bookshelves, and it was an easy translation to icons,” he said.

Xerox researchers developed, tested, and revised icons for the Star interface for three years before the first version was complete. At first they attempted to make the icons look like a detailed photographic rendering of the object, recalled Irby, who worked on testing and refining the Xerox windows. Trading off label space, legibility, and the number of icons that fit on the screen, they decided to constrain icons to a 1-inch (2.5-centimeter) square of 64 by 64 pixels, or 512 eight-bit bytes.

Then, Verplank recalls, they discovered that because of a background pattern based on two-pixel dots, the right-hand side of the icons appeared jagged. So they increased the width of the icons to 65 pixels, despite an outcry from programmers who liked the neat 16-bit breakdown. But the increase stuck, Verplank said, because they had already decided to store 72 bits per side to allow for white space around each icon.
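
The storage figures in the two paragraphs above follow directly from one bit per pixel:

    # One bit per pixel, as the 512-byte figure implies:
    print(64 * 64 // 8)   # 512 bytes for the original 64-by-64 icon
    print(72 * 72 // 8)   # 648 bytes when 72 bits per side are stored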

After settling on a size for the icons, the Star developers tested four sets developed by two graphic designers and two software engineers. They discovered that, for example, resizing could cause problems. They shrank the icon for a person—a head and shoulders—in order to use several of them to represent a group, only to hear one test subject say the screen resolution made the reduced icon look like a cross above a tombstone. Computer graphics artist Norm Cox, now of Cox & Hall, Dallas, Texas, was finally hired to redesign the icons.

Icon designers today still wrestle with the need to make icons adaptable to the many different system configurations offered by computer makers. Artist Karen Elliott, who has designed icons for Microsoft, Apple, Hewlett-Packard Co., and others, noted that on different systems an icon may be displayed in different colors, several resolutions, and a variety of gray shades, and it may also be inverted (light and dark areas reversed).

In the past few years, another concern has been added to icon designers’ tasks: internationalization. Icons designed in the United States often lack space for translations into languages other than English. Elliott therefore tries to leave space for both the longer words and the vertical orientation of some languages.


[Image: A square white Macintosh computer with a white keyboard; in a separate image below, computer icons and the text address book, address, addresses]

The main rule is to make icons simple, clean, and easily recognizable. Discarded objects are placed in a trash can on the Macintosh. On the NeXT Computer System, from NeXT Inc., Palo Alto, Calif.—the company formed by Apple cofounder Steven Jobs after he left Apple—they are dumped into a Black Hole. Elliott sees NeXT’s black hole as one of the best icons ever designed: “It is distinct; its roundness stands out from the other, square icons, and this is important on a crowded display. It fits my image of information being sucked away, and it makes it clear that dumping something is serious.”

English disagrees vehemently. The black hole “is fundamentally wrong,” he said. “You can dig paper out of a wastebasket, but you can’t dig it out of a black hole.” Another critic called the black hole familiar only to “computer nerds who read mostly science fiction and comics,” not to general users.

With the introduction of the Xerox Star in June 1981, the graphical user interface, as it is known today, arrived on the market. Though not a commercial triumph, the Star generated great interest among computer users, as the Alto before it had within the universe of computer designers.

Even before the Star was introduced, Jobs, then still at Apple, had visited Xerox PARC in November 1979 and asked the Smalltalk researchers dozens of questions about the Alto’s internal design. He later recruited Larry Tesler from Xerox to design the user interface of the Apple Lisa.

With the Lisa and then the Macintosh, introduced in January 1983 and January 1984 respectively, the graphical user interface reached the low-cost, high-volume computer market.

At almost $10,000, the Lisa was deemed too expensive for the office market. But aided by prizewinning advertising and its lower price, the Macintosh took the world by storm. Early Macs had only 128K bytes of RAM, which made them slow to respond because it was too little memory for heavy graphic manipulation. Also, the time needed for programmers to learn its Toolbox of graphics routines delayed application packages until well into 1985. But the Mac’s ease of use was indisputable, and it generated interest that spilled over into the MS-DOS world of IBM PCs and clones, as well as Unix-based workstations.

Who owns the graphical user interface?


The widespread acceptance of such interfaces, however, has led to bitter lawsuits to establish exactly who owns what. So far, none of several litigious companies has definitively established that it owns the software that implements windows, icons, or early versions of menus. But the suits continue.

Virtually all the companies that make and sell either wheel or ball mice paid license fees to SRI or to Xerox for their patents. Engelbart recalled that SRI patent attorneys inspected all the early work on the interface, but understood only hardware. After looking at developments like the implementation of windows, they told him that none of it was patentable.

At Xerox, the Star development team proposed 12 patents having to do with the user interface. The company’s patent committee rejected all but two on hardware—one on BitBlt, the other on the Star architecture. At the time, Charles Irby said, it was a good decision. Patenting required full disclosure, and no precedents then existed for winning software patent suits.


[Image: A computer screen in blue and white with multiple open windows]


[Image: Three computer windows with greyscale images on a dark grey background]


[Image: Computer windows tinted blue on a black background partially obscuring a planet and starfield]


The most recent and most publicized suit was filed in March 1988, by Apple, against both Microsoft and Hewlett-Packard Co., Palo Alto, Calif. Apple alleges that HP’s New Wave interface, requiring version 2.03 of Microsoft’s Windows program, embodies the copyrighted “audio visual computer display” of the Macintosh without permission; that the displays of Windows 2.03 are illegal copies of the Mac’s audiovisual works; and that Windows 2.03 also exceeds the rights granted in a November 1985 agreement in which Microsoft acknowledged that the displays in Windows 1.0 were derivatives of those in Apple’s Lisa and Mac.

In March 1989, U.S. District Judge William W. Schwarzer ruled Microsoft had exceeded the bounds of its license in creating Windows 2.03. Then in July 1989 Schwarzer ruled that all but 11 of the 260 items that Apple cited in its suit were, in fact, acceptable under the 1985 agreement. The larger issue—whether Apple’s copyrights are valid, and whether Microsoft and HP infringed on them—will not now be examined until 1990.

Among those 11 are overlapping windows and movable icons. According to Pamela Samuelson, a noted software intellectual property expert and visiting professor at Emory University Law School, Atlanta, Ga., many experts would regard both as functional features of an interface that cannot be copyrighted, rather than “expressions” of an idea protectable by copyright.

But lawyers for Apple—and for other companies that have filed lawsuits to protect the “look and feel” of their screen displays—maintain that if such protection is not granted, companies will lose the economic incentive to market technological innovations. How is Apple to protect its investment in developing the Lisa and Macintosh, they argue, if it cannot license its innovations to companies that want to take advantage of them?

If the Apple-Microsoft case does go to trial on the copyright issues, Samuelson said, the court may have to consider whether Apple can assert copyright protection for overlapping windows, an interface feature on which patents have also been granted. In April 1989, for example, Quarterdeck Office Systems Inc., Santa Monica, Calif., received a patent for a multiple windowing system in its Desq system software, introduced in 1984.

Adding fuel to the legal fire, Xerox said in May 1989 it would ask for license fees from companies that use the graphical user interface. But it is unclear whether Xerox has an adequate claim to either copyright or patent protection for the early graphical interface work done at PARC. Xerox did obtain design patents on later icons, noted human factors engineer Verplank. Meanwhile, both Metaphor and Sun Microsystems have negotiated licenses with Xerox for their own interfaces.

To Probe Further

The September 1989 IEEE Computer contains an article, “The Xerox ‘Star’: A Retrospective,” by Jeff Johnson et al., covering development of the Star. “Designing the Star User Interface” [PDF], by David C. Smith et al., appeared in the April 1982 issue of Byte.

The Sept. 12, 1989, PC Magazine contains six articles on graphical user interfaces for personal computers and workstations. The July 1989 Byte includes “A Guide to [Graphical User Interfaces],” by Frank Hayes and Nick Baran, which describes 12 current interfaces for workstations and personal computers. “The Interface of Tomorrow, Today,” by Howard Rheingold, in the July 10, 1989, InfoWorld does the same. “The interface that launched a thousand imitations,” by Richard Rawles, in the March 21, 1989, MacWeek covers the Macintosh interface.

The human factors of user interface design are discussed in The Psychology of Everyday Things, by Donald A. Norman (Basic Books Inc., New York, 1988). The January 1989 IEEE Software contains several articles on methods, techniques, and tools for designing and implementing graphical interfaces. The Way Things Work, by David Macaulay (Houghton Mifflin Co., Boston, 1988), contains a detailed drawing of a ball mouse.

The October 1985 IEEE Spectrum covered Xerox PARC’s history in “Research at Xerox PARC: a founder’s assessment,” by George Pake (pp. 54-61) and “Inside the PARC: the ‘information architects’,” by Tekla Perry and Paul Wallich (pp. 62-75).

William Atkinson received patent no. 4,464,652 for the pulldown menu system on Aug. 8, 1984, and assigned it to Apple. Gary Pope received patent no. 4,823,108, for an improved system for displaying images in “windows” on a computer screen, on April 18, 1989, and assigned it to Quarterdeck Office Systems.

The wheel mouse patent, no. 3,541,541, “X-Y position indicator for a display system,” was issued to Douglas Engelbart on Nov. 17, 1970, and assigned to SRI International. The ball mouse patent, no. 3,835,464, was issued to Ronald Rider on Sept. 10, 1974, and assigned to Xerox.

The first selection device tests to include a mouse are covered in “Display-Selection Techniques for Text Manipulation,” by William English, Douglas Engelbart, and Melvyn Berman, in IEEE Transactions on Human Factors in Electronics, March 1967.

Sketchpad: A Man-Machine Graphical Communication System, by Ivan E. Sutherland (Garland Publishing Inc., New York City and London, 1980), reprints his 1963 Ph.D. thesis.










Match ID: 0 Score: 2.86 source: spectrum.ieee.org age: 7 days
qualifiers: 2.86 school

Video Friday: Little Robot, Big Stairs
Fri, 18 Nov 2022 16:43:36 +0000


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

Researchers at Carnegie Mellon University’s School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its height; traverse rocky, slippery, uneven, steep and varied terrain; walk across gaps; scale rocks and curbs, and even operate in the dark.

[ CMU ]

This robot is designed as a preliminary platform for humanoid robot research. The platform will be further extended with soles as well as upper limbs. In this video, the current lower limb version of the platform shows its capability in traversing uneven terrains without an active or passive ankle joint. The underactuation nature of the robot system has been well addressed with our locomotion-control framework, which also provides a new perspective on the leg design of bipedal robots.

[ CLEAR Lab ]

Thanks, Zejun!

Inbiodroid is a startup “dedicated to the development of fully immersive telepresence technologies that create a deeper connection between people and their environment.” Hot off the ANA Avatar XPrize competition, they’re doing a Kickstarter to fund the next generation of telepresence robots.

[ Kickstarter ] via [ Inbiodroid ]

Thanks, Alejandro!

A robot that can feel what a therapist feels when treating a patient, that can adjust the intensity of rehabilitation exercises at any time according to the patient's abilities and needs, and that can thus go on for hours without getting tired: It seems like fiction, and yet researchers from the Vrije Universiteit Brussel and Imec have now finished a prototype that unites all these skills in one robot.

[ VUB ]

Thanks, Bram!

Self-driving bikes present some special challenges, as this excellent video graphically demonstrates.

[ Paper ]

Pickle robots unload trucks. This is a short overview of the Pickle Robot Unload System in action at the end of October 2022—autonomously picking floor-loaded freight to unload a trailer. As a robotic system built on AI and advanced sensors, the system gets better and faster all the time.

[ Pickle ]

Learning agile skills can be challenging with reward shaping. Imitation learning provides an alternative solution by assuming access to decent expert references. However, such experts are not always available. We propose Wasserstein Adversarial Skill Imitation (WASABI), which acquires agile behaviors from partial and potentially physically incompatible demonstrations. In our work, Solo, a quadruped robot, learns highly dynamic skills (for example, backflips) from only handheld human demonstrations.

WASABI!

[ WASABI ]

NASA and the European Space Agency are developing plans for one of the most ambitious campaigns ever attempted in space: bringing the first samples of Mars material safely back to Earth for detailed study. The diverse set of scientifically curated samples now being collected by NASA’s Mars Perseverance rover could help scientists answer the question of whether ancient life ever arose on the Red Planet.

I thought I was promised some helicopters?

[ NASA ]

A Sanctuary general-purpose robot picks up and sorts medicine pills.

Remotely controlled, if that wasn’t clear.

[ Sanctuary ]

I don’t know what’s going on here, but it scares me.

[ KIMLAB ]

The Canadian Space Agency plans to send a rover to the moon as early as 2026 to explore a polar region. The mission will demonstrate key technologies and accomplish meaningful science. Its objectives are to gather imagery, measurements, and data on the surface of the moon, as well as to have the rover survive an entire night on the moon. Lunar nights, which last about 14 Earth days, are extremely cold and dark, posing a significant technological challenge.

[ CSA ]

Covariant Robotic Induction automates previously manual induction processes. This video shows the Covariant Robotic Induction solution picking a wide range of item types from totes, scanning bar codes, and inducting items onto a unit sorter. Note the robot’s ability to effectively handle items that are traditionally difficult to pick, such as transparent polybagged apparel and small, oddly shaped health and beauty items, and place them precisely onto individual trays.

[ Covariant ]

The solution will integrate Boston Dynamics’ Spot robot; the ExynPak, powered by ExynAI; and the Trimble X7 total station. It will enable fully autonomous missions inside complex and dynamic construction environments, which can result in consistent and precise reality capture for production and quality-control workflows.

[ Exyn ]

Our most advanced programmable robot yet is back and better than ever. Sphero RVR+ includes an advanced gearbox to improve torque and payload capacity; enhanced sensors, including an improved color sensor; and an improved rechargeable and swappable battery.

$279.

[ Sphero ]

I’m glad Starship is taking this seriously, although it’s hard to know from this video how well the robots behave when conditions are less favorable.

[ Starship ]

Complexity, cost, and power requirements for the actuation of individual robots can play a large factor in limiting the size of robotic swarms. Here we present PCBot, a minimalist robot that can precisely move on an orbital shake table using a bi-stable solenoid actuator built directly into its PCB. This allows the actuator to be built as part of the automated PCB manufacturing process, greatly reducing the impact it has on manual assembly.

[ Paper ]

Drone-racing world champion Thomas Bitmatta designed an indoor drone-racing track for ETH Zurich’s autonomous high-speed racing drones, and in something like half an hour, the autonomous drones were able to master the track at superhuman speeds (with the aid of a motion-capture system).

[ ETH RSL ] via [ BMS Racing ]

Thanks, Paul!

Moravec’s paradox is the observation that many things that are difficult for robots to do come easily to humans, and vice versa. Stanford University professor Chelsea Finn has been tasked to explain this concept to 5 different people: a child, a teen, a college student, a grad student, and an expert.

[ Wired ]

Roberto Calandra from Meta AI gives a talk about “Perceiving, Understanding, and Interacting Through Touch.”

[ UPenn ]

AI advancements have been motivated and inspired by human intelligence for decades. How can we use AI to expand our knowledge and understanding of the world and ourselves? How can we leverage AI to enrich our lives? In his Tanner Lecture, Eric Horvitz, chief science officer at Microsoft, will explore these questions and more, tracing the arc of intelligence from its origins and evolution in humans to its manifestations and prospects in the tools we create and use.

[ UMich ]


Match ID: 1 Score: 1.43 source: spectrum.ieee.org age: 9 days
qualifiers: 1.43 school

Filter efficiency 99.740 (2 matches/770 results)


********** TRAVEL **********
return to top



Prince William and Kate to set out green credentials in US
Mon, 28 Nov 2022 05:00:12 GMT
The Prince and Princess of Wales will travel to Boston for the Earthshot environmental prizes.
Match ID: 0 Score: 35.00 source: www.bbc.co.uk age: 0 days
qualifiers: 35.00 travel(|ing)

Revealed: north of England train line vastly under-reports cancellations
Sun, 27 Nov 2022 13:33:43 GMT

TransPennine Express uses ‘outrageous’ loophole in which services cancelled a day ahead do not appear in statistics

One of the north of England’s main railway companies is taking advantage of an “outrageous” legal loophole that allows it to vastly under-report cancellations, it has emerged.

Figures obtained by the Guardian show that during the October half-term holiday, TransPennine Express (TPE) cancelled 30% of all trains, and at least 20% each subsequent week until 20 November. Most of those services were cancelled in full, but some started or ended at different stations from those advertised on the current May 2022 timetable.

Continue reading...
Match ID: 1 Score: 35.00 source: www.theguardian.com age: 0 days
qualifiers: 35.00 travel(|ing)

Lady Chatterley’s Lover review – Emma Corrin and Jack O’Connell crackle in the gloom
Sun, 27 Nov 2022 12:30:04 GMT

The excellent leads lift this fitfully handsome adaptation of DH Lawrence’s forbidden classic

There’s enough of a spark between Emma Corrin, playing Lady Constance Chatterley, and Jack O’Connell, as smouldering gamekeeper Oliver Mellors, to fuel a sizeable chunk of the national grid. Which is why it’s surprising that the many breathlessly urgent sex scenes in this handsome adaptation of DH Lawrence’s novel seem a little underpowered – a combination of a weirdly unappealing blueish tone to the grade and the agitated camerawork loses some of the erotic tension. It’s a pity, because elsewhere the film is impressive: there’s a feverish wildness to Corrin’s performance, while O’Connell unleashes the full force of his considerable charisma.

• In cinemas now and on Netflix from 2 December

Continue reading...
Match ID: 2 Score: 35.00 source: www.theguardian.com age: 0 days
qualifiers: 35.00 travel(|ing)

The big picture: Bruno Barbey captures life on the road in 1960s Palermo
Sun, 27 Nov 2022 07:00:14 GMT

The Magnum photographer’s image of a family in Sicily recalls Fellini and Visconti in its romantic depiction of everyday Italian life

Bruno Barbey chanced upon this family defying gravity on their dad’s scooter in Palermo in 1963. The French-Moroccan photographer had been travelling in Italy for a couple of years by then, restless for exactly this kind of image, with its seductive mix of humour and authenticity. Has there ever been a better articulation of contrasting roles in the patriarchal family? Father sitting comfortably in his jacket and cap and smiling for the camera, while behind him his possibly pregnant wife sees trouble ahead, as she and their three kids and their big checked bag compete for precarious discomfort.

Barbey, then 22, had gone to Italy to try to find pictures that captured “a national spirit” as the country sought to rediscover the dolce vita in cities still recovering from war. He travelled in an old VW van and in Palermo in particular he located scenes that might have been choreographed for the working-class heroes of the Italian neorealist films, the self-absorbed dreamers of Fellini and Visconti (The Leopard, the latter’s Hollywood epic set in Sicily was released in the same year). Barbey’s camera with its wide angle lens picked up the detail of vigorous crowd scenes among street children and barflies and religious processions. His book, The Italians, now republished, is a time capsule of that already disappearing black-and-white world of priests and mafiosi and nightclub girls and nuns.

Les Italiens (French edition) by Bruno Barbey is republished by delpire & co

Continue reading...
Match ID: 3 Score: 35.00 source: www.theguardian.com age: 0 days
qualifiers: 35.00 travel(|ing)

The Black Bull Inn, Sedbergh: ‘We were properly fed and watered’ – restaurant review
Sun, 27 Nov 2022 06:00:13 GMT
It may be a classic Cumbrian pub, but the Black Bull knows how to please a far more diverse crowd

The Black Bull Inn, 44 Main Street, Sedbergh LA10 5BL (015396 20264, theblackbullsedbergh.co.uk). Snacks £4.50-£6.50, sandwiches £6.95-£14.95, starters £9.95-£10.9, mains £18.50-£27.95, desserts £7.50-£8.50, wines from £28

It would be easy to misread the Black Bull at Sedbergh, located in that part of the Yorkshire Dales which offers a lofty wave to the Lake District. On a weekday lunchtime, the dining rooms fill quickly with parents in expensive waxed outerwear, grabbing lunch with their kids from the eponymous boarding school that dominates the town. A parade of burgers and sandwiches, precision stabbed with cocktail sticks, alongside soups with doorstep slabs of bread, troop out of the kitchen. And a pint please for the pink-cheeked, broad-chested chap with the Range Rover outside.

Continue reading...
Match ID: 4 Score: 35.00 source: www.theguardian.com age: 0 days
qualifiers: 35.00 travel(|ing)

Barbados plans to make Tory MP pay reparations for family’s slave past
Sat, 26 Nov 2022 17:16:51 GMT

Richard Drax reported to have visited Caribbean island for meeting on next steps, including plans for former sugar plantation

The government of Barbados is considering plans to make a wealthy Conservative MP the first individual to pay reparations for his ancestor’s pivotal role in slavery.

The Observer understands that Richard Drax, MP for South Dorset, recently travelled to the Caribbean island for a private meeting with the country’s prime minister, Mia Mottley. A report is now before Mottley’s cabinet laying out the next steps, which include legal action in the event that no agreement is reached with Drax.

Continue reading...
Match ID: 5 Score: 35.00 source: www.theguardian.com age: 1 day
qualifiers: 35.00 travel(|ing)

A Criminal Ratted Out His Friend to the FBI. Now He's Trying to Make Amends.
Sat, 26 Nov 2022 12:00:23 +0000

The FBI paid a convicted sex offender $90,000 to set up his friend and his friend’s mentally ill buddy in a terrorism sting.

The post A Criminal Ratted Out His Friend to the FBI. Now He’s Trying to Make Amends. appeared first on The Intercept.


Match ID: 6 Score: 35.00 source: theintercept.com age: 1 day
qualifiers: 35.00 travel(|ing)

‘It made me think of decorations on a Christmas tree’: Arianna Genghini’s best phone picture
Sat, 26 Nov 2022 10:00:24 GMT

The Italian photographer was in San Francisco’s Chinatown when she came across this grand ivory building

Arianna Genghini’s first stop on her family road trip through four US states was San Francisco. While they went on to travel through Utah, Nevada and Arizona in a rented minivan, it was the California city’s expansive Chinatown that captured the Italian photographer’s eye most powerfully.

“I was exploring with my sister Sofia, and we spotted the Dragon Gate at the entrance to the district. It’s one of the largest Chinese communities outside China, just like a little city inside a bigger one. Stepping inside, I fell in love,” she says.

Continue reading...
Match ID: 7 Score: 35.00 source: www.theguardian.com age: 1 day
qualifiers: 35.00 travel(|ing)

‘Desensitised’ ex-IS followers remain threats, Shamima Begum hearing told
Thu, 24 Nov 2022 19:14:40 GMT

Home Office argues people trafficked to Syria were exposed to extreme violence which poses ‘almighty problem’

People trafficked to Syria and radicalised remain threats to national security as they may be desensitised after exposure to extreme violence, the Home Office has argued, in contesting Shamima Begum’s appeal against the removal of her British citizenship.

Begum was 15 when she travelled from her home in Bethnal Green, east London, through Turkey and into territory controlled by Islamic State (IS). After she was found, nine months pregnant in a Syrian refugee camp in February 2019, the then home secretary, Sajid Javid, revoked her British citizenship on national security grounds.

Continue reading...
Match ID: 8 Score: 30.00 source: www.theguardian.com age: 3 days
qualifiers: 30.00 travel(|ing)

IEEE SIGHT Founder Amarnath Raja Dies at 65
Wed, 23 Nov 2022 19:00:01 +0000


Amarnath Raja

Founder of IEEE Special Interest Group on Humanitarian Technology

Senior member, 65; died 5 September

Raja founded the IEEE Special Interest Group on Humanitarian Technology (SIGHT) in 2011. The global network partners with underserved communities and local organizations to leverage technology for sustainable development.


He began his career in 1980 as a management trainee at the National Dairy Development Board, in Anand, India. A year later he joined Milma, a state government marketing cooperative for the dairy industry, in Thiruvananthapuram, as a manager of planning and systems. After 15 years with Milma, he joined IBM in Tokyo as a manager of technology services.

In 2000 he helped found InApp, a company in Palo Alto, Calif., that provides software development services. He served as its CEO and executive chairman until he died.

Raja was the 2011–2012 chair of the IEEE Humanitarian Activities Committee. He wanted to find a way to mobilize engineers to apply their expertise to develop sustainable solutions that help their local community. To achieve the goal, in 2011 he founded IEEE SIGHT. Today there are more than 150 SIGHT groups in 50 countries that are working on projects such as sustainable irrigation and photovoltaic systems.

For his efforts, he received the 2015 Larry K. Wilson Transnational Award from IEEE Member and Geographic Activities. The award honors effective efforts to fulfill one or more of the MGA goals and strategic objectives related to transnational activities.

For the past two years, Rajah chaired the IEEE Admission and Advancement Review Panel, which approves applications for new members and elevations to higher membership grades.

He was a member of the International Centre for Free and Open Source Software’s advisory board. The organization was established by the government of Kerala, India, to facilitate the development and distribution of free, open-source software. Raja also served on the board of directors at Bedroc, an IT staffing and support firm in Nashville.

He earned his bachelor’s degree in chemical engineering in 1979 from the Indian Institute of Technology in Delhi.

Donn S. Terry

Software engineer

Life member, 74; died 14 September

Terry was a computer engineer at Hewlett-Packard in Fort Collins, Colo., for 18 years.

He joined HP in 1978 as a software developer, and he chaired the Portable Operating System Interface (POSIX) working group. POSIX is a family of standards specified by the IEEE Computer Society for maintaining compatibility among operating systems. While there, he also developed software for the Motorola 68000 microprocessor.

Terry left HP in 1997 to join Softway Solutions, also in Fort Collins, where he developed tools for Interix, a Unix subsystem of the Windows NT operating system. After Microsoft acquired Softway in 1999, he stayed on as a senior software development engineer at its Seattle location. There he worked on static analysis, a method of computer-program debugging that is done by examining the code without executing the program. He also helped to create SAL, a Microsoft source-code annotation language, which was developed to make code design easier to understand and analyze.

Terry retired in 2014. He loved science fiction, boating, cooking, and spending time with his family, according to his daughter, Kristin.

He earned a bachelor’s degree in electrical engineering in 1970 and a Ph.D. in computer science in 1978, both from the University of Washington in Seattle.

William Sandham

Signal processing engineer

Life senior member, 70; died 25 August

Sandham applied his signal processing expertise to a wide variety of disciplines including medical imaging, biomedical data analysis, and geophysics.

He began his career in 1974 as a physicist at the University of Glasgow. While working there, he pursued a Ph.D. in geophysics. He earned his degree in 1981 at the University of Birmingham in England. He then joined the British National Oil Corp. (now Britoil) as a geophysicist.

In 1986 he left to join the University of Strathclyde, in Glasgow, as a lecturer in the signal processing department. During his time at the university, he published more than 200 journal papers and five books that addressed blood glucose measurement, electrocardiography data analysis and compression, medical ultrasound, MRI segmentation, prosthetic limb fitting, and sleep apnea detection.

Sandham left the university in 2003 and founded Scotsig, a signal processing consulting and research business, also in Glasgow.

He served on the editorial board of IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing and the EURASIP Journal on Advances in Signal Processing.

He was a Fellow of the Institution of Engineering and Technology and a member of the European Association of Geoscientists and Engineers and the Society of Exploration Geophysicists.

Sandham earned his bachelor’s degree in electrical engineering in 1974 from the University of Glasgow.

Stephen M. Brustoski

Loss-prevention engineer

Life member, 69; died 6 January

For 40 years, Brustoski worked as a loss-prevention engineer for insurance company FM Global. He retired from the company, which was headquartered in Johnston, R.I., in 2014.

He was an elder at his church, CrossPoint Alliance, in Akron, Ohio, where he oversaw administrative work and led Bible studies and prayer meetings. He was an assistant scoutmaster for 12 years, and he enjoyed hiking and traveling the world with his family, according to his wife, Sharon.

Brustoski earned a bachelor’s degree in electrical engineering in 1973 from the University of Akron.

Harry Letaw

President and CEO of Essex Corp.

Life senior member, 96; died 7 May 2020

As president and CEO of Essex Corp., in Columbia, Md., Letaw handled the development and commercialization of optoelectronic and signal processing solutions for defense, intelligence, and commercial customers. He retired in 1995.

He had served in World War II as an aviation engineer for the U.S. Army. After he was discharged, he earned a bachelor’s degree in chemistry, then a master’s degree and Ph.D., all from the University of Florida in Gainesville, in 1949, 1951, and 1952.

After he graduated, he became a postdoctoral assistant at the University of Illinois at Urbana-Champaign. He left to become a researcher at Raytheon Technologies, an aerospace and defense manufacturer, in Wayland, Mass.

Letaw was a member of the American Physical Society and the Phi Beta Kappa and Sigma Xi honor societies.


Match ID: 9 Score: 25.00 source: spectrum.ieee.org age: 4 days
qualifiers: 25.00 travel(|ing)

People in the UK: tell us why you are unable to work despite wanting to
Tue, 22 Nov 2022 13:13:45 GMT

We’d like to find out the reasons that prevent people in the UK from working as much as they’d like – whether it’s childcare, health issues, housing or travel

We’re keen to hear from people in the UK who would like to work or work more than they currently do and find out what prevents them from doing so.

Whether it is your health, childcare, travel or being unable to find housing, or anything else that stands in the way, we’d like to hear from you. It doesn’t matter whether you’re actively looking for work or not.

Continue reading...
Match ID: 10 Score: 20.00 source: www.theguardian.com age: 5 days
qualifiers: 20.00 travel(|ing)

The Women Behind ENIAC
Mon, 21 Nov 2022 19:00:01 +0000


If you looked at the pictures of those working on the first programmable, general-purpose all-electronic computer, you would assume that J. Presper Eckert and John W. Mauchly were the only ones who had a hand in its development. Invented in 1945, the Electronic Numerical Integrator and Computer (ENIAC) was built to improve the accuracy of U.S. artillery during World War II. The two men and their team built the hardware. But hidden behind the scenes were six women—Jean Bartik, Kathleen Antonelli, Marlyn Meltzer, Betty Holberton, Frances Spence, and Ruth Teitelbaum—who programmed the computer to calculate artillery trajectories in seconds.

The U.S. Army recruited the women in 1942 to work as so-called human computers: mathematicians who did calculations using a mechanical desktop calculator.

For decades, the six women were largely unknown. But thanks to Kathy Kleiman, cofounder of ICANN (the Internet Corporation for Assigned Names and Numbers), the world is getting to know the ENIAC programmers’ contributions to computer science. This year Kleiman’s book Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer was published. It delves into the women’s lives and the pioneering work they did. The book follows an award-winning documentary, The Computers: The Remarkable Story of the ENIAC Programmers, which Kleiman helped produce. It premiered at the 2014 Seattle International Film Festival and won Best Documentary Short at the 2016 U.N. Association Film Festival.

Kleiman plans to give a presentation next year about the programmers as part of the IEEE Industry Hub Initiative’s Impact Speaker series. The initiative aims to introduce industry professionals and academics to IEEE and its offerings.

Planning for the event, which is scheduled to be held in Silicon Valley, is underway. Details are to be announced before the end of the year.

The Institute spoke with Kleiman, who teaches Internet technology and governance for lawyers at American University, in Washington, D.C., about her mission to publicize the programmers’ contributions. The interview has been condensed and edited for clarity.

Image of Kathy Kleiman and her book cover to the right. Kathy Kleiman delves into the ENIAC programmers’ lives and the pioneering work they did in her book Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer.Kathy Kleiman

The Institute: What inspired you to film the documentary?

Kathy Kleiman: The ENIAC was a secret project of the U.S. Army during World War II. It was the first general-purpose, programmable, all-electronic computer—the key to the development of our smartphones, laptops, and tablets today. The ENIAC was a highly experimental computer, with 18,000 vacuum tubes, and some of the leading technologists at the time didn’t think it would work, but it did.

Six months after the war ended, the Army decided to reveal the existence of ENIAC and heavily publicize it. To do so, in February 1946 the Army took a lot of beautiful, formal photos of the computer and the team of engineers that developed it. I found these pictures while researching women in computer science as an undergraduate at Harvard. At the time, I knew of only two women in computer science: Ada Lovelace and then U.S. Navy Capt. Grace Hopper. [Lovelace was the first computer programmer; Hopper co-developed COBOL, one of the earliest standardized computer languages.] But I was sure there were more women programmers throughout history, so I went looking for them and found the images taken of the ENIAC.

The pictures fascinated me because they had both men and women in them. Some of the photos had just women in front of the computer, but they weren’t named in any of the photos’ captions. I tracked them down after I found their identities, and four of six original ENIAC programmers responded. They were in their late 70s at the time, and over the course of many years they told me about their work during World War II and how they were recruited by the U.S. Army to be “human computers.”

Eckert and Mauchly promised the U.S. Army that the ENIAC could calculate artillery trajectories in seconds rather than the hours it took to do the calculations by hand. But after they built the 2.5-meter-tall by 24-meter-long computer, they couldn’t get it to work. Out of approximately 100 human computers working for the U.S. Army during World War II, six women were chosen to write a program for the computer to run differential calculus equations. It was hard because the program was complex, memory was very limited, and the direct programming interface that connected the programmers to the ENIAC was hard to use. But the women succeeded. The trajectory program was a great success. But Bartik, McNulty, Meltzer, Snyder, Spence, and Teitelbaum’s contributions to the technology were never recognized. Leading technologists and the public never knew of their work.

I was inspired by their story and wanted to share it. I raised funds, researched and recorded 20 hours of broadcast-quality oral histories with the ENIAC programmers—which eventually became the documentary. It allows others to see the women telling their story.

“If we open the doors to history, I think it would make it a lot easier to recruit the wonderful people we are trying to urge to enter engineering, computer science, and related fields.”

Why was the accomplishment of the six women important?

Kleiman: The ENIAC is considered by many to have launched the information age.

We generally think of women leaving the factory and farm jobs they held during World War II and giving them back to the men, but after ENIAC was completed, the six women continued to work for the U.S. Army. They helped world-class mathematicians program the ENIAC to complete “hundred-year problems” [problems that would take 100 years to solve by hand]. They also helped teach the next generation of ENIAC programmers, and some went on to create the foundations of modern programming.

What influenced you to continue telling the ENIAC programmers’ story in your book?

Kleiman: After my documentary premiered at the film festival, young women from tech companies who were in the audience came up to me to share why they were excited to learn the programmers’ story. They were excited to learn that women were an integral part of the history of early computing programming, and were inspired by their stories. Young men also came up to me and shared stories of their grandmothers and great-aunts who programmed computers in the 1960s and ’70s and inspired them to explore careers in computer science.

I met more women and men like the ones in Seattle all over the world, so it seemed like a good idea to tell the full story along with its historical context and background information about the lives of the ENIAC programmers, specifically what happened to them after the computer was completed.

What did you find most rewarding about sharing their story?

Kleiman: It was wonderful and rewarding to get to know the ENIAC programmers. They were incredible, wonderful, warm, brilliant, and exceptional people. Talking to the people who created the programming was inspiring and helped me to see that I could work at the cutting edge too. I entered Internet law as one of the first attorneys in the field because of them.

What I enjoy most is that the women’s experiences inspire young people today just as they inspired me when I was an undergraduate.

collage of vintage photographs of six women. Clockwise from top left: Jean Bartik, Kathleen Antonelli, Betty Holberton, Ruth Teitelbaum, Marlyn Meltzer, Frances Spence.Clockwise from top left: The Bartik Family; Bill Mauchly, Priscilla Holberton, Teitelbaum Family, Meltzer Family, Spence Family

Is it important to highlight the contributions made throughout history by women in STEM?

Kleiman: [Actor] Geena Davis founded the Geena Davis Institute on Gender in Media, which works collaboratively with the entertainment industry to dramatically increase the presence of female characters in media. It’s based on the philosophy of “you can’t be what you can’t see.”

That philosophy is both right and wrong. I think you can be what you can’t see, and certainly every pioneer who has ever broken a racial, ethnic, religious, or gender barrier has done so. However, it’s certainly much easier to enter a field if there are role models who look like you. To that end, many computer scientists today are trying to diversify the field. Yet I know from my work in Internet policy and my recent travels across the country for my book tour that many students still feel locked out because of old stereotypes in computing and engineering. By sharing strong stories of pioneers in the fields who are women and people of color, I hope we can open the doors to computing and engineering. I hope that sharing this history, and herstory, makes it much easier to recruit young people to join engineering, computer science, and related fields.

Are you planning on writing more books or producing another documentary?

Kleiman: I would like to continue the story of the ENIAC programmers and write about what happened to them after the war ended. I hope that my next book will delve into the 1950s and uncover more about the history of the Universal Automatic Computer, the first modern commercial computer series, and the diverse group of people who built and programmed it.


Match ID: 11 Score: 15.00 source: spectrum.ieee.org age: 6 days
qualifiers: 15.00 travel(|ing)

How the First Transistor Worked
Sun, 20 Nov 2022 16:00:00 +0000


The vacuum-tube triode wasn’t quite 20 years old when physicists began trying to create its successor, and the stakes were huge. Not only had the triode made long-distance telephony and movie sound possible, it was driving the entire enterprise of commercial radio, an industry worth more than a billion dollars in 1929. But vacuum tubes were power-hungry and fragile. If a more rugged, reliable, and efficient alternative to the triode could be found, the rewards would be immense.

The goal was a three-terminal device made out of semiconductors that would accept a low-current signal into an input terminal and use it to control the flow of a larger current flowing between two other terminals, thereby amplifying the original signal. The underlying principle of such a device would be something called the field effect—the ability of electric fields to modulate the electrical conductivity of semiconductor materials. The field effect was already well known in those days, thanks to diodes and related research on semiconductors.


A photo of a cutaway of a point-contact transistor. In the cutaway photo of a point-contact transistor, two thin conductors are visible; these connect to the points that make contact with a tiny slab of germanium. One of these points is the emitter and the other is the collector. A third contact, the base, is attached to the reverse side of the germanium.AT&T ARCHIVES AND HISTORY CENTER

But building such a device had proved an insurmountable challenge to some of the world’s top physicists for more than two decades. Patents for transistor-like devices had been filed starting in 1925, but the first recorded instance of a working transistor was the legendary point-contact device built at AT&T Bell Telephone Laboratories in the fall of 1947.

Though the point-contact transistor was the most important invention of the 20th century, there exists, surprisingly, no clear, complete, and authoritative account of how the thing actually worked. Modern, more robust junction and planar transistors rely on the physics in the bulk of a semiconductor, rather than the surface effects exploited in the first transistor. And relatively little attention has been paid to this gap in scholarship.

It was an ungainly looking assemblage of germanium, plastic, and gold foil, all topped by a squiggly spring. Its inventors were a soft-spoken Midwestern theoretician, John Bardeen, and a voluble and “somewhat volatile” experimentalist, Walter Brattain. Both were working under William Shockley, a relationship that would later prove contentious. In November 1947, Bardeen and Brattain were stymied by a simple problem. In the germanium semiconductor they were using, a surface layer of electrons seemed to be blocking an applied electric field, preventing it from penetrating the semiconductor and modulating the flow of current. No modulation, no signal amplification.


Sometime late in 1947 they hit on a solution. It featured two pieces of barely separated gold foil gently pushed by that squiggly spring into the surface of a small slab of germanium.

Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate. Indeed, the current edition of that bible of undergraduate EEs, The Art of Electronics by Horowitz and Hill, makes no mention of the point-contact transistor at all, glossing over its existence by erroneously stating that the junction transistor was a “Nobel Prize-winning invention in 1947.” But the transistor that was invented in 1947 was the point-contact; the junction transistor was invented by Shockley in 1948.

So it seems appropriate somehow that the most comprehensive explanation of the point-contact transistor is contained within John Bardeen’s lecture for that Nobel Prize, in 1956. Even so, reading it gives you the sense that a few fine details probably eluded even the inventors themselves. “A lot of people were confused by the point-contact transistor,” says Thomas Misa, former director of the Charles Babbage Institute for the History of Science and Technology, at the University of Minnesota.

Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate.

A year after Bardeen’s lecture, R. D. Middlebrook, a professor of electrical engineering at Caltech who would go on to do pioneering work in power electronics, wrote: “Because of the three-dimensional nature of the device, theoretical analysis is difficult and the internal operation is, in fact, not yet completely understood.”

Nevertheless, and with the benefit of 75 years of semiconductor theory, here we go. The point-contact transistor was built around a thumb-size slab of n-type germanium, which has an excess of negatively charged electrons. This slab was treated to produce a very thin surface layer that was p-type, meaning it had an excess of positive charges. These positive charges are known as holes. They are actually localized deficiencies of electrons that move among the atoms of the semiconductor very much as a real particle would. An electrically grounded electrode was attached to the bottom of this slab, creating the base of the transistor. The two strips of gold foil touching the surface formed two more electrodes, known as the emitter and the collector.

That’s the setup. In operation, a small positive voltage—just a fraction of a volt—is applied to the emitter, while a much larger negative voltage—4 to 40 volts—is applied to the collector, all with reference to the grounded base. The interface between the p-type layer and the n-type slab created a junction just like the one found in a diode: Essentially, the junction is a barrier that allows current to flow easily in only one direction, toward lower voltage. So current could flow from the positive emitter across the barrier, while no current could flow across that barrier into the collector.

A photo of rows of people sitting in front of microscopes and stacks of transistors. The Western Electric Type-2 point-contact transistor was the first transistor to be manufactured in large quantities, in 1951, at Western Electric’s plant in Allentown, Pa. By 1960, when this photo was taken, the plant had switched to producing junction transistors.AT&T ARCHIVES AND HISTORY CENTER

Now, let’s look at what happens down among the atoms. First, we’ll disconnect the collector and see what happens around the emitter without it. The emitter injects positive charges—holes—into the p-type layer, and they begin moving toward the base. But they don’t make a beeline toward it. The thin layer forces them to spread out laterally for some distance before passing through the barrier into the n-type slab. Think about slowly pouring a small amount of fine powder onto the surface of water. The powder eventually sinks, but first it spreads out in a rough circle.

Now we connect the collector. Even though it can’t draw current by itself through the barrier of the p-n junction, its large negative voltage and pointed shape do result in a concentrated electric field that penetrates the germanium. Because the collector is so close to the emitter, and is also negatively charged, it begins sucking up many of the holes that are spreading out from the emitter. This charge flow results in a concentration of holes near the p-n barrier underneath the collector. This concentration effectively lowers the “height” of the barrier that would otherwise prevent current from flowing between the collector and the base. With the barrier lowered, current starts flowing from the base into the collector—much more current than what the emitter is putting into the transistor.

The amount of current depends on the height of the barrier. Small decreases or increases in the emitter’s voltage cause the barrier to fluctuate up and down, respectively. Thus very small changes in the emitter current control very large changes at the collector, so voilà! Amplification. (EEs will notice that the functions of base and emitter are reversed compared with those in later transistors, where the base, not the emitter, controls the response of the transistor.)
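
The exponential sensitivity behind that amplification can be illustrated with a toy calculation (a sketch only, using the textbook Shockley diode relation rather than the actual three-dimensional physics of the point-contact device; the saturation current below is an assumed value):

# Toy model (not the actual point-contact device physics): current across a junction
# rises exponentially as the effective barrier is lowered, so a small change on the
# input side produces a large change in output current.
import math

V_T = 0.0259          # thermal voltage at room temperature, volts
I_S = 1e-9            # assumed saturation current, amperes (illustrative only)

def junction_current(barrier_lowering_volts):
    """Shockley diode relation: current vs. how far the barrier is pulled down."""
    return I_S * (math.exp(barrier_lowering_volts / V_T) - 1.0)

for dv in (0.30, 0.32, 0.34):   # small steps in effective barrier lowering
    print(f"barrier lowered by {dv:.2f} V -> current {junction_current(dv)*1e3:.2f} mA")

# Each extra 20 mV of barrier lowering multiplies the current by roughly
# exp(0.02 / 0.0259), about 2.2x -- the exponential leverage described above.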

Ungainly and fragile though it was, it was a semiconductor amplifier, and its progeny would change the world. And its inventors knew it. The fateful day was 16 December 1947, when Brattain hit on the idea of using a plastic triangle belted by a strip of gold foil, with that tiny slit separating the emitter and collector contacts. This configuration gave reliable power gain, and the duo knew then that they had succeeded. In his carpool home that night, Brattain told his companions he’d just done “the most important experiment that I’d ever do in my life” and swore them to secrecy. The taciturn Bardeen, too, couldn’t resist sharing the news. As his wife, Jane, prepared dinner that night, he reportedly said, simply, “We discovered something today.” With their children scampering around the kitchen, she responded, “That’s nice, dear.”

It was a transistor, at last, but it was pretty rickety. The inventors later hit on the idea of electrically forming the collector by passing large currents through it during the transistor’s manufacturing. This technique enabled them to get somewhat larger current flows that weren’t so tightly confined within the surface layer. The electrical forming was a bit hit-or-miss, though. “They would just throw out the ones that didn’t work,” Misa notes.

Nevertheless, point-contact transistors went into production at many companies, under license to AT&T, and, in 1951, at AT&T’s own manufacturing arm, Western Electric. They were used in hearing aids, oscillators, telephone-routing gear, in an experimental TV receiver built at RCA, and in the Tradic, the first airborne digital computer, among other systems. In fact, point-contact transistors remained in production until 1966, in part due to their superior speed compared with the alternatives.

The fateful day was 16 December 1947, when Brattain hit on the idea of using a plastic triangle belted by a strip of gold foil…

The Bell Labs group wasn’t alone in its successful pursuit of a transistor. In Aulnay-sous-Bois, a suburb northeast of Paris, two German physicists, Herbert Mataré and Heinrich Welker, were also trying to build a three-terminal semiconductor amplifier. Working for a French subsidiary of Westinghouse, they were following up on very intriguing observations Mataré had made while developing germanium and silicon rectifiers for the German military in 1944. The two succeeded in creating a reliable point-contact transistor in June 1948.

They were astounded, a week or so later, when Bell Labs finally revealed the news of its own transistor, at a press conference on 30 June 1948. Though they were developed completely independently, and in secret, the two devices were more or less identical.

Here the story of the transistor takes a weird turn, breathtaking in its brilliance and also disturbing in its details. Bardeen’s and Brattain’s boss, William Shockley, was furious that his name was not included with Bardeen’s and Brattain’s on the original patent application for the transistor. He was convinced that Bardeen and Brattain had merely spun his theories about using fields in semiconductors into their working device, and had failed to give him sufficient credit. Yet in 1945, Shockley had built a transistor based on those very theories, and it hadn’t worked.

A photo of a man in a jacket placing a transistor in a device. In 1953, RCA engineer Gerald Herzog led a team that designed and built the first "all-transistor" television (although, yes, it had a cathode-ray tube). The team used point-contact transistors produced by RCA under a license from Bell Labs. TRANSISTOR MUSEUM JERRY HERZOG ORAL HISTORY

At the end of December, barely two weeks after the initial success of the point-contact transistor, Shockley traveled to Chicago for the annual meeting of the American Physical Society. On New Year’s Eve, holed up in his hotel room and fueled by a potent mix of jealousy and indignation, he began designing a transistor of his own. In three days he scribbled some 30 pages of notes. By the end of the month, he had the basic design for what would become known as the bipolar junction transistor, or BJT, which would eventually supersede the point-contact transistor and reign as the dominant transistor until the late 1970s.

A photo of a group of transistors With insights gleaned from the Bell Labs work, RCA began developing its own point-contact transistors in 1948. The group included the seven shown here—four of which were used in RCA's experimental, 22-transistor television set built in 1953. These four were the TA153 [top row, second from left], the TA165 [top, far right], the TA156 [bottom row, middle] and the TA172 [bottom, right].TRANSISTOR MUSEUM JONATHAN HOPPE COLLECTION

The BJT was based on Shockley’s conviction that charges could, and should, flow through the bulk semiconductors rather than through a thin layer on their surface. The device consisted of three semiconductor layers, like a sandwich: an emitter, a base in the middle, and a collector. They were alternately doped, so there were two versions: n-type/p-type/n-type, called “NPN,” and p-type/n-type/p-type, called “PNP.”

The BJT relies on essentially the same principles as the point-contact, but it uses two p-n junctions instead of one. When used as an amplifier, a positive voltage applied to the base allows a small current to flow between it and the emitter, which in turn controls a large current between the collector and emitter.

Consider an NPN device. The base is p-type, so it has excess holes. But it is very thin and lightly doped, so there are relatively few holes. A tiny fraction of the electrons flowing in from the emitter combine with these holes and are removed from circulation, while the vast majority (more than 97 percent) of the electrons keep flowing through the thin base and into the collector, setting up a strong current flow.

But those few electrons that do combine with holes must be drained from the base in order to maintain the p-type nature of the base and the strong flow of current through it. That removal of the “trapped” electrons is accomplished by a relatively small flow of current through the base. That trickle of current enables the much stronger flow of current into the collector, and then out of the collector and into the collector circuit. So, in effect, the small base current is controlling the larger collector circuit.
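
A back-of-the-envelope figure makes that leverage concrete (a sketch built on the “more than 97 percent” figure above; the 99 percent case is an added comparison, not a number from the article):

# alpha = fraction of emitter electrons that make it across the base to the collector;
# the remainder recombine in the base and must be resupplied as base current.
for alpha in (0.97, 0.99):
    beta = alpha / (1.0 - alpha)   # collector current divided by base current
    print(f"alpha = {alpha:.2f} -> current gain beta ~ {beta:.0f}")

# alpha = 0.97 gives beta of roughly 32; alpha = 0.99 gives roughly 99:
# a trickle of base current controls a much larger collector current.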

Electric fields come into play, but they do not modulate the current flow, which the early theoreticians thought would have to happen for such a device to function. Here’s the gist: Both of the p-n junctions in a BJT are straddled by depletion regions, in which electrons and holes combine and there are relatively few mobile charge carriers. Voltage applied across the junctions sets up electric fields at each, which push charges across those regions. These fields enable electrons to flow all the way from the emitter, across the base, and into the collector.

In the BJT, “the applied electric fields affect the carrier density, but because that effect is exponential, it only takes a little bit to create a lot of diffusion current,” explains Ioannis “John” Kymissis, chair of the department of electrical engineering at Columbia University.

An illustration of a point-contact transistor. The very first transistors were a type known as point contact, because they relied on metal contacts touching the surface of a semiconductor. They ramped up output current—labeled “Collector current” in the top diagram—by using an applied voltage to overcome a barrier to charge flow. Small changes to the input, or “emitter,” current modulate this barrier, thus controlling the output current.

An illustration of a Bipolar Junction Transistor The bipolar junction transistor accomplishes amplification using much the same principles but with two semiconductor interfaces, or junctions, rather than one. As with the point-contact transistor, an applied voltage overcomes a barrier and enables current flow that is modulated by a smaller input current. In particular, the semiconductor junctions are straddled by depletion regions, across which the charge carriers diffuse under the influence of an electric field.Chris Philpot

The BJT was more rugged and reliable than the point-contact transistor, and those features primed it for greatness. But it took a while for that to become obvious. The BJT was the technology used to make integrated circuits, from the first ones in the early 1960s all the way until the late 1970s, when metal-oxide-semiconductor field-effect transistors (MOSFETs) took over. In fact, it was these field-effect transistors, first the junction field-effect transistor and then MOSFETs, that finally realized the decades-old dream of a three-terminal semiconductor device whose operation was based on the field effect—Shockley’s original ambition.

Such a glorious future could scarcely be imagined in the early 1950s, when AT&T and others were struggling to come up with practical and efficient ways to manufacture the new BJTs. Shockley himself went on to literally put the silicon into Silicon Valley. He moved to Palo Alto and in 1956 founded a company that led the switch from germanium to silicon as the electronic semiconductor of choice. Employees from his company would go on to found Fairchild Semiconductor, and then Intel.

Later in his life, after losing his company because of his terrible management, he became a professor at Stanford and began promulgating ungrounded and unhinged theories about race, genetics, and intelligence. In 1951 Bardeen left Bell Labs to become a professor at the University of Illinois at Urbana-Champaign, where he won a second Nobel Prize for physics, for a theory of superconductivity. (He is the only person to have won two Nobel Prizes in physics.) Brattain stayed at Bell Labs until 1967, when he joined the faculty at Whitman College, in Walla Walla, Wash.

Shockley died a largely friendless pariah in 1989. But his transistor would change the world, though it was still not clear as late as 1953 that the BJT would be the future. In an interview that year, Donald G. Fink, who would go on to help establish the IEEE a decade later, mused, “Is it a pimpled adolescent, now awkward, but promising future vigor? Or has it arrived at maturity, full of languor, surrounded by disappointments?”

It was the former, and all of our lives are so much the better because of it.

This article appears in the December 2022 print issue as “The First Transistor and How it Worked.”


Match ID: 12 Score: 10.00 source: spectrum.ieee.org age: 7 days
qualifiers: 10.00 travel(|ing)

Was the Killing of a Migrant by a Former ICE Warden a Hate Crime or a Terrible Accident?
Sat, 19 Nov 2022 11:00:49 +0000

At Fivemile Tank, a watering hole in the bleak desert of West Texas, two men pulled up in a truck. One aimed a gun into the brush.

The post Was the Killing of a Migrant by a Former ICE Warden a Hate Crime or a Terrible Accident? appeared first on The Intercept.


Match ID: 13 Score: 5.00 source: theintercept.com age: 8 days
qualifiers: 5.00 travel(|ing)

GO for Artemis I
Tue, 15 Nov 2022 16:28:00 +0100
Image:

‘Twas the day before launch and all across the globe, people await liftoff for Artemis I with hope.

NASA’s Space Launch System (SLS) rocket and the Orion spacecraft with its European Service Module are seen here on Launch Pad 39B at NASA's Kennedy Space Center in Florida, USA, on 12 November.

After much anticipation, NASA launch authorities have given the GO for the first opportunity for launch: tomorrow, 16 November with a two-hour launch window starting at 07:04 CET (06:04 GMT, 1:04 local time).

Artemis I is the first mission in a large programme to send astronauts around and on the Moon sustainably. This uncrewed first launch will see the Orion spacecraft travel to the Moon, enter an elongated orbit around our satellite and then return to Earth, powered by the European-built service module that supplies electricity, propulsion, fuel, water and air as well as keeping the spacecraft operating at the right temperature. 

The European Service Modules are made from components supplied by over 20 companies in ten ESA Member States and the USA. As the first European Service Module sits atop the SLS rocket on the launchpad, the second is only 8 km away being integrated with the Orion crew capsule for the first crewed mission – Artemis II. The third and fourth European Service Modules – that will power astronauts to a Moon landing – are in production in Bremen, Germany.

With a 16 November launch, the three-week Artemis I mission would end on 11 December with a splashdown in the Pacific Ocean. The European Service Module detaches from the Orion Crew Module before splashdown and burns up harmlessly in the atmosphere, its job complete after taking Orion to the Moon and back safely. 

Backup Artemis I launch dates include 19 November. Check ESA’s Orion blog for updates and more details. Watch the launch live on ESA Web TV from 15 Nov, 20:30 GMT (21:30 CET) when the rocket fuelling starts, and from 16 November 00:00 GMT/01:00 CET for the launch coverage. 


Match ID: 14 Score: 5.00 source: www.esa.int age: 12 days
qualifiers: 5.00 travel(|ing)

The EV Transition Explained
Sun, 13 Nov 2022 14:17:59 +0000


From the outside, there is little to tell a basic Ford XL ICE F-150 from the electric Ford PRO F-150 Lightning. Exterior changes could pass for a typical model-year refresh. While there are LED headlight and rear-light improvements along with a more streamlined profile, the Lightning’s cargo box is identical to that of an ICE F-150, complete with tailgate access steps and a jobsite ruler. The Lightning’s interior also has a familiar feel.

But when you pop the Lightning’s hood, you find that the internal combustion engine has gone missing. In its place is a front trunk (“frunk”), while concealed beneath is the new skateboard frame with its dual electric motors (one for each axle) and a big 98-kilowatt-hour standard (and 131-kWh extended-range) battery pack. The combination permits the Lightning to travel 230 miles (370 kilometers) without recharging and go from 0 to 60 miles per hour in 4.5 seconds, making it the fastest F-150 available despite its much heavier weight.
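
Some rough arithmetic on those quoted figures gives a feel for the energy budget (a sketch that takes the 98-kWh pack and 230-mile range at face value; the 11-kW home charger is an assumed example, and real-world consumption varies with load, speed, and temperature):

# Rough energy-use arithmetic from the figures quoted above (illustrative only).
pack_kwh = 98.0
range_miles = 230.0

kwh_per_mile = pack_kwh / range_miles
miles_per_kwh = range_miles / pack_kwh
print(f"{kwh_per_mile:.2f} kWh/mile, or about {miles_per_kwh:.1f} miles per kWh")

# Hypothetical charging estimate: at an assumed steady 11-kW home charger,
# a full 98-kWh charge would take roughly 9 hours (ignoring charging losses).
print(f"~{pack_kwh / 11.0:.1f} hours for a full charge at a steady 11 kW")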

Invisible, too, are the Lightning’s sophisticated computing and software systems. The 2016 ICE F-150 reportedly had about 150 million lines of code. The Lightning’s software suite may even be larger than that of its ICE counterpart (Ford will not confirm this). The Lightning replaces the Ford F-150 ICE-related software in the electronic control units (ECUs) with new “intelligent” software and systems that control the main motors, manage the battery system, and provide charging information to the driver.

The EV Transition Explained


This is the first in a series of articles presenting just some of the technological and social challenges in moving from vehicles with internal-combustion engines to electric vehicles. These must be addressed at scale before EVs can happen. Each challenge entails a multitude of interacting systems, subsystems, sub-subsystems, and so on. In reviewing each article, readers should bear in mind Nobel Prize–winning physicist Richard Feynman’s admonition: “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.”

Ford says the Lightning’s software will identify nearby public charging stations and tell drivers when to recharge. To increase the accuracy of the range calculation, the software will draw upon similar operational data communicated from other Lightning owners that Ford will dynamically capture, analyze, and feed back to the truck.
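
Ford has not published how that calculation works. Purely as an illustration of the idea, a range estimator could blend the truck’s own recent consumption with a fleet average for similar conditions, leaning on the fleet when local history is thin; the weighting constant and the example numbers in the sketch below are hypothetical:

# Illustrative sketch (not Ford's algorithm, which is not public): blend the truck's
# own recent energy use with the average reported by similar vehicles on similar
# routes, trusting the fleet more when local history is thin.
def estimated_range_miles(battery_kwh_remaining,
                          own_kwh_per_mile, own_miles_logged,
                          fleet_kwh_per_mile, fleet_weight_miles=500.0):
    # Weight the truck's own data by how much history backs it; otherwise lean on the fleet.
    w = own_miles_logged / (own_miles_logged + fleet_weight_miles)
    blended = w * own_kwh_per_mile + (1.0 - w) * fleet_kwh_per_mile
    return battery_kwh_remaining / blended

# Example: 60 kWh left, 100 miles of local history at 0.50 kWh/mile,
# fleet average for this route and weather of 0.43 kWh/mile.
print(f"{estimated_range_miles(60.0, 0.50, 100.0, 0.43):.0f} miles")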

For executives, however, Lightning’s software is not only a big consumer draw but also among the biggest threats to its success. Ford CEO Jim Farley told the New York Times that software bugs worry him most. To mitigate the risk, Ford has incorporated an over-the-air (OTA) software-update capability for both bug fixes and feature upgrades. Yet with an incorrect setting in the Lightning’s tire pressure monitoring system requiring a software fix only a few weeks after its initial delivery, and with some new Ford Mustang Mach-Es recalled because of misconfigured software caused by a “service update or as an over-the-air update,” Farley’s worries probably won’t be soothed for some time.

Ford calls the Lightning a “Model T moment for the 21st century” and the company's US $50 billion investment in EVs is a bet-the-company proposition. Short-term success looks likely, as Ford closed Lightning preorders after reaching 200,000 and with sales expectations of 150,000 a year by 2024.

A construction crew working on a site with a Ford F-150's frunk open showing tools. The F-150 Lightning's front trunk (also known as a frunk) helps this light-duty electric pickup haul even more. Ford

However, long-term success is not guaranteed. “Ford is walking a tightrope, trying at the same time to convince everyone that EVs are the same as ICE vehicles yet different,” says University of Michigan professor emeritus John Leslie King, who has long studied the auto industry. Ford and other automakers will need to convince tens of millions of customers to switch to EVs to meet the Biden Administration’s decarbonization goals of 50 percent new auto sales being non-ICE vehicles by 2030.

King points out that neither Ford nor other automakers can forever act like EVs are merely interchangeable with—but more ecofriendly than—their ICE counterparts. As EVs proliferate at scale, they operate in a vastly different technological, political, and social ecosystem than ICE vehicles. The core technologies and requisite expertise, supply-chain dependencies, and political alliances are different. The expectations of and about EV owners, and their agreement to change their lifestyles, also differ significantly.

Indeed, the challenges posed by the transition from ICE vehicles to EVs at scale are significantly larger in scope and more complex than the policymakers setting the regulatory timeline appreciate. The systems-engineering task alone is enormous, with countless interdependencies that are outside policymakers' control, and resting on optimistic assumptions about promising technologies and wished-for changes in human behavior. The risk of getting it wrong, and the negative environmental and economic consequences that would follow, are high. In this series, we will break down the myriad infrastructure, policy, and social challenges involved, drawing on discussions with numerous industry insiders and industry watchers. Let's take a look at some of the elemental challenges blocking the road ahead for EVs.

The soft car

For Ford and the other automakers that have shaped the ICE vehicle ecosystem for more than a century, ultimate success is beyond the reach of the traditional political, financial, and technological levers they once controlled. Renault chief executive Luca de Meo, for example, is quoted in the Financial Times as saying that automakers must recognize that “the game has changed,” and they will “have to play by new rules” dictated by the likes of mining and energy companies.

One reason for the new rules, observes professor Deepak Divan, the director of the Center for Distributed Energy at Georgia Tech, is that the EV transition is “a subset of the energy transition” away from fossil fuels. On the other hand, futurist Peter Schwartz contends that the entire electric system is part of the EV supply chain. These alternative framings highlight the strong codependencies involved. Consequently, automakers will be competing against not only other EV manufacturers but also numerous players involved in the energy transition aiming to grab the same scarce resources and talent.

“Ford is walking a tightrope, trying at the same time to convince everyone that EVs are the same as ICE vehicles yet different.” —John Leslie King

EVs represent a new class of cyberphysical systems that unify the physical with information technology, allowing them to sense, process, act, and communicate in real time within a large transportation ecosystem, as I have noted in detail elsewhere. While computing in ICE vehicles typically optimizes a car’s performance at the time of sale, EV-based cyberphysical systems are designed to evolve as they are updated and upgraded, postponing their obsolescence.

“As an automotive company, we’ve been trained to put vehicles out when they’re perfect,” Ford’s Farley told the New York Times. “But with software, you can change it with over-the-air updates.” This allows new features to be introduced in existing models instead of waiting for next year’s model to appear. Farley sees Ford spending much less effort on changing vehicles’ physical properties and devoting more to upgrading their software capabilities in the future.

Systems engineering for holistic solutions

EV success at scale depends as much, if not more, on political decisions as on technical ones. Government decision-makers in the United States at both the state and federal level, for instance, have created EV market incentives and set increasingly aggressive dates to sunset ICE vehicle sales, regardless of whether the technological infrastructure needed to support EVs at scale actually exists. While passing public policy can set a direction, it does not guarantee that engineering results will be available when needed.

“A systems-engineering approach towards managing the varied and often conflicting interests of the many stakeholders involved will be necessary to find a workable solution.” —Chris Paredis

Having committed $1.2 trillion through 2030 so far toward decarbonizing the planet, automakers are understandably wary not only of the fast reconfiguration of the auto industry but of the concurrent changes required in the energy, telecom, mining, recycling, and transportation industries that must succeed for their investments to pay off.

The EV transition is part of an unprecedented, planetary-wide, cyberphysical systems-engineering project with massive potential benefits as well as costs. Considering the sheer magnitude, interconnectedness, and uncertainties presented by the concurrent technological, political, and social changes necessary, the EV transition will undoubtedly be messy.

This chart from the Global EV Outlook 2021, IEA, Paris shows 2020 EV sales in the first column; in the second column, projected sales under current climate-mitigation policies; in the third column, projected sales under accelerated climate-mitigation policies.

“There is a lot that has to go right. And it won’t all go right,” observes Kristin Dziczek, former vice president of research at the Center for Automotive Research and now a policy analyst with the Federal Reserve Bank of Chicago. “We will likely stumble forward in some fashion,” but, she stresses, “it’s not a reason not to move forward.”

How many stumbles and how long the transition will take depend on whether the multitude of challenges involved are fully recognized and realistically addressed.

“Everyone needs to stop thinking in silos. It is the adjacency interactions that are going to kill you.” —Deepak Divan

“A systems-engineering approach towards managing the varied and often conflicting interests of the many stakeholders involved will be necessary to find a workable solution,” says Chris Paredis, the BMW Endowed Chair in Automotive Systems Integration at Clemson University. The range of engineering-infrastructure improvements needed to support EVs, for instance, “will need to be coordinated at a national/international level beyond what can be achieved by individual companies,” he states.

If the nitty gritty but hard-to-solve issues are glossed over or ignored, or if EV expectations are hyped beyond the market’s capability to deliver, no one should be surprised by a backlash against EVs, making the transition more difficult.

Until Tesla proved otherwise, EVs—especially battery EVs (BEVs)—were not believed by legacy automakers to be a viable, scalable approach to transport decarbonization even a decade ago. Tesla’s success at producing more than 3 million vehicles to date has shown that EVs are both technologically and economically feasible, at least for the luxury EV niche.

What has not yet been proven, but is widely assumed, is that BEVs can rapidly replace the majority of the current 1.3 billion-plus light-duty ICE vehicles. The interrelated challenges involving EV engineering infrastructure, policy, and societal acceptance, however, will test how well this assumption holds true.

Therefore, the successful transition to EVs at scale demands a “holistic approach,” emphasizes Georgia Tech’s Deepak Divan. “Everyone needs to stop thinking in silos. It is the adjacency interactions that are going to kill you.”

These adjacency issues involve numerous social-infrastructure obstacles that need to be addressed comprehensively along with the engineering issues, including the interactions and contradictions among them. These issues include the value and impacts of government EV incentives, the EV transition impacts on employment, and the public’s willingness to change its lifestyle behavior when it realizes converting to EVs will not be enough to reach future decarbonization goals.

“We cannot foresee all the details needed to make the EV transition successful,” John Leslie King says. “While there’s a reason to believe we will get there, there’s less reason to believe we know the way. It is going to be hard.”

In the next article in the series, we will look at the complexities introduced by trading our dependence on oil for our dependence on batteries.


Match ID: 15 Score: 5.00 source: spectrum.ieee.org age: 14 days
qualifiers: 5.00 travel(|ing)

Collective Mental Time Travel Can Influence the Future
Wed, 09 Nov 2022 13:00:00 +0000
The way people imagine the past and future of society can sway attitudes and behaviors. How might this be wielded for good?
Match ID: 16 Score: 5.00 source: www.wired.com age: 18 days
qualifiers: 5.00 travel(|ing)

Robotic Falcon Keeps Birds Away From Airports
Sun, 06 Nov 2022 14:00:00 +0000


Collisions with birds are a serious problem for commercial aircraft, costing the industry billions of dollars and killing thousands of animals every year. New research shows that a robotic imitation of a peregrine falcon could be an effective way to keep them out of flight paths.

Worldwide, so-called birdstrikes are estimated to cost the civil aviation industry almost US $1.4 billion annually. Nearby habitats are often deliberately made unattractive to birds, but airports also rely on a variety of deterrents designed to scare them away, such as loud pyrotechnics or speakers that play distress calls from common species.

However, the effectiveness of these approaches tends to decrease over time, as the birds get desensitized by repeated exposure, says Charlotte Hemelrijk, a professor on the faculty of science and engineering at the University of Groningen, in the Netherlands. Live hawks or blinding lasers are also sometimes used to disperse flocks, she says, but this is controversial as it can harm the animals, and keeping and training falcons is not cheap.

“The birds don’t distinguish [RobotFalcon] from a real falcon, it seems.”
—Charlotte Hemelrijk, University of Groningen

In an effort to find a more practical and lasting solution, Hemelrijk and colleagues designed a robotic peregrine falcon that can be used to chase flocks away from airports. The device is the same size and shape as a real hawk, and its fiberglass and carbon-fiber body has been painted to mimic the markings of its real-life counterpart.

Rather than flapping like a bird, the RobotFalcon relies on two small battery-powered propellers on its wings, which allows it to travel at around 30 miles per hour for up to 15 minutes at a time. A human operator controls the machine remotely from a hawk’s-eye perspective via a camera perched above the robot’s head.

To see how effective the RobotFalcon was at scaring away birds, the researchers tested it against a conventional quadcopter drone over three months of field testing, near the Dutch city of Workum. They also compared their results to 15 years of data collected by the Royal Netherlands Air Force that assessed the effectiveness of conventional deterrence methods such as pyrotechnics and distress calls.

Flock-herding Falcon Drone Patrols Airport Flight Paths youtu.be

In a paper published in the Journal of the Royal Society Interface, the team showed that the RobotFalcon cleared fields of birds faster and more effectively than the drone. It also kept birds away from fields longer than distress calls, the most effective of the conventional approaches.

There was no evidence of birds getting habituated to the RobotFalcon over three months of testing, says Hemelrijk, and the researchers also found that the birds exhibited behavior patterns associated with escaping from predators much more frequently with the robot than with the drone. “The way of reacting to the RobotFalcon is very similar to the real falcon,” says Hemelrijk. “The birds don’t distinguish it from a real falcon, it seems.”

Other attempts to use hawk-imitating robots to disperse birds have had less promising results, though. Morgan Drabik-Hamshare, a research wildlife biologist at the U.S. Department of Agriculture, and her colleagues published a paper in Scientific Reports last year that described how they pitted a robotic peregrine falcon with flapping wings against a quadcopter and a fixed-wing remote-controlled aircraft.

They found the robotic falcon was the least effective of the three at scaring away turkey vultures, with the quadcopter scaring the most birds off and the remote-controlled plane eliciting the quickest response. “Despite the predator silhouette, the vultures did not perceive the predator UAS [unmanned aircraft system] as a threat,” Drabik-Hamshare wrote in an email.

Zihao Wang, an associate lecturer at the University of Sydney, in Australia, who develops UAS for bird deterrence, says the RobotFalcon does seem to be effective at dispersing flocks. But he points out that its wingspan is nearly twice the diagonal length of the quadcopter it was compared with, which means it creates a much larger silhouette when viewed from the birds’ perspective. This means the birds could be reacting more to its size than its shape, and he would like to see the RobotFalcon compared with a similar size drone in the future.

The unique design also means the robot requires an experienced and specially trained operator, Wang adds, which could make it difficult to roll out widely. A potential solution could be to make the system autonomous, he says, but it’s unclear how easy this would be.

Hemelrijk says automating the RobotFalcon is probably not feasible, both due to strict regulations around the use of autonomous drones near airports as well as the sheer technical complexity. Their current operator is a falconer with significant experience in how hawks target their prey, she says, and creating an autonomous system that could recognize and target bird flocks in a similar way would be highly challenging.

But while the need for skilled operators is a limitation, Hemelrijk points out that most airports already have full-time staff dedicated to bird deterrence, who could be trained. And given the apparent lack of habituation and the ability to chase birds in a specific direction—so that they head away from runways—she thinks the robotic falcon could be a useful addition to their arsenal.


Match ID: 17 Score: 5.00 source: spectrum.ieee.org age: 21 days
qualifiers: 5.00 travel(|ing)

Business on the move: the new breed of company EV
Fri, 21 Oct 2022 12:48:03 GMT

From virtual showrooms to cutting-edge tech, the all-electric CUPRA Born is showing what the next generation of business travel looks like

Looking at a new company car online and checking one out in a showroom have, up until now, been two very separate experiences – neither of which are ideal. Sitting at home in front of your computer screen will allow you to spec a vehicle. You might be able to give it a 360-degree spin if the manufacturer’s website features all the bells and whistles, but you won’t really get much of a feel for your potential new car; and you’ll have to go digging through the rest of the website to find answers to any specific questions you may have. Visiting a showroom, on the other hand, will get you up close and personal to the vehicle, but you have to physically get to the dealership in the first place.

In a best-of-both worlds approach, CUPRA is combining the website and showroom experiences into one single process. In the market for a new company car, for example the Born all-electric vehicle? Then visit the new CUPRA Virtual Showroom and you’ll be able to get a live tour of the car online – through your computer or phone – with a product expert showing you around the vehicle’s exterior and interior, taking you through its numerous features and answering all the questions you can think of. No waiting around, no wasted time: click the link, set up an appointment and a CUPRA agent will send you a message, connect you to an audio and video session, and you’re ready to go.

You can direct the agent through the car as you wish, and sessions can be as brief or as detailed as you need, lasting from just a few minutes to an hour. It’s totally up to you. And the experience itself is impressive. Being able to guide the agent around the car, essentially via a video call, allows you to see what you want to see of the vehicle in clear, close-up detail, as well as witnessing the interior tech being put to use in real time. In the modern hybrid working landscape, where Zoom calls are now the norm, the CUPRA Virtual Showroom has successfully plugged itself into the zeitgeist.

“It’s pretty innovative,” says Martin Gray, CUPRA’s UK contract hire and leasing manager. “We’ve had great reactions from customers so far. It really works for the Born, as the car is so different from others in its class. Because of the way it looks, and because of its technology and the way the dashboard is set up, people really want to get a good look at it. And in a climate where supply of actual physical vehicles has become a real issue, this gives more people the opportunity to see the Born up close and personal.”

Continue reading...
Match ID: 18 Score: 5.00 source: www.theguardian.com age: 37 days
qualifiers: 5.00 travel(|ing)

Solar-to-Jet-Fuel System Readies for Takeoff
Wed, 03 Aug 2022 17:00:00 +0000


As climate change edges from crisis to emergency, the aviation sector looks set to miss its 2050 goal of net-zero emissions. In the five years preceding the pandemic, the top four U.S. airlines—American, Delta, Southwest, and United—saw a 15 percent increase in the use of jet fuel. Despite continual improvements in engine efficiencies, that number is projected to keep rising.

A glimmer of hope, however, comes from solar fuels. For the first time, scientists and engineers at the Swiss Federal Institute of Technology (ETH) in Zurich have reported a successful demonstration of an integrated fuel-production plant for solar kerosene. Using concentrated solar energy, they were able to produce kerosene from water vapor and carbon dioxide directly from air. Fuel thus produced is a drop-in alternative to fossil-derived fuels and can be used with existing storage and distribution infrastructures, and engines.

Fuels derived from synthesis gas (or syngas)—an intermediate product that is a specific mixture of carbon monoxide and hydrogen—are a known alternative to conventional, fossil-derived fuels. Liquid fuel is produced from syngas by Fischer-Tropsch (FT) synthesis, in which chemical reactions convert the carbon monoxide and hydrogen into hydrocarbons. The team of researchers at ETH found that a solar-driven thermochemical method to split water and carbon dioxide using a metal oxide redox cycle can produce renewable syngas. They demonstrated the process in a rooftop solar refinery at the ETH Machine Laboratory in 2019.

Close-up of a spongy looking material Reticulated porous structure made of ceria used in the solar reactor to thermochemically split CO2 and H2O and produce syngas, a specific mixture of H2 and CO.ETH Zurich

The current pilot-scale solar tower plant was set up at the IMDEA Energy Institute in Spain. It scales up the solar reactor of the 2019 experiment by a factor of 10, says Aldo Steinfeld, an engineering professor at ETH who led the study. The fuel plant brings together three subsystems—the solar tower concentrating facility, solar reactor, and gas-to-liquid unit.

First, a heliostat field made of mirrors that rotate to follow the sun concentrates solar irradiation into a reactor mounted on top of the tower. The reactor is a cavity receiver lined with reticulated porous ceramic structures made of ceria (or cerium(IV) oxide). Within the reactor, the concentrated sunlight creates a high-temperature environment of about 1,500 °C which is hot enough to split captured carbon dioxide and water from the atmosphere to produce syngas. Finally, the syngas is processed to kerosene in the gas-to-liquid unit. A centralized control room operates the whole system.
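
In simplified form, the two-step ceria redox cycle driving the reactor is commonly written as follows (a standard textbook description of such cycles, not equations quoted from the paper; δ denotes the small oxygen deficit of the reduced ceria):

\[
\mathrm{CeO_2} \;\xrightarrow{\ \text{solar heat, }\sim 1500\,^\circ\mathrm{C}\ }\; \mathrm{CeO_{2-\delta}} + \tfrac{\delta}{2}\,\mathrm{O_2}
\]
\[
\mathrm{CeO_{2-\delta}} + \delta\,\mathrm{H_2O} \longrightarrow \mathrm{CeO_2} + \delta\,\mathrm{H_2}
\qquad
\mathrm{CeO_{2-\delta}} + \delta\,\mathrm{CO_2} \longrightarrow \mathrm{CeO_2} + \delta\,\mathrm{CO}
\]

The oxygen-release step absorbs the concentrated solar heat; the re-oxidation steps run at lower temperature and yield the hydrogen and carbon monoxide that make up the syngas.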

Fuel produced using this method closes the fuel carbon cycle as it only produces as much carbon dioxide as has gone into its manufacture. “The present pilot fuel plant is still a demonstration facility for research purposes,” says Steinfeld, “but it is a fully integrated plant and uses a solar-tower configuration at a scale that is relevant for industrial implementation.”

“The solar reactor produced syngas with selectivity, purity, and quality suitable for FT synthesis,” the authors noted in their paper. They also reported good material stability for multiple consecutive cycles. They observed a value of 4.1 percent solar-to-syngas energy efficiency, which Steinfeld says is a record value for thermochemical fuel production, even though better efficiencies are required to make the technology economically competitive.

Schematic of the solar tower fuel plant.  A heliostat field concentrates solar radiation onto a solar reactor mounted on top of the solar tower. The solar reactor cosplits water and carbon dioxide and produces a mixture of molecular hydrogen and carbon monoxide, which in turn is processed to drop-in fuels such as kerosene.ETH Zurich

“The measured value of energy conversion efficiency was obtained without any implementation of heat recovery,” he says. The heat rejected during the redox cycle of the reactor accounted for more than 50 percent of the solar-energy input. “This fraction can be partially recovered via thermocline heat storage. Thermodynamic analyses indicate that sensible heat recovery could potentially boost the energy efficiency to values exceeding 20 percent.”
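
To put those efficiency numbers in perspective, here is a rough conversion into solar energy needed per liter of fuel (a sketch that treats the solar-to-syngas figure as if it carried straight through to finished kerosene, which slightly overstates the output; the 34 MJ/L heating value is an approximate figure for jet fuel):

# Illustrative arithmetic: "efficiency" here is simply fuel energy out divided by
# concentrated solar energy in, ignoring further losses in Fischer-Tropsch upgrading
# and plant auxiliaries.
kerosene_mj_per_liter = 34.0        # approximate heating value of jet fuel

for eta in (0.041, 0.20):           # measured today vs. the projected ceiling with heat recovery
    solar_mj_per_liter = kerosene_mj_per_liter / eta
    print(f"at {eta:.1%} efficiency: ~{solar_mj_per_liter:.0f} MJ of concentrated "
          f"solar energy per liter-equivalent of fuel")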

To do so, more work is needed to optimize the ceramic structures lining the reactor, something the ETH team is actively working on, by looking at 3D-printed structures for improved volumetric radiative absorption. “In addition, alternative material compositions, that is, perovskites or aluminates, may yield improved redox capacity, and consequently higher specific fuel output per mass of redox material,” Steinfeld adds.

The next challenge for the researchers, he says, is the scale-up of their technology for higher solar-radiative power inputs, possibly using an array of solar cavity-receiver modules on top of the solar tower.

To bring solar kerosene into the market, Steinfeld envisages a quota-based system. “Airlines and airports would be required to have a minimum share of sustainable aviation fuels in the total volume of jet fuel that they put in their aircraft,” he says. This is possible as solar kerosene can be mixed with fossil-based kerosene. This would start out small, as little as 1 or 2 percent, which would raise the total fuel costs at first, though minimally—adding “only a few euros to the cost of a typical flight,” as Steinfeld puts it.

Meanwhile, rising quotas would drive investment and bring costs down, until solar kerosene eventually displaces fossil-derived kerosene. “By the time solar jet fuel reaches 10 to 15 percent of the total jet-fuel volume, we ought to see the costs for solar kerosene nearing those of fossil-derived kerosene,” he adds.
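
As a rough illustration of the blending arithmetic, here is a minimal sketch; the per-litre prices are hypothetical placeholders, not figures from Steinfeld or the paper.

    def blended_price(solar_share, solar_price, fossil_price):
        """Per-litre price of a kerosene blend with a given solar-fuel share."""
        return solar_share * solar_price + (1 - solar_share) * fossil_price

    # Hypothetical per-litre prices: 0.80 EUR fossil kerosene, 4.00 EUR solar kerosene
    for share in (0.01, 0.02, 0.10):
        print(f"{share:.0%} solar share -> {blended_price(share, 4.0, 0.8):.2f} EUR/litre")

At a 1 or 2 percent share, the blend costs only a few cents per litre more than pure fossil kerosene, which is what keeps the extra cost of a typical flight down to a few euros.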

However, we may not have to wait too long for flights to operate solely on solar fuel. A commercial spin-off of Steinfeld’s laboratory, Synhelion, is working on commissioning the first industrial-scale solar fuel plant in 2023. The company has also collaborated with the airline SWISS to conduct a flight solely using its solar kerosene.


Match ID: 19 Score: 5.00 source: spectrum.ieee.org age: 116 days
qualifiers: 5.00 travel(|ing)

X-Rays Could Carry Quantum Signals Across the Stars
Mon, 18 Jul 2022 15:07:14 +0000


Quantum signals may possess a number of advantages over regular forms of communication, leading scientists to wonder if humanity was not alone in discovering such benefits. Now a new study suggests that, for hypothetical extraterrestrial civilizations, quantum transmissions using X-rays may be possible across interstellar distances.

Quantum communication relies on a quantum phenomenon known as entanglement. Essentially, two or more particles such as photons that get “linked” via entanglement can, in theory, influence each other instantly no matter how far apart they are.

Entanglement is essential to quantum teleportation, in which data can essentially disappear in one place and reappear someplace else. Since this information does not travel across the intervening space, there is no chance the information will be lost.

To accomplish quantum teleportation, one would first entangle two photons. Then, one of the photons—the one to be teleported—is kept at one location while the other is beamed to whatever destination is desired.

Next, the quantum state of the photon at the origin (the set of properties that defines it) is analyzed, an act that also destroys that state. Entanglement then leaves the destination photon identical to its partner. For all intents and purposes, the photon at the origin point “teleported” to the destination point: no physical matter moved, but the two photons are physically indistinguishable.

And to be clear, quantum teleportation cannot send information faster than the speed of light, because the destination photon must still be transmitted via conventional means.
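
For readers who want to see the bookkeeping, below is a minimal NumPy simulation of the textbook teleportation protocol sketched above. It uses plain linear algebra rather than any particular quantum-computing library, and the state being teleported is an arbitrary illustrative choice.

    import numpy as np

    # Single-qubit basis states and gates
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    def kron(*ops):
        out = np.array([[1]], dtype=complex)
        for op in ops:
            out = np.kron(out, op)
        return out

    # Qubit 0: the (arbitrary) state to teleport
    psi = np.array([0.6, 0.8j], dtype=complex)

    # Qubits 1 and 2: a shared entangled Bell pair, (|00> + |11>)/sqrt(2)
    bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
    state = np.kron(psi, bell)                 # full 3-qubit state vector

    # Sender entangles the unknown state with her half of the pair:
    # CNOT (qubit 0 controls qubit 1), then a Hadamard on qubit 0
    P0, P1 = np.outer(ket0, ket0), np.outer(ket1, ket1)
    cnot01 = kron(P0, I, I) + kron(P1, X, I)
    state = kron(H, I, I) @ (cnot01 @ state)

    # Sender measures qubits 0 and 1, yielding two classical bits
    probs = np.abs(state) ** 2
    outcome = np.random.choice(8, p=probs / probs.sum())
    m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
    proj = kron(P1 if m0 else P0, P1 if m1 else P0, I)
    state = proj @ state
    state /= np.linalg.norm(state)

    # Receiver applies the corrections dictated by the two classical bits
    fixup = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
    state = kron(I, I, fixup) @ state

    # The receiver's qubit now matches the original, up to a global phase
    bob = state.reshape(2, 2, 2)[m0, m1, :]
    fidelity = abs(np.vdot(psi, bob)) ** 2
    print("measured bits:", m0, m1, "  fidelity:", round(fidelity, 6))

Run repeatedly, the measured bits come out at random, but the fidelity stays at 1: the receiver recovers the original state only after applying the corrections, and the two classical bits that dictate them must still travel by conventional means.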

One weakness of quantum communication is that entanglement is fragile. Still, researchers have successfully transmitted entangled photons that remained stable or “coherent” enough for quantum teleportation across distances as great as 1,400 kilometers.

Such findings led theoretical physicist Arjun Berera at the University of Edinburgh to wonder just how far quantum signals might stay coherent. First, he discovered quantum coherence might survive interstellar distances within our galaxy, and then he and his colleagues found quantum coherence might survive intergalactic distances.

“If photons in Earth’s atmosphere don’t decohere to 100 km, then in interstellar space, where the medium is much less dense than our atmosphere, photons won’t decohere up to even the size of the galaxy,” Berera says.

In the new study, the researchers investigated whether and how well quantum communication might survive interstellar distances. Quantum signals might face disruption from a number of factors, such as the gravitational pull of interstellar bodies, they note.

The scientists discovered the best quantum communication channels for interstellar messages are X-rays. Such frequencies are easier to focus and detect across interstellar distances. (NASA has tested deep-space X-ray communication with its XCOM experiment.) The researchers also found that the optical and microwave bands could enable communication across large distances as well, albeit less effectively than X-rays.

Although coherence might survive interstellar distances, Berera does note quantum signals might lose fidelity. “This means the quantum state is sustained, but it can have a phase shift, so although the quantum information is preserved in these states, it has been altered by the effect of gravity.” Therefore, it may “take some work at the receiving end to account for these phase shifts and be able to assess the information contained in the original state.”

Why might an interstellar civilization transmit quantum signals as opposed to regular ones? The researchers note that quantum communication may allow greater data compression and, in some cases, exponentially faster speeds than classical channels. Such a boost in efficiency might prove very useful for civilizations separated by interstellar distances.

“It could be that quantum communication is the main communication mode in an extraterrestrial's world, so they just apply what is at hand to send signals into the cosmos,” Berera says.

The scientists detailed their findings online 28 June in the journal Physical Review D.


Match ID: 20 Score: 5.00 source: spectrum.ieee.org age: 132 days
qualifiers: 5.00 travel(|ing)

The Webb Space Telescope’s Profound Data Challenges
Fri, 08 Jul 2022 18:03:45 +0000


For a deep dive into the engineering behind the James Webb Space Telescope, see our collection of posts here.

When the James Webb Space Telescope (JWST) reveals its first images on 12 July, they will be the product of carefully crafted mirrors and scientific instruments. But all of its data-collecting prowess would be moot without the spacecraft’s communications subsystem.

The Webb’s comms aren’t flashy. Rather, the data and communication systems are designed to be incredibly, unquestionably dependable and reliable. And while some aspects of them are relatively new—it’s the first mission to use Ka-band frequencies for such high data rates so far from Earth, for example—above all else, JWST’s comms provide the foundation upon which JWST’s scientific endeavors sit.


As previous articles in this series have noted, JWST is parked at Lagrange point L2. It’s a point of gravitational equilibrium located about 1.5 million kilometers beyond Earth, on the line running from the sun through the planet. It’s an ideal location for JWST to observe the universe without obstruction and with minimal orbital adjustments.

Being so far away from Earth, however, means that data has farther to travel to make it back in one piece. It also means the communications subsystem needs to be reliable, because the prospect of a repair mission being sent to address a problem is, for the near term at least, highly unlikely. Given the cost and time involved, says Michael Menzel, the mission systems engineer for JWST, “I would not encourage a rendezvous and servicing mission unless something went wildly wrong.”

According to Menzel, who has worked on JWST in some capacity for over 20 years, the plan has always been to use well-understood Ka-band frequencies for the bulky transmissions of scientific data. Specifically, JWST is transmitting data back to Earth on a 25.9-gigahertz channel at up to 28 megabits per second. The Ka-band is a portion of the broader K-band (another portion, the Ku-band, was also considered).

An illustration depicting different Lagrange points and where the Webb Telescope is. The Lagrange points are equilibrium locations where competing gravitational tugs on an object net out to zero. JWST is one of three craft currently occupying L2 (shown here at an exaggerated distance from Earth). IEEE Spectrum

Both the data-collection and transmission rates of JWST dwarf those of the older Hubble Space Telescope. Compared to Hubble, which is still active and generates 1 to 2 gigabytes of data daily, JWST can produce up to 57 GB each day (although that amount is dependent on what observations are scheduled).

Menzel says he first saw the frequency selection proposals for JWST around 2000, when he was working at Northrop Grumman. He became the mission systems engineer in 2004. “I knew where the risks were in this mission. And I wanted to make sure that we didn’t get any new risks,” he says.


Besides, Ka-band frequencies can transmit more data than X-band (7 to 11.2 GHz) or S-band (2 to 4 GHz), common choices for craft in deep space. A high data rate is a necessity for the scientific work JWST will be undertaking. In addition, according to Carl Hansen, a flight systems engineer at the Space Telescope Science Institute (the science operations center for JWST), a comparable X-band antenna would be so large that the spacecraft would have trouble remaining steady for imaging.

Although the 25.9-GHz Ka-band frequency is the telescope’s workhorse communication channel, it also employs two channels in the S-band. One is the 2.09-GHz uplink that ferries future transmission and scientific observation schedules to the telescope at 16 kilobits per second. The other is the 2.27-GHz, 40-kb/s downlink over which the telescope transmits engineering data—including its operational status, systems health, and other information concerning the telescope’s day-to-day activities.

Any scientific data the JWST collects during its lifetime will need to be stored on board, because the spacecraft doesn’t maintain round-the-clock contact with Earth. Data gathered from its scientific instruments, once collected, is stored within the spacecraft’s 68-GB solid-state drive (3 percent is reserved for engineering and telemetry data). Alex Hunter, also a flight systems engineer at the Space Telescope Science Institute, says that by the end of JWST’s 10-year mission life, they expect to be down to about 60 GB because of deep-space radiation and wear and tear.

The onboard storage is enough to collect data for about 24 hours before it runs out of room. Well before that becomes an issue, JWST will have scheduled opportunities to beam that invaluable data to Earth.
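
A quick back-of-the-envelope check of those figures, using the rates and capacities quoted in the article:

    DOWNLINK_BPS = 28e6        # Ka-band science downlink, bits per second
    DAILY_DATA_BYTES = 57e9    # up to 57 GB of science data per day
    SSD_BYTES = 68e9 * 0.97    # 68-GB drive, ~3 percent reserved for telemetry

    # Time to downlink a full day's worth of data over the Ka-band channel
    downlink_hours = DAILY_DATA_BYTES * 8 / DOWNLINK_BPS / 3600
    # Time until onboard storage fills at the maximum collection rate
    fill_hours = SSD_BYTES / DAILY_DATA_BYTES * 24

    print(f"~{downlink_hours:.1f} hours to downlink a day's data")  # about 4.5 hours
    print(f"~{fill_hours:.0f} hours until storage fills")           # on the order of a day

By this arithmetic, a few hours of Deep Space Network contact per day is enough to keep pace with the telescope’s maximum data-collection rate.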

JWST will stay connected via the Deep Space Network (DSN)—a resource it shares with the Parker Solar Probe, Transiting Exoplanet Survey Satellite, the Voyager probes, and the entire ensemble of Mars rovers and orbiters, to name just a few of the other heavyweights. The DSN consists of three antenna complexes: Canberra, Australia; Madrid, Spain; and Barstow, Calif. JWST needs to share finite antenna time with plenty of other deep-space missions, each with unique communications needs and schedules.


Sandy Kwan, a DSN systems engineer, says that contact windows with spacecraft are scheduled 12 to 20 weeks in advance. JWST had a greater number of scheduled contact windows during its commissioning phase, as instruments were brought on line, checked, and calibrated. Most of that process required real-time communication with Earth.

All of the communications channels use Reed-Solomon error correction, the same error-correction standard used in DVDs, Blu-ray discs, and QR codes. The lower-data-rate S-band channels use binary phase-shift keying (BPSK), which encodes data in shifts of the carrier wave’s phase. The Ka-band channel, however, uses quadrature phase-shift keying (QPSK), which can double a channel’s data rate at the cost of more complicated transmitters and receivers.
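
A minimal sketch of the difference between the two modulation schemes (the bit stream is an arbitrary example):

    import numpy as np

    bits = np.array([0, 1, 1, 0, 0, 0, 1, 1])

    # BPSK: one bit per symbol, mapped to two carrier phases (+1 / -1)
    bpsk_symbols = 1 - 2 * bits

    # QPSK: two bits per symbol, mapped to four phases on the complex plane
    pairs = bits.reshape(-1, 2)
    qpsk_symbols = ((1 - 2 * pairs[:, 0]) + 1j * (1 - 2 * pairs[:, 1])) / np.sqrt(2)

    # Same bit stream, half as many QPSK symbols: at a fixed symbol rate that
    # doubles the bit rate, at the cost of more complex transceivers.
    print(len(bpsk_symbols), "BPSK symbols vs", len(qpsk_symbols), "QPSK symbols")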

JWST’s communications with Earth incorporate an acknowledgement protocol—only after the JWST gets confirmation that a file has been successfully received will it go ahead and delete its copy of the data to clear up space.

The communications subsystem was assembled along with the rest of the spacecraft bus by Northrop Grumman, using off-the-shelf components sourced from multiple manufacturers.

JWST has had a long and often-delayed development, but its communications system has always been a bedrock for the rest of the project. Keeping at least one system dependable means it’s one less thing to worry about. Menzel can remember, for instance, ideas for laser-based optical systems that were invariably rejected. “I can count at least two times where I had been approached by people who wanted to experiment with optical communications,” says Menzel. “Each time they came to me, I sent them away with the old ‘Thank you, but I don’t need it. And I don’t want it.’”


Match ID: 21 Score: 5.00 source: spectrum.ieee.org age: 142 days
qualifiers: 5.00 travel(|ing)

Pentagon Aims to Demo a Nuclear Spacecraft Within 5 Years
Thu, 09 Jun 2022 16:44:41 +0000


In the latest push for nuclear power in space, the Pentagon’s Defense Innovation Unit (DIU) awarded a contract in May to Seattle-based Ultra Safe Nuclear to advance its nuclear power and propulsion concepts. The company is making a soccer ball–size radioisotope battery it calls EmberCore. The DIU’s goal is to launch the technology into space for demonstration in 2027.

Ultra Safe Nuclear’s system is intended to be lightweight, scalable, and usable as both a propulsion source and a power source. It will be specifically designed to give small-to-medium-size military spacecraft the ability to maneuver nimbly in the space between Earth orbit and the moon. The DIU effort is part of the U.S. military’s recently announced plans to develop a surveillance network in cislunar space.

Besides speedy space maneuvers, the DIU wants to power sensors and communication systems without having to worry about solar panels pointing in the right direction or batteries having enough charge to work at night, says Adam Schilffarth, director of strategy at Ultra Safe Nuclear. “Right now, if you are trying to take radar imagery in Ukraine through cloudy skies,” he says, “current platforms can only take a very short image because they draw so much power.”

Radioisotope power sources are well suited for small, uncrewed spacecraft, adds Christopher Morrison, who is leading EmberCore’s development. Such sources rely on the energy released by the radioactive decay of an isotope, as opposed to nuclear fission, which splits atomic nuclei in a controlled chain reaction to release energy. Heat produced by radioactive decay is converted into electricity using thermoelectric devices.

Radioisotopes have provided heat and electricity for spacecraft since 1961. The Curiosity and Perseverance rovers on Mars, and deep-space missions including Cassini, New Horizons, and Voyager all use radioisotope batteries that rely on the decay of plutonium-238, which is nonfissile—unlike plutonium-239, which is used in weapons and power reactors.

For EmberCore, Ultra Safe Nuclear has instead turned to medical isotopes such as cobalt-60 that are easier and cheaper to produce. The materials start out inert, and have to be charged with neutrons to become radioactive. The company encapsulates the material in a proprietary ceramic for safety.

Cobalt-60 has a half-life of five years (compared to plutonium-238’s 90 years), which is enough for the cislunar missions that the DOD and NASA are looking at, Morrison says. He says that EmberCore should be able to provide 10 times as much power as a plutonium-238 system, providing over 1 million kilowatt-hours of energy using just a few pounds of fuel. “This is a technology that is in many ways commercially viable and potentially more scalable than plutonium-238,” he says.
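
Here is a short sketch of what those half-lives mean for power output over a mission. The 100-watt starting point is a hypothetical figure chosen for illustration, not an EmberCore specification; the half-lives are the ones quoted in the article.

    def power_remaining(p0_watts, half_life_years, t_years):
        """Thermal power of a decaying radioisotope source after t years."""
        return p0_watts * 0.5 ** (t_years / half_life_years)

    # Hypothetical 100-W sources, evaluated at launch, 5 years, and 10 years
    for name, t_half in (("cobalt-60", 5.0), ("plutonium-238", 90.0)):
        powers = [round(power_remaining(100.0, t_half, t), 1) for t in (0, 5, 10)]
        print(f"{name}: {powers} W")

Cobalt-60 fades to half power in five years, ample for the cislunar missions discussed here but far shorter-lived than plutonium-238, which barely budges over a decade.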

One downside of the medical isotopes is that they can produce high-energy X-rays in addition to heat. So Ultra Safe Nuclear wraps the fuel with a radiation-absorbing metal shield. But in the future, the EmberCore system could be designed for scientists to use the X-rays for experiments. “They buy this heater and get an X-ray source for free,” says Schilffarth. “We’ve talked with scientists who right now have to haul pieces of lunar or Martian regolith up to their sensor because the X-ray source is so weak. Now we’re talking about a spotlight that could shine down to do science from a distance.”

Ultra Safe Nuclear’s contract is one of two awarded by the DIU—which aims to speed up the deployment of commercial technology through military use—to develop nuclear power and propulsion for spacecraft. The other contract was awarded to Avalanche Energy, which is making a lunchbox-size fusion device it calls an Orbitron. The device will use electrostatic fields to trap high-speed ions in slowly changing orbits around a negatively charged cathode. Collisions between the ions can result in fusion reactions that produce energetic particles.

Both companies will use nuclear energy to power high-efficiency electric propulsion systems. Electric propulsion technologies such as ion thrusters, which use electromagnetic fields to accelerate ions and generate thrust, are more efficient than chemical rockets, which burn fuel. Solar panels typically power the ion thrusters that satellites use today to change their position and orientation. Schilffarth says that the higher power from EmberCore should enable a velocity change of 10 kilometers per second in orbit, more than today’s electric propulsion systems can deliver.
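
That 10-kilometer-per-second figure is plausible for electric propulsion, as a rough pass through the ideal rocket equation shows. The specific impulse and masses below are hypothetical illustrations, not Ultra Safe Nuclear’s numbers.

    import math

    def delta_v(isp_seconds, m_initial_kg, m_final_kg):
        """Ideal (Tsiolkovsky) rocket equation, in meters per second."""
        g0 = 9.81  # standard gravity, m/s^2
        return isp_seconds * g0 * math.log(m_initial_kg / m_final_kg)

    # Hypothetical spacecraft: 500 kg at start, ion thruster with Isp ~2,500 s,
    # about 170 kg of propellant expended over the mission.
    print(f"{delta_v(2500, 500, 330) / 1000:.1f} km/s")  # roughly 10 km/s

Higher available power shortens the time needed to deliver that velocity change, which is the advantage Schilffarth points to.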

Ultra Safe Nuclear is also one of three companies developing nuclear fission thermal propulsion systems for NASA and the Department of Energy. Meanwhile, the Defense Advanced Research Projects Agency (DARPA) is seeking companies to develop a fission-based nuclear thermal rocket engine, with demonstrations expected in 2026.

This article appears in the August 2022 print issue as “Spacecraft to Run on Radioactive Decay.”


Match ID: 22 Score: 5.00 source: spectrum.ieee.org age: 171 days
qualifiers: 5.00 travel(|ing)

Filter efficiency 97.013 (23 matches/770 results)

ABOUT THE PROJECT

RSS Rabbit links users to publicly available RSS entries.
Vet every link before clicking! The creators accept no responsibility for the contents of these entries.

Relevant

Fresh

Convenient

Agile

CONTACT

We're not prepared to take user feedback yet. Check back soon!

rssRabbit quadric