RSS Rabbit
News that matters, fast.
Good luck, have news.
Happy scrolling!




Date/Time of Last Update: Mon Nov 28 06:00:32 2022 UTC




********** UNIVERSITY **********



How the Graphical User Interface Was Invented
Sun, 20 Nov 2022 20:00:00 +0000


Mice, windows, icons, and menus: these are the ingredients of computer interfaces designed to be easy to grasp, simplicity itself to use, and straightforward to describe. The mouse is a pointer. Windows divide up the screen. Icons symbolize application programs and data. Menus list choices of action.

But the development of today’s graphical user interface was anything but simple. It took some 30 years of effort by engineers and computer scientists in universities, government laboratories, and corporate research groups, piggybacking on each other’s work, trying new ideas, repeating each other’s mistakes.


This article was first published as “Of Mice and menus: designing the user-friendly interface.” It appeared in the September 1989 issue of IEEE Spectrum. A PDF version is available on IEEE Xplore. The photographs and diagrams appeared in the original print version.


Throughout the 1970s and early 1980s, many of the early concepts for windows, menus, icons, and mice were arduously researched at Xerox Corp.’s Palo Alto Research Center (PARC), Palo Alto, Calif. In 1973, PARC developed the prototype Alto, the first of two computers that would prove seminal in this area. More than 1200 Altos were built and tested. From the Alto’s concepts, starting in 1975, Xerox’s System Development Department then developed the Star and introduced it in 1981—the first such user-friendly machine sold to the public.

In 1984, the low-cost Macintosh from Apple Computer Inc., Cupertino, Calif., brought the friendly interface to thousands of personal computer users. During the next five years, the price of RAM chips fell enough to accommodate the huge memory demands of bit-mapped graphics, and the Mac was followed by dozens of similar interfaces for PCs and workstations of all kinds. By now, application programmers are becoming familiar with the idea of manipulating graphic objects.

The Mac’s success during the 1980s spurred Apple Computer to pursue legal action over ownership of many features of the graphical user interface. Suits now being litigated could assign those innovations not to the designers and their companies, but to those who first filed for legal protection on them.

The GUI started with Sketchpad


The grandfather of the graphical user interface was Sketchpad [see photograph]. Massachusetts Institute of Technology student Ivan E. Sutherland built it in 1962 as a Ph.D. thesis at MIT’s Lincoln Laboratory in Lexington, Mass. Sketchpad users could not only draw points, line segments, and circular arcs on a cathode ray tube (CRT) with a light pen—they could also assign constraints to, and relationships among, whatever they drew.

Arcs could have a specified diameter, lines could be horizontal or vertical, and figures could be built up from combinations of elements and shapes. Figures could be moved, copied, shrunk, expanded, and rotated, with their constraints (shown as onscreen icons) dynamically preserved. At a time when a CRT monitor was a novelty in itself, the idea that users could interactively create objects by drawing on a computer was revolutionary.


[Photo: A man sitting in front of a round cathode-ray display with a white square and triangle on a black background.]

Moreover, to zoom in on objects, Sutherland wrote the first window-drawing program, which required him to come up with the first clipping algorithm. Clipping is a software routine that calculates which part of a graphic object is to be displayed and displays only that part on the screen. The program must calculate where a line is to be drawn, compare that position to the coordinates of the window in use, and prevent the display of any line segment whose coordinates fall outside the window.
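
The flavor of that test can be shown in a few lines of C. This is not Sutherland’s code, just a minimal sketch of the outcode scheme later popularized as Cohen-Sutherland clipping; the window bounds are arbitrary constants chosen for the example.

    #include <stdio.h>

    /* Minimal line-rejection sketch (not Sutherland's original routine).
       Each endpoint gets a 4-bit code saying which sides of the window
       it lies beyond; shared bits mean the segment is entirely outside. */
    enum { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };
    static const double XMIN = 0, XMAX = 100, YMIN = 0, YMAX = 100;

    static int outcode(double x, double y)
    {
        int code = 0;
        if (x < XMIN) code |= LEFT;   else if (x > XMAX) code |= RIGHT;
        if (y < YMIN) code |= BOTTOM; else if (y > YMAX) code |= TOP;
        return code;
    }

    int main(void)
    {
        double x0 = -20, y0 = 50, x1 = -5, y1 = 80;   /* sample segment */
        int c0 = outcode(x0, y0), c1 = outcode(x1, y1);

        if (c0 & c1)
            printf("rejected: entirely outside the window\n");
        else if (!(c0 | c1))
            printf("accepted: entirely inside the window\n");
        else
            printf("straddles an edge: subdivide and clip at the boundary\n");
        return 0;
    }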

Though films of Sketchpad in operation were widely shown in the computer research community, Sutherland says today that there was little immediate fallout from the project. Running on MIT’s TX-2 mainframe, it demanded too much computing power to be practical for individual use. Many other engineers, however, see Sketchpad’s design and algorithms as a primary influence on an entire generation of research into user interfaces.

The origin of the computer mouse


The light pens used to select areas of the screen by interactive computer systems of the 1950s and 1960s—including Sketchpad—had drawbacks. To do the pointing, the user’s arm had to be lifted up from the table, and after a while that got tiring. Picking up the pen required fumbling around on the table or, if it had a holder, taking the time after making a selection to put it back.

Sensing an object with a light pen was straightforward: the computer displayed spots of light on the screen and interrogated the pen as to whether it sensed a spot, so the program always knew just what was being displayed. Locating the position of the pen on the screen required more sophisticated techniques—like displaying a cross pattern of nine points on the screen, then moving the cross until it centered on the light pen.
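
That tracking loop is easy to sketch in C. The routine below only illustrates the recentering idea, not any particular machine’s code: display_spot() and pen_saw_spot() are invented stand-ins for the display and pen-interrogation hardware.

    /* Hypothetical light-pen tracking: paint a cross of spots around the
       last known pen position, ask the pen which ones it saw, and move
       the cross to the average of those positions. */
    extern void display_spot(int x, int y);   /* invented hardware call */
    extern int pen_saw_spot(void);            /* invented hardware call */

    void track_pen(int *px, int *py)
    {
        static const int dx[] = { 0, -1, 0, 1, 0, -2, 0, 2, 0 };
        static const int dy[] = { 0, 0, -1, 0, 1, 0, -2, 0, 2 };
        int i, n = 0, sx = 0, sy = 0;

        for (i = 0; i < 9; i++) {             /* nine points of the cross */
            int x = *px + dx[i], y = *py + dy[i];
            display_spot(x, y);
            if (pen_saw_spot()) { sx += x; sy += y; n++; }
        }
        if (n) {                              /* recenter on what was seen */
            *px = sx / n;
            *py = sy / n;
        }
    }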

In 1964, Douglas Engelbart, a research project leader at SRI International in Menlo Park, Calif., tested all the commercially available pointing devices, from the still-popular light pen to a joystick and a Graphicon (a curve-tracing device that used a pen mounted on the arm of a potentiometer). But he felt the selection failed to cover the full spectrum of possible pointing devices, and that he should somehow fill in the blanks.

Then he remembered a 1940s college class he had taken that covered the use of a planimeter to calculate area. (A planimeter has two arms, with a wheel on each. The wheels can roll only along their axes; when one of them rolls, the other must slide.)

If a potentiometer were attached to each wheel to monitor its rotation, he thought, a planimeter could be used as a pointing device. Engelbart explained his roughly sketched idea to engineer William English, who with the help of the SRI machine shop built what they quickly dubbed “the mouse.”



This first mouse was big because it used single-turn potentiometers: one rotation of the wheels had to be scaled to move a cursor from one side of the screen to the other. But it was simple to interface with the computer: the processor just read frequent samples of the potentiometer positioning signals through analog-to-digital converters.
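
In modern terms, that polling scheme might look like the C sketch below. The 10-bit converter range and the screen size are illustrative assumptions, and read_adc() is an invented stand-in for the converter hardware.

    /* One full turn of each potentiometer is scaled to span the screen. */
    extern int read_adc(int channel);    /* invented: returns 0..1023 */

    #define SCREEN_W 1024                /* illustrative screen size */
    #define SCREEN_H 800

    void sample_mouse(int *cursor_x, int *cursor_y)
    {
        int x_raw = read_adc(0);         /* X-axis potentiometer */
        int y_raw = read_adc(1);         /* Y-axis potentiometer */
        *cursor_x = x_raw * SCREEN_W / 1024;
        *cursor_y = y_raw * SCREEN_H / 1024;
    }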

The cursor moved by the mouse was easy to locate, since readings from the potentiometers directly determined its position on the screen, unlike with the light pen. But programmers for later windowing systems found that the software needed to determine which object the mouse had selected was more complex than that for the light pen: they had to compare the mouse’s position with those of all the objects displayed onscreen.
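
That extra work amounts to a search of the display list. In the C sketch below, the rectangle-based list is an assumption made for illustration; the essential point is the front-to-back scan, which a light pen’s self-identifying spots never needed.

    /* Hit-testing: find the topmost object containing the cursor. */
    struct object { int x, y, w, h; };   /* illustrative bounding boxes */

    int pick_object(const struct object *objs, int n, int mx, int my)
    {
        int i;
        for (i = n - 1; i >= 0; i--)     /* last drawn is on top */
            if (mx >= objs[i].x && mx < objs[i].x + objs[i].w &&
                my >= objs[i].y && my < objs[i].y + objs[i].h)
                return i;                /* object under the cursor */
        return -1;                       /* nothing was hit */
    }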

The computer mouse gets redesigned—and redesigned again

Engelbart’s group at SRI ran controlled experiments with mice and other pointing devices, and the mouse won hands down. People adapted to it quickly, it was easy to grab, and it stayed where they put it. Still, Engelbart wanted to tinker with it. After experimenting, his group had concluded that the proper ratio of cursor movement to mouse movement was about 2:1, but he wanted to try varying that ratio—decreasing it at slow speeds and raising it at fast speeds—to improve user control of fine movements and speed up larger movements. Some modern mouse-control software incorporates this idea, including that of the Macintosh.
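
The variable ratio amounts to a gain function applied to each motion sample. In the C sketch below, the 2:1 base ratio is the one SRI measured; the speed threshold and the faster gain are invented numbers for illustration.

    /* Speed-dependent cursor gain ("pointer acceleration"). */
    int scale_motion(int delta)          /* mouse counts this sample */
    {
        int speed = delta < 0 ? -delta : delta;
        if (speed <= 4)
            return delta * 2;            /* slow: the measured 2:1 ratio */
        return delta * 4;                /* fast: invented higher gain */
    }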

The mouse, still experimental at this stage, did not change until 1971. Several members of Engelbart’s group had moved to the newly established PARC, where many other researchers had seen the SRI mouse and the test report. They decided there was no need to repeat the tests; any experimental systems they designed would use mice.

Said English, “This was my second chance to build a mouse; it was obvious that it should be a lot smaller, and that it should be digital.” Chuck Thacker, then a member of the research staff, advised PARC to hire inventor Jack Hawley to build it.

Hawley decided the mouse should use shaft encoders, which measure position by a series of pulses, instead of potentiometers (both were covered in Engelbart’s 1970 patent), to eliminate the expensive analog-to-digital converters. The basic principle, of one wheel rolling while the other slid, was licensed from SRI.
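
Pulse-counting encoders are typically read in quadrature: two sensors sit a quarter cycle apart, and on each edge from one, the level of the other gives the direction of rotation. The C sketch below illustrates that general idea; read_phase_a() and read_phase_b() are invented stand-ins for the encoder hardware, and the article does not say exactly how Hawley’s circuit decoded its pulses.

    extern int read_phase_a(void);       /* invented hardware call */
    extern int read_phase_b(void);       /* invented hardware call */

    /* On each edge of phase A, phase B's level gives the direction. */
    int update_position(int position)
    {
        static int last_a = 0;
        int a = read_phase_a();
        if (a != last_a) {               /* a pulse edge arrived */
            if (a == read_phase_b())
                position--;              /* turning one way */
            else
                position++;              /* turning the other way */
            last_a = a;
        }
        return position;
    }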

The ball mouse was the “easiest patent I ever got. It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”
—Ron Rider

In 1972, the mouse changed again. Ron Rider, now vice president of systems architecture at PARC but then a new arrival, said he was using the wheel mouse while an engineer made excuses for its asymmetric operation (one wheel dragging while one turned). “I suggested that they turn a trackball upside down, make it small, and use it as a mouse instead,” Rider told IEEE Spectrum. This device came to be known as the ball mouse. “Easiest patent I ever got,” Rider said. “It took me five minutes to think of, half an hour to describe to the attorney, and I was done.”

Defining terms


Bit map

The pixel pattern that makes up the graphic display on a computer screen.

Clicking

The motion of pressing a mouse button to initiate an action by software; some actions require double-clicking.

Graphical user interface (GUI)

The combination of windowing displays, menus, icons, and a mouse that is increasingly used on personal computers and workstations.

Icon

An onscreen drawing that represents programs or data.

Menu

A list of command options currently available to the computer user; some stay onscreen, while pop-up or pull-down menus are requested by the user.

Mouse

A device whose motion across a desktop or other surface causes an on-screen cursor to move commensurately; today’s mice move on a ball and have one, two, or three buttons.

Raster display

A cathode ray tube on which images are displayed as patterns of dots, scanned onto the screen sequentially in a predetermined pattern of lines.

Vector display

A cathode ray tube whose gun scans lines, or vectors, onto the screen phosphor.

Window

An area of a computer display, usually one of several, in which a particular program is executing.


In the PARC ball mouse design, the weight of the mouse is transferred to the ball by a swivel device and to one or two casters at the end of the mouse farthest from the wire “tail.” A prototype was built by Xerox’s Electronics Division in El Segundo, Calif., then redesigned by Hawley. The rolling ball turned two perpendicular shafts, with a drum on the end of each that was coated with alternating stripes of conductive and nonconductive material. As the drum turned, the stripes transmitted electrical impulses through metal wipers.

When Apple Computer decided in 1979 to design a mouse for its Lisa computer, the design mutated yet again. Instead of a metal ball held against the substrate by a swivel, Apple used a rubber ball whose traction depended on the friction of the rubber and the weight of the ball itself. Simple pads on the bottom of the case carried the weight, and optical scanners detected the motion of the internal wheels. The device had loose tolerances and few moving parts, so that it cost perhaps a quarter as much to build as previous ball mice.

How the computer mouse gained and lost buttons

The first, wooden, SRI mouse had only one button, to test the concept. The plastic batch of SRI mice had three side-by-side buttons—all there was room for, Engelbart said. The first PARC mouse had a column of three buttons—again, because that best fit the mechanical design. Today, the Apple mouse has one button, while the rest have two or three. The issue is no longer one of physical room—a standard 6-by-10-cm mouse could now have dozens of buttons—but one of human factors, and the experts have strong opinions.

Said English, now director of internationalization at Sun Microsystems Inc., Mountain View, Calif.: “Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.” He sees two buttons as the minimum because two functions are basic to selecting an object: pointing to its start, then extending the motion to the end of the object.

William Verplank, a human factors specialist in the group that tested the graphical interface at Xerox from 1978 into the early 1980s, concurred. He told Spectrum that with three buttons, Alto users forgot which button did what. The group’s tests showed that one button was also confusing, because it required actions such as double-clicking to select and then open a file.

“We have agonizing videos of naive users struggling” with these problems, Verplank said. They concluded that for most users, two buttons (as used on the Star) are optimal, if a button means the same thing in every application. English experimented with one-button mice at PARC before concluding they were a bad idea.


“Two or three buttons, that’s the debate. Apple made a bad choice when they used only one.”
—William English


[Photo: A computer monitor with a chunky white keyboard sitting on a desk.]

But many interface designers dislike multiple buttons, saying that double-clicking a single button to select an item is easier than remembering which button points and which extends. Larry Tesler, formerly a computer scientist at PARC, brought the one-button mouse to Apple, where he is now vice president of advanced technology. The company’s rationale was that, to attract novices to its computers, one button was as simple as it could get.

More than two million one-button Apple mice are now in use. The Xerox and Microsoft two-button mice are less common than either Apple’s ubiquitous one-button model or the three-button mice found on technical workstations. Dozens of companies manufacture mice today; most are slightly smaller than a pack of cigarettes, with minor variations in shape.

How windows first came to the computer screen


In 1962, Sketchpad could split its screen horizontally into two independent sections. One section could, for example, give a close-up view of the object in the other section. Researchers call Sketchpad the first example of tiled windows, which are laid out side by side. They differ from overlapping windows, which can be stacked on top of each other, or overlaid, obscuring all or part of the lower layers.

Windows were an obvious means of adding functionality to a small screen. In 1969, Engelbart equipped NLS (as the On-Line System he invented at SRI during the 1960s was known, to distinguish it from the Off-Line System known as FLS) with windows. They split the screen into multiple parts horizontally or vertically, and introduced cross-window editing with a mouse.

By 1972, led by researcher Alan Kay, the Smalltalk programming language group at Xerox PARC had implemented their version of windows. They were working with far different technology from Sutherland or Engelbart: by deciding that their images had to be displayed as dots on the screen, they led a move from vector to raster displays, to make it simple to map the assigned memory location of each of those spots. This was the bit map invented at PARC, and made viable during the 1980s by continual performance improvements in processor logic and memory speed.

Experimenting with bit-map manipulation, Smalltalk researcher Dan Ingalls developed the bit-block transfer procedure, known as BitBlt. The BitBlt software enabled application programs to mix and manipulate rectangular arrays of pixel values in on-screen or off-screen memory, or between the two, combining the pixel values and storing the result in the appropriate bit-map location.
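
A minimal sketch of the operation in C: real BitBlt worked on packed one-bit-deep bit maps and offered many combination rules, so this byte-per-pixel version with four rules only illustrates the idea.

    /* Copy a w-by-h rectangle from src to dst, combining pixels by rule. */
    enum rule { COPY, AND, OR, XOR };

    void bitblt(unsigned char *dst, int dst_stride, int dx, int dy,
                const unsigned char *src, int src_stride, int sx, int sy,
                int w, int h, enum rule r)
    {
        int x, y;
        for (y = 0; y < h; y++)
            for (x = 0; x < w; x++) {
                unsigned char s = src[(sy + y) * src_stride + (sx + x)];
                unsigned char *d = &dst[(dy + y) * dst_stride + (dx + x)];
                switch (r) {
                case COPY: *d = s;  break;
                case AND:  *d &= s; break;
                case OR:   *d |= s; break;
                case XOR:  *d ^= s; break;
                }
            }
    }

One caveat the sketch ignores: when source and destination overlap, as when scrolling a window over itself, the copy order must be chosen so that pixels are read before they are overwritten.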

BitBlt made it much easier to write programs to scroll a window (move an image through it), resize (enlarge or contract) it, and drag windows (move them from one location to another on screen). It led Kay to create overlapping windows. They were soon implemented by the Smalltalk group, but made clipping harder.

Some researchers question whether overlapping windows offer more benefits than tiled on the grounds that screens with overlapping windows become so messy the user gets lost.

In a tiling system, explained researcher Peter Deutsch, who worked with the Smalltalk group, the clipping borders are simply horizontal or vertical lines from one screen border to another, and software just tracks the location of those lines. But overlapping windows may appear anywhere on the screen, randomly obscuring bits and pieces of other windows, so that quite irregular regions must be clipped. Thus application software must constantly track which portions of their windows remain visible.
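
The contrast is easy to see in code. In the C sketch below, ownership of a pixel is settled by a front-to-back search of an illustrative window list; a tiled system never needs this per-pixel question, because each window’s visible region is a single rectangle bounded by those horizontal and vertical lines.

    /* Which window owns a pixel? wins[0] is the frontmost window. */
    struct window { int x, y, w, h; };

    int owner(const struct window *wins, int n, int px, int py)
    {
        int i;
        for (i = 0; i < n; i++)
            if (px >= wins[i].x && px < wins[i].x + wins[i].w &&
                py >= wins[i].y && py < wins[i].y + wins[i].h)
                return i;                /* first hit is on top */
        return -1;                       /* desktop background */
    }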

Some researchers still question whether overlapping windows offer more benefits than tiled, at least above a certain screen size, on the grounds that screens with overlapping windows become so messy the user gets lost. Others argue that overlapping windows more closely match users’ work patterns, since no one arranges the papers on their physical desktop in neat horizontal and vertical rows. Among software engineers, however, overlapping windows seem to have won for the user interface world.

So has the cut-and-paste editing model that Larry Tesler developed, first for the Gypsy text editor he wrote at PARC and later for Apple. Charles Irby—who worked on Xerox’s windows and is now vice president of development at Metaphor Computer Systems Inc., Mountain View, Calif.—noted, however, that cut-and-paste worked better for pure text-editing than for moving graphic objects from one application to another.

The origin of the computer menu bar


Menus—functions continuously listed onscreen that could be called into action with key combinations—were commonly used in defense computing by the 1960s. But it was only with the advent of BitBlt and windows that menus could be made to appear as needed and to disappear after use. Combined with a pointing device to indicate a user’s selection, they are now an integral part of the user-friendly interface: users no longer need to refer to manuals or memorize available options.

Instead, the choices can be called up at a moment’s notice whenever needed. And menu design has evolved. Some new systems use nested hierarchies of menus; others offer different menu versions—one with the most commonly used commands for novices, another with all available commands for the experienced user.

Among the first to test menus on demand was PARC researcher William Newman, in a program called Markup. Hard on his heels, the Smalltalk group built in pop-up menus that appeared on screen at the cursor site when the user pressed one of the mouse buttons.

Implementation was on the whole straightforward, recalled Deutsch. The one exception was determining whether the menu or the application should keep track of the information temporarily obscured by the menu. In the Smalltalk 76 version, the popup menu saved and restored the screen bits it overwrote. But in today’s multitasking systems, that would not work, because an application may change those bits without the menu’s knowledge. Such systems add another layer to the operating system: a display manager that tracks what is written where.
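
The Smalltalk-76 trick is easy to sketch in C, assuming an illustrative byte-per-pixel screen layout:

    #include <string.h>

    #define SCREEN_W 640                 /* illustrative screen width */

    /* Before drawing a menu, copy the bits it will cover to a scratch
       buffer; when the menu is dismissed, copy them back. */
    void save_under(unsigned char *screen, unsigned char *saved,
                    int mx, int my, int mw, int mh)
    {
        int y;
        for (y = 0; y < mh; y++)
            memcpy(saved + y * mw, screen + (my + y) * SCREEN_W + mx, mw);
    }

    void restore_under(unsigned char *screen, const unsigned char *saved,
                       int mx, int my, int mw, int mh)
    {
        int y;
        for (y = 0; y < mh; y++)
            memcpy(screen + (my + y) * SCREEN_W + mx, saved + y * mw, mw);
    }

The multitasking failure described above is exactly that restore_under() would write back stale bits if an application repainted the covered area while the menu was up.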

The production Xerox Star, in 1981, featured a further advance: a menu bar, essentially a row of words indicating available menus that could be popped up for each window. Human factors engineer Verplank recalled that the bar was at first located at the bottom of its window. But the Star team found users were more likely to associate a bar with the window below it, so it was moved to the top of its window.

Apple simplified things in its Lisa and Macintosh with a single bar placed at the top of the screen. This menu bar relates only to the window in use: the menus could be “pulled down” from the bar, to appear below it. Designer William D. Atkinson received a patent (assigned to Apple Computer) in August 1984 for this innovation.

One new addition that most user interface pioneers consider an advantage is the tear-off menu, which the user can move to a convenient spot on the screen and “pin” there, always visible for ready access.

Many windowing interfaces now offer command-key or keyboard alternatives for many commands as well. This return to the earliest of user interfaces—key combinations—neatly supplements menus, providing ease of use for novices and the less experienced, and speed for those who can type faster than they can point to a menu and click on a selection.

How the computer “icon” got its name


Sketchpad had on-screen graphic objects that represented constraints (for example, a rule that lines be the same length), and the Flex machine built in 1967 at the University of Utah by students Alan Kay and Ed Cheadle had squares that represented programs and data (like today’s computer “folders”). Early work on icons was also done by Bell Northern Research, Ottawa, Canada, stemming from efforts to replace the recently legislated bilingual signs with graphic symbols.

But the concept of the computer “icon” was not formalized until 1975. David Canfield Smith, a computer science graduate student at Stanford University in California, began work on his Ph.D. thesis in 1973. His advisor was PARC’s Kay, who suggested that he look at using the graphics power of the experimental Alto not just to display text, but rather to help people program.

David Canfield Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents.

Smith took the term icon from the Russian Orthodox church, where an icon is more than an image, because it embodies properties of what it represents: a Russian icon of a saint is holy and is to be venerated. Smith’s computer icons contained all the properties of the programs and data represented, and therefore could be linked or acted on as if they were the real thing.

After receiving his Ph.D. in 1975, Smith joined Xerox in 1976 to work on Star development. The first thing he did, he said, was to recast his concept of icons in office terms. “I looked around my office and saw papers, folders, file cabinets, a telephone, and bookshelves, and it was an easy translation to icons,” he said.

Xerox researchers developed, tested, and revised icons for the Star interface for three years before the first version was complete. At first they attempted to make the icons look like a detailed photographic rendering of the object, recalled Irby, who worked on testing and refining the Xerox windows. Trading off label space, legibility, and the number of icons that fit on the screen, they decided to constrain icons to a 1-inch (2.5-centimeter) square of 64 by 64 pixels, or 512 eight-bit bytes.

Then, Verplank recalls, they discovered that because of a background pattern based on two-pixel dots, the right-hand side of the icons appeared jagged. So they increased the width of the icons to 65 pixels, despite an outcry from programmers who liked the neat 16-bit breakdown. But the increase stuck, Verplank said, because they had already decided to store 72 bits per side to allow for white space around each icon.
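
The arithmetic behind those figures: 64 × 64 pixels is 4,096 bits, or 512 eight-bit bytes. Taking “72 bits per side” as a 72-by-72-bit cell, the stored icon grows to 5,184 bits, or 648 bytes, which leaves room for the 65-pixel-wide image plus the white space around it.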

After settling on a size for the icons, the Star developers tested four sets developed by two graphic designers and two software engineers. They discovered that, for example, resizing could cause problems. They shrank the icon for a person—a head and shoulders—in order to use several of them to represent a group, only to hear one test subject say the screen resolution made the reduced icon look like a cross above a tombstone. Computer graphics artist Norm Cox, now of Cox & Hall, Dallas, Texas, was finally hired to redesign the icons.

Icon designers today still wrestle with the need to make icons adaptable to the many different system configurations offered by computer makers. Artist Karen Elliott, who has designed icons for Microsoft, Apple, Hewlett-Packard Co., and others, noted that on different systems an icon may be displayed in different colors, several resolutions, and a variety of gray shades, and it may also be inverted (light and dark areas reversed).

In the past few years, another concern has been added to icon designers’ tasks: internationalization. Icons designed in the United States often lack space for translations into languages other than English. Elliott therefore tries to leave space for both the longer words and the vertical orientation of some languages.


[Photo: A square white Macintosh computer with a white keyboard; in a separate image below, computer icons and the text “address book,” “address,” “addresses.”]

The main rule is to make icons simple, clean, and easily recognizable. Discarded objects are placed in a trash can on the Macintosh. On the NeXT Computer System, from NeXT Inc., Palo Alto, Calif.—the company formed by Apple cofounder Steven Jobs after he left Apple—they are dumped into a Black Hole. Elliott sees NeXT’s black hole as one of the best icons ever designed: “It is distinct; its roundness stands out from the other, square icons, and this is important on a crowded display. It fits my image of information being sucked away, and it makes it clear that dumping something is serious.”

English disagrees vehemently. The black hole “is fundamentally wrong,” he said. “You can dig paper out of a wastebasket, but you can’t dig it out of a black hole.” Another critic called the black hole familiar only to “computer nerds who read mostly science fiction and comics,” not to general users.

With the introduction of the Xerox Star in June 1981, the graphical user interface, as it is known today, arrived on the market. Though not a commercial triumph, the Star generated great interest among computer users, as the Alto before it had within the universe of computer designers.

Even before the Star was introduced, Jobs, then still at Apple, had visited Xerox PARC in November 1979 and asked the Smalltalk researchers dozens of questions about the Alto’s internal design. He later recruited Larry Tesler from Xerox to design the user interface of the Apple Lisa.

With the Lisa and then the Macintosh, introduced in January 1983 and January 1984 respectively, the graphical user interface reached the low-cost, high-volume computer market.

At almost $10,000, the Lisa was deemed by buyers too expensive for the office market. But aided by prizewinning advertising and its lower price, the Macintosh took the world by storm. Early Macs had only 128K bytes of RAM, too little memory for heavy graphic manipulation, which made them slow to respond. Also, the time needed for programmers to learn its Toolbox of graphics routines delayed application packages until well into 1985. But the Mac’s ease of use was indisputable, and it generated interest that spilled over into the MS-DOS world of IBM PCs and clones, as well as Unix-based workstations.

Who owns the graphical user interface?


The widespread acceptance of such interfaces, however, has led to bitter lawsuits to establish exactly who owns what. So far, none of several litigious companies has definitively established that it owns the software that implements windows, icons, or early versions of menus. But the suits continue.

Virtually all the companies that make and sell either wheel or ball mice paid license fees to SRI or to Xerox for their patents. Engelbart recalled that SRI patent attorneys inspected all the early work on the interface, but understood only hardware. After looking at developments like the implementation of windows, they told him that none of it was patentable.

At Xerox, the Star development team proposed 12 patents having to do with the user interface. The company’s patent committee rejected all but two on hardware—one on BitBlt, the other on the Star architecture. At the time, Charles Irby said, it was a good decision. Patenting required full disclosure, and no precedents then existed for winning software patent suits.


[Photo: A computer screen in blue and white with multiple open windows.]

[Photo: Three computer windows with greyscale images on a dark grey background.]

[Photo: Computer windows tinted blue on a black background, partially obscuring a planet and starfield.]


The most recent and most publicized suit was filed in March 1988, by Apple, against both Microsoft and Hewlett-Packard Co., Palo Alto, Calif. Apple alleges that HP’s New Wave interface, requiring version 2.03 of Microsoft’s Windows program, embodies the copyrighted “audio visual computer display” of the Macintosh without permission; that the displays of Windows 2.03 are illegal copies of the Mac’s audiovisual works; and that Windows 2.03 also exceeds the rights granted in a November 1985 agreement in which Microsoft acknowledged that the displays in Windows 1.0 were derivatives of those in Apple’s Lisa and Mac.

In March 1989, U.S. District Judge William W. Schwarzer ruled Microsoft had exceeded the bounds of its license in creating Windows 2.03. Then in July 1989 Schwarzer ruled that all but 11 of the 260 items that Apple cited in its suit were, in fact, acceptable under the 1985 agreement. The larger issue—whether Apple’s copyrights are valid, and whether Microsoft and HP infringed on them—will not now be examined until 1990.

Among those 11 are overlapping windows and movable icons. According to Pamela Samuelson, a noted software intellectual property expert and visiting professor at Emory University Law School, Atlanta, Ga., many experts would regard both as functional features of an interface that cannot be copyrighted, rather than “expressions” of an idea protectable by copyright.

But lawyers for Apple—and for other companies that have filed lawsuits to protect the “look and feel” of their screen displays—maintain that if such protection is not granted, companies will lose the economic incentive to market technological innovations. How is Apple to protect its investment in developing the Lisa and Macintosh, they argue, if it cannot license its innovations to companies that want to take advantage of them?

If the Apple-Microsoft case does go to trial on the copyright issues, Samuelson said, the court may have to consider whether Apple can assert copyright protection for overlapping windows—an interface feature on which patents have also been granted. In April 1989, for example, Quarterdeck Office Systems Inc., Santa Monica, Calif., received a patent for a multiple windowing system in its Desq system software, introduced in 1984.

Adding fuel to the legal fire, Xerox said in May 1989 it would ask for license fees from companies that use the graphical user interface. But it is unclear whether Xerox has an adequate claim to either copyright or patent protection for the early graphical interface work done at PARC. Xerox did obtain design patents on later icons, noted human factors engineer Verplank. Meanwhile, both Metaphor and Sun Microsystems have negotiated licenses with Xerox for their own interfaces.

To Probe Further

The September 1989 IEEE Computer contains an article, “The Xerox ‘Star’: A Retrospective,” by Jeff Johnson et al., covering development of the Star. “Designing the Star User Interface” [PDF], by David C. Smith et al., appeared in the April 1982 issue of Byte.

The Sept. 12, 1989, PC Magazine contains six articles on graphical user interfaces for personal computers and workstations. The July 1989 Byte includes “A Guide to Graphical User Interfaces,” by Frank Hayes and Nick Baran, which describes 12 current interfaces for workstations and personal computers. “The Interface of Tomorrow, Today,” by Howard Rheingold, in the July 10, 1989, InfoWorld does the same. “The interface that launched a thousand imitations,” by Richard Rawles, in the March 21, 1989, MacWeek covers the Macintosh interface.

The human factors of user interface design are discussed in The Psychology of Everyday Things, by Donald A. Norman (Basic Books Inc., New York, 1988). The January 1989 IEEE Software contains several articles on methods, techniques, and tools for designing and implementing graphical interfaces. The Way Things Work, by David Macaulay (Houghton Mifflin Co., Boston, 1988), contains a detailed drawing of a ball mouse.

The October 1985 IEEE Spectrum covered Xerox PARC’s history in “Research at Xerox PARC: a founder’s assessment,” by George Pake (pp. 54-61) and “Inside the PARC: the ‘information architects,’” by Tekla Perry and Paul Wallich (pp. 62-75).

William Atkinson received patent no. 4,464,652 for the pulldown menu system on Aug. 8, 1984, and assigned it to Apple. Gary Pope received patent no. 4,823,108, for an improved system for displaying images in “windows” on a computer screen, on April 18, 1989, and assigned it to Quarterdeck Office Systems.

The wheel mouse patent, no. 3,541,541, “X-Y position indicator for a display system,” was issued to Douglas Engelbart on Nov. 17, 1970, and assigned to SRI International. The ball mouse patent, no. 3,835,464, was issued to Ronald Rider on Sept. 10, 1974, and assigned to Xerox.

The first selection device tests to include a mouse are covered in “Display-Selection Techniques for Text Manipulation,” by William English, Douglas Engelbart, and Melvyn Berman, in IEEE Transactions on Human Factors in Electronics, March 1967.

Sketchpad: A Man-Machine Graphical Communication System, by Ivan E. Sutherland (Garland Publishing Inc., New York City and London, 1980), reprints his 1963 Ph.D. thesis.










Match ID: 0 Score: 2.86 source: spectrum.ieee.org age: 7 days
qualifiers: 2.86 school

Video Friday: Little Robot, Big Stairs
Fri, 18 Nov 2022 16:43:36 +0000


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

Researchers at Carnegie Mellon University’s School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its height; traverse rocky, slippery, uneven, steep and varied terrain; walk across gaps; scale rocks and curbs; and even operate in the dark.

[ CMU ]

This robot is designed as a preliminary platform for humanoid robot research. The platform will be further extended with soles as well as upper limbs. In this video, the current lower limb version of the platform shows its capability in traversing uneven terrains without an active or passive ankle joint. The underactuation nature of the robot system has been well addressed with our locomotion-control framework, which also provides a new perspective on the leg design of bipedal robots.

[ CLEAR Lab ]

Thanks, Zejun!

Inbiodroid is a startup “dedicated to the development of fully immersive telepresence technologies that create a deeper connection between people and their environment.” Hot off the ANA Avatar XPrize competition, they’re doing a Kickstarter to fund the next generation of telepresence robots.

[ Kickstarter ] via [ Inbiodroid ]

Thanks, Alejandro!

A robot that can feel what a therapist feels when treating a patient, that can adjust the intensity of rehabilitation exercises at any time according to the patient's abilities and needs, and that can thus go on for hours without getting tired: It seems like fiction, and yet researchers from the Vrije Universiteit Brussel and Imec have now finished a prototype that unites all these skills in one robot.

[ VUB ]

Thanks, Bram!

Self-driving bikes present some special challenges, as this excellent video graphically demonstrates.

[ Paper ]

Pickle robots unload trucks. This is a short overview of the Pickle Robot Unload System in action at the end of October 2022—autonomously picking floor-loaded freight to unload a trailer. As a robotic system built on AI and advanced sensors, the system gets better and faster all the time.

[ Pickle ]

Learning agile skills can be challenging with reward shaping. Imitation learning provides an alternative solution by assuming access to decent expert references. However, such experts are not always available. We propose Wasserstein Adversarial Skill Imitation (WASABI), which acquires agile behaviors from partial and potentially physically incompatible demonstrations. In our work, Solo, a quadruped robot, learns highly dynamic skills (for example, backflips) from only handheld human demonstrations.

WASABI!

[ WASABI ]

NASA and the European Space Agency are developing plans for one of the most ambitious campaigns ever attempted in space: bringing the first samples of Mars material safely back to Earth for detailed study. The diverse set of scientifically curated samples now being collected by NASA’s Mars Perseverance rover could help scientists answer the question of whether ancient life ever arose on the Red Planet.

I thought I was promised some helicopters?

[ NASA ]

A Sanctuary general-purpose robot picks up and sorts medicine pills.

Remotely controlled, if that wasn’t clear.

[ Sanctuary ]

I don’t know what’s going on here, but it scares me.

[ KIMLAB ]

The Canadian Space Agency plans to send a rover to the moon as early as 2026 to explore a polar region. The mission will demonstrate key technologies and accomplish meaningful science. Its objectives are to gather imagery, measurements, and data on the surface of the moon, as well as to have the rover survive an entire night on the moon. Lunar nights, which last about 14 Earth days, are extremely cold and dark, posing a significant technological challenge.

[ CSA ]

Covariant Robotic Induction automates previously manual induction processes. This video shows the Covariant Robotic Induction solution picking a wide range of item types from totes, scanning bar codes, and inducting items onto a unit sorter. Note the robot’s ability to effectively handle items that are traditionally difficult to pick, such as transparent polybagged apparel and small, oddly shaped health and beauty items, and place them precisely onto individual trays.

[ Covariant ]

The solution will integrate Boston Dynamics’ Spot robot; the ExynPak, powered by ExynAI; and the Trimble X7 total station. It will enable fully autonomous missions inside complex and dynamic construction environments, which can result in consistent and precise reality capture for production and quality-control workflows.

[ Exyn ]

Our most advanced programmable robot yet is back and better than ever. Sphero RVR+ includes an advanced gearbox to improve torque and payload capacity; enhanced sensors, including an improved color sensor; and an improved rechargeable and swappable battery.

$279.

[ Sphero ]

I’m glad Starship is taking this seriously, although it’s hard to know from this video how well the robots behave when conditions are less favorable.

[ Starship ]

Complexity, cost, and power requirements for the actuation of individual robots can play a large factor in limiting the size of robotic swarms. Here we present PCBot, a minimalist robot that can precisely move on an orbital shake table using a bi-stable solenoid actuator built directly into its PCB. This allows the actuator to be built as part of the automated PCB manufacturing process, greatly reducing the impact it has on manual assembly.

[ Paper ]

Drone-racing world champion Thomas Bitmatta designed an indoor drone-racing track for ETH Zurich’s autonomous high-speed racing drones, and in something like half an hour, the autonomous drones were able to master the track at superhuman speeds (with the aid of a motion-capture system).

[ ETH RSL ] via [ BMS Racing ]

Thanks, Paul!

Moravec’s paradox is the observation that many things that are difficult for robots to do come easily to humans, and vice versa. Stanford University professor Chelsea Finn has been tasked to explain this concept to 5 different people: a child, a teen, a college student, a grad student, and an expert.

[ Wired ]

Roberto Calandra from Meta AI gives a talk about “Perceiving, Understanding, and Interacting Through Touch.”

[ UPenn ]

AI advancements have been motivated and inspired by human intelligence for decades. How can we use AI to expand our knowledge and understanding of the world and ourselves? How can we leverage AI to enrich our lives? In his Tanner Lecture, Eric Horvitz, chief science officer at Microsoft, will explore these questions and more, tracing the arc of intelligence from its origins and evolution in humans to its manifestations and prospects in the tools we create and use.

[ UMich ]


Match ID: 1 Score: 1.43 source: spectrum.ieee.org age: 9 days
qualifiers: 1.43 school

Filter efficiency 99.740 (2 matches/770 results)


********** ENTERTAINMENT **********



The Wonderful Wizard of Oz review – a joyful bad-taste romp
Sun, 27 Nov 2022 19:00:03 GMT

Tron theatre, Glasgow
Johnny McKnight’s irreverent and very funny panto is full of local colour and packed with put-downs – though never at the expense of the original film

Hollywood repeats itself, first as musical, second as panto. If you have never imagined Dorothy in drag, well, you have never reckoned on Johnny McKnight, the spangly gold standard of pantomime dames, swapping denim for gingham and following the yellow brick road out of Tronsis. It stretches only as far as the West End of Glasgow but packs in a dust-storm of diversions en route.

Somehow the writer, director and star takes one of cinema’s most sensitive tales and makes it raucous, irreverent and very, very funny. And he does so without mocking the original. In fact the movie is about the only thing that doesn’t get mocked in a show packed with put-downs of everything from rival Christmas shows to Alan Cumming in Burn, not forgetting the dazzling cast themselves.

Continue reading...
Match ID: 0 Score: 20.00 source: www.theguardian.com age: 0 days
qualifiers: 20.00 movie

Irene Cara, Oscar-winning singer of hits including ‘Fame,’ dies at 63
Sat, 26 Nov 2022 11:45:55 EST
The Oscar- and Grammy-winning singer and actress was best known for the title tracks of the movies "Fame" and "Flashdance."
Match ID: 1 Score: 20.00 source: www.washingtonpost.com age: 1 day
qualifiers: 20.00 movie

A Criminal Ratted Out His Friend to the FBI. Now He's Trying to Make Amends.
Sat, 26 Nov 2022 12:00:23 +0000

The FBI paid a convicted sex offender $90,000 to set up his friend and his friend’s mentally ill buddy in a terrorism sting.

The post A Criminal Ratted Out His Friend to the FBI. Now He’s Trying to Make Amends. appeared first on The Intercept.


Match ID: 2 Score: 20.00 source: theintercept.com age: 1 day
qualifiers: 20.00 movie

The 15 Best Holiday Movies to Stream This Season: Netflix, Hulu, Amazon, Disney+
Fri, 25 Nov 2022 14:00:00 +0000
Need help navigating all the services to find something good? Here’s what to watch to pass the time—and get into the spirit.
Match ID: 3 Score: 20.00 source: www.wired.com age: 2 days
qualifiers: 20.00 movie

The Revelatory Portraits of Paolo Di Paolo
Fri, 25 Nov 2022 11:00:00 +0000
The new documentary “The Treasure of His Youth,” directed by Bruce Weber, features interviews with the Italian photographer, whose subjects include Anna Magnani.
Match ID: 4 Score: 20.00 source: www.newyorker.com age: 2 days
qualifiers: 20.00 movie

Life in the Movies Versus Real Life
Fri, 25 Nov 2022 11:00:00 +0000
In the movies, Prince Charming will sweep you off your feet. In real life, podcasts have made it so that every man is unbearable.
Match ID: 5 Score: 20.00 source: www.newyorker.com age: 2 days
qualifiers: 20.00 movie

"Tantura" Exposes the Lie at the Heart of Israel's Founding Myth
Fri, 25 Nov 2022 11:00:32 +0000

A new documentary challenges Israel’s narrative about 1948 and the forced displacement of Palestinians.

The post “Tantura” Exposes the Lie at the Heart of Israel’s Founding Myth appeared first on The Intercept.


Match ID: 6 Score: 20.00 source: theintercept.com age: 2 days
qualifiers: 20.00 movie

Thanksgiving Movies Leave a Lot to Be Desired. America Needs a New One—Let's Make It 'Coco'
Thu, 24 Nov 2022 14:00:00 +0000
It's the best all-ages movie in years—and it's ready to stream. Watch it this week with your own cartoonish family.
Match ID: 7 Score: 17.14 source: www.wired.com age: 3 days
qualifiers: 17.14 movie

What “Tár” Knows About the Artist as Abuser
Thu, 24 Nov 2022 11:00:00 +0000
Todd Field’s film about the downfall of a world-famous conductor shows the toll that untouchability takes even on the person it supposedly benefits.
Match ID: 8 Score: 17.14 source: www.newyorker.com age: 3 days
qualifiers: 17.14 movie

Amazon plans to pour $1 billion annually on theater movies
Wed, 23 Nov 2022 19:07:50 GMT

Amazon.com Inc. will spend more than $1 billion annually on 12 to 15 movies that initially play in theaters, according to a Bloomberg report Wednesday. The ambitious outlay, believed to be the most by an internet company on theater-first movies, comes as Amazon dukes it out with video-streaming rivals Walt Disney Co. , Apple Inc. , and Netflix Inc. who are plowing billions of dollars a year to create movie and episodic content. Amazon had no immediate comment.

Market Pulse Stories are Rapid-fire, short news bursts on stocks and markets as they move. Visit MarketWatch.com for more information on this news.


Match ID: 9 Score: 14.29 source: www.marketwatch.com age: 4 days
qualifiers: 14.29 movie

The Stubborn, Enduring Vision of Jean-Marie Straub
Wed, 23 Nov 2022 11:00:00 +0000
He was one of the least known of great filmmakers, and among the most original of his time.
Match ID: 10 Score: 14.29 source: www.newyorker.com age: 4 days
qualifiers: 14.29 movie

10 Things for Americans to Be Grateful for at Thanksgiving 2022
Wed, 23 Nov 2022 11:00:34 +0000

Elon Musk, functioning elections, “Andor,” and death — Americans can genuinely give thanks for all these things and more this holiday.

The post 10 Things for Americans to Be Grateful for at Thanksgiving 2022 appeared first on The Intercept.


Match ID: 11 Score: 14.29 source: theintercept.com age: 4 days
qualifiers: 14.29 movie

How the First Transistor Worked
Sun, 20 Nov 2022 16:00:00 +0000


The vacuum-tube triode wasn’t quite 20 years old when physicists began trying to create its successor, and the stakes were huge. Not only had the triode made long-distance telephony and movie sound possible, it was driving the entire enterprise of commercial radio, an industry worth more than a billion dollars in 1929. But vacuum tubes were power-hungry and fragile. If a more rugged, reliable, and efficient alternative to the triode could be found, the rewards would be immense.

The goal was a three-terminal device made out of semiconductors that would accept a low-current signal into an input terminal and use it to control the flow of a larger current flowing between two other terminals, thereby amplifying the original signal. The underlying principle of such a device would be something called the field effect—the ability of electric fields to modulate the electrical conductivity of semiconductor materials. The field effect was already well known in those days, thanks to diodes and related research on semiconductors.


[Photo: In this cutaway of a point-contact transistor, two thin conductors are visible; these connect to the points that make contact with a tiny slab of germanium. One of these points is the emitter and the other is the collector. A third contact, the base, is attached to the reverse side of the germanium. AT&T Archives and History Center]

But building such a device had proved an insurmountable challenge to some of the world’s top physicists for more than two decades. Patents for transistor-like devices had been filed starting in 1925, but the first recorded instance of a working transistor was the legendary point-contact device built at AT&T Bell Telephone Laboratories in the fall of 1947.

Though the point-contact transistor was the most important invention of the 20th century, there exists, surprisingly, no clear, complete, and authoritative account of how the thing actually worked. Modern, more robust junction and planar transistors rely on the physics in the bulk of a semiconductor, rather than the surface effects exploited in the first transistor. And relatively little attention has been paid to this gap in scholarship.

It was an ungainly looking assemblage of germanium, plastic, and gold foil, all topped by a squiggly spring. Its inventors were a soft-spoken Midwestern theoretician, John Bardeen, and a voluble and “somewhat volatile” experimentalist, Walter Brattain. Both were working under William Shockley, a relationship that would later prove contentious. In November 1947, Bardeen and Brattain were stymied by a simple problem. In the germanium semiconductor they were using, a surface layer of electrons seemed to be blocking an applied electric field, preventing it from penetrating the semiconductor and modulating the flow of current. No modulation, no signal amplification.


Sometime late in 1947 they hit on a solution. It featured two pieces of barely separated gold foil gently pushed by that squiggly spring into the surface of a small slab of germanium.

Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate. Indeed, the current edition of that bible of undergraduate EEs, The Art of Electronics by Horowitz and Hill, makes no mention of the point-contact transistor at all, glossing over its existence by erroneously stating that the junction transistor was a “Nobel Prize-winning invention in 1947.” But the transistor that was invented in 1947 was the point-contact; the junction transistor was invented by Shockley in 1948.

So it seems appropriate somehow that the most comprehensive explanation of the point-contact transistor is contained within John Bardeen’s lecture for that Nobel Prize, in 1956. Even so, reading it gives you the sense that a few fine details probably eluded even the inventors themselves. “A lot of people were confused by the point-contact transistor,” says Thomas Misa, former director of the Charles Babbage Institute for the History of Science and Technology, at the University of Minnesota.

Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate.

A year after Bardeen’s lecture, R. D. Middlebrook, a professor of electrical engineering at Caltech who would go on to do pioneering work in power electronics, wrote: “Because of the three-dimensional nature of the device, theoretical analysis is difficult and the internal operation is, in fact, not yet completely understood.”

Nevertheless, and with the benefit of 75 years of semiconductor theory, here we go. The point-contact transistor was built around a thumb-size slab of n-type germanium, which has an excess of negatively charged electrons. This slab was treated to produce a very thin surface layer that was p-type, meaning it had an excess of positive charges. These positive charges are known as holes. They are actually localized deficiencies of electrons that move among the atoms of the semiconductor very much as a real particle would. An electrically grounded electrode was attached to the bottom of this slab, creating the base of the transistor. The two strips of gold foil touching the surface formed two more electrodes, known as the emitter and the collector.

That’s the setup. In operation, a small positive voltage—just a fraction of a volt—is applied to the emitter, while a much larger negative voltage—4 to 40 volts—is applied to the collector, all with reference to the grounded base. The interface between the p-type layer and the n-type slab created a junction just like the one found in a diode: Essentially, the junction is a barrier that allows current to flow easily in only one direction, toward lower voltage. So current could flow from the positive emitter across the barrier, while no current could flow across that barrier into the collector.

[Photo: Rows of workers seated at microscopes beside stacks of transistors. The Western Electric Type-2 point-contact transistor was the first transistor to be manufactured in large quantities, in 1951, at Western Electric’s plant in Allentown, Pa. By 1960, when this photo was taken, the plant had switched to producing junction transistors. AT&T Archives and History Center]

Now, let’s look at what happens down among the atoms. First, we’ll disconnect the collector and see what happens around the emitter without it. The emitter injects positive charges—holes—into the p-type layer, and they begin moving toward the base. But they don’t make a beeline toward it. The thin layer forces them to spread out laterally for some distance before passing through the barrier into the n-type slab. Think about slowly pouring a small amount of fine powder onto the surface of water. The powder eventually sinks, but first it spreads out in a rough circle.

Now we connect the collector. Even though it can’t draw current by itself through the barrier of the p-n junction, its large negative voltage and pointed shape do result in a concentrated electric field that penetrates the germanium. Because the collector is so close to the emitter, and is also negatively charged, it begins sucking up many of the holes that are spreading out from the emitter. This charge flow results in a concentration of holes near the p-n barrier underneath the collector. This concentration effectively lowers the “height” of the barrier that would otherwise prevent current from flowing between the collector and the base. With the barrier lowered, current starts flowing from the base into the collector—much more current than what the emitter is putting into the transistor.

The amount of current depends on the height of the barrier. Small decreases or increases in the emitter’s voltage cause the barrier to fluctuate up and down, respectively. Thus very small changes in the emitter current control very large changes at the collector, so voilà! Amplification. (EEs will notice that the functions of base and emitter are reversed compared with those in later transistors, where the base, not the emitter, controls the response of the transistor.)

Ungainly and fragile though it was, it was a semiconductor amplifier, and its progeny would change the world. And its inventors knew it. The fateful day was 16 December 1947, when Brattain hit on the idea of using a plastic triangle belted by a strip of gold foil, with that tiny slit separating the emitter and collector contacts. This configuration gave reliable power gain, and the duo knew then that they had succeeded. In his carpool home that night, Brattain told his companions he’d just done “the most important experiment that I’d ever do in my life” and swore them to secrecy. The taciturn Bardeen, too, couldn’t resist sharing the news. As his wife, Jane, prepared dinner that night, he reportedly said, simply, “We discovered something today.” With their children scampering around the kitchen, she responded, “That’s nice, dear.”

It was a transistor, at last, but it was pretty rickety. The inventors later hit on the idea of electrically forming the collector by passing large currents through it during the transistor’s manufacturing. This technique enabled them to get somewhat larger current flows that weren’t so tightly confined within the surface layer. The electrical forming was a bit hit-or-miss, though. “They would just throw out the ones that didn’t work,” Misa notes.

Nevertheless, point-contact transistors went into production at many companies, under license to AT&T, and, in 1951, at AT&T’s own manufacturing arm, Western Electric. They were used in hearing aids, oscillators, telephone-routing gear, in an experimental TV receiver built at RCA, and in the Tradic, the first airborne digital computer, among other systems. In fact, point-contact transistors remained in production until 1966, in part due to their superior speed compared with the alternatives.

The fateful day was 16 December 1947, when Brattain hit on the idea of using a plastic triangle belted by a strip of gold foil…

The Bell Labs group wasn’t alone in its successful pursuit of a transistor. In Aulnay-sous-Bois, a suburb northeast of Paris, two German physicists, Herbert Mataré and Heinrich Welker, were also trying to build a three-terminal semiconductor amplifier. Working for a French subsidiary of Westinghouse, they were following up on very intriguing observations Mataré had made while developing germanium and silicon rectifiers for the German military in 1944. The two succeeded in creating a reliable point-contact transistor in June 1948.

They were astounded, a week or so later, when Bell Labs finally revealed the news of its own transistor, at a press conference on 30 June 1948. Though they were developed completely independently, and in secret, the two devices were more or less identical.

Here the story of the transistor takes a weird turn, breathtaking in its brilliance and also disturbing in its details. Bardeen’s and Brattain’s boss, William Shockley, was furious that his name was not included with Bardeen’s and Brattain’s on the original patent application for the transistor. He was convinced that Bardeen and Brattain had merely spun his theories about using fields in semiconductors into their working device, and had failed to give him sufficient credit. Yet in 1945, Shockley had built a transistor based on those very theories, and it hadn’t worked.

In 1953, RCA engineer Gerald Herzog led a team that designed and built the first “all-transistor” television (although, yes, it had a cathode-ray tube). The team used point-contact transistors produced by RCA under a license from Bell Labs. Transistor Museum/Jerry Herzog Oral History

At the end of December, barely two weeks after the initial success of the point-contact transistor, Shockley traveled to Chicago for the annual meeting of the American Physical Society. On New Year’s Eve, holed up in his hotel room and fueled by a potent mix of jealousy and indignation, he began designing a transistor of his own. In three days he scribbled some 30 pages of notes. By the end of the month, he had the basic design for what would become known as the bipolar junction transistor, or BJT, which would eventually supersede the point-contact transistor and reign as the dominant transistor until the late 1970s.

With insights gleaned from the Bell Labs work, RCA began developing its own point-contact transistors in 1948. The group included the seven shown here—four of which were used in RCA’s experimental, 22-transistor television set built in 1953. These four were the TA153 [top row, second from left], the TA165 [top, far right], the TA156 [bottom row, middle], and the TA172 [bottom, right]. Transistor Museum/Jonathan Hoppe Collection

The BJT was based on Shockley’s conviction that charges could, and should, flow through the bulk semiconductors rather than through a thin layer on their surface. The device consisted of three semiconductor layers, like a sandwich: an emitter, a base in the middle, and a collector. They were alternately doped, so there were two versions: n-type/p-type/n-type, called “NPN,” and p-type/n-type/p-type, called “PNP.”

The BJT relies on essentially the same principles as the point-contact transistor, but it uses two p-n junctions instead of one. When used as an amplifier, a positive voltage applied to the base allows a small current to flow between it and the emitter, which in turn controls a large current between the collector and emitter.

Consider an NPN device. The base is p-type, so it has excess holes. But it is very thin and lightly doped, so there are relatively few holes. A tiny fraction of the electrons flowing in from the emitter combine with these holes and are removed from circulation, while the vast majority (more than 97 percent) keep flowing through the thin base and into the collector, setting up a strong current flow.

But those few electrons that do combine with holes must be drained from the base in order to maintain the p-type nature of the base and the strong flow of current through it. That removal of the “trapped” electrons is accomplished by a relatively small flow of current through the base. That trickle of current enables the much stronger flow of current into the collector, and then out of the collector and into the collector circuit. So, in effect, the small base current controls the much larger collector current.
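To put rough numbers on that (a back-of-the-envelope sketch using the “more than 97 percent” figure above, not the spec of any particular device): if a fraction $\alpha$ of the emitter current makes it across the base to the collector, the base has to carry away only the remaining $1-\alpha$, so the current gain is

$$\beta = \frac{I_C}{I_B} = \frac{\alpha}{1-\alpha}, \qquad \alpha = 0.97 \;\Rightarrow\; \beta \approx \frac{0.97}{0.03} \approx 32,$$

meaning a small wiggle in the base current commands a collector current more than 30 times as large.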

Electric fields come into play, but they do not modulate the current flow, which the early theoreticians thought would have to happen for such a device to function. Here’s the gist: Both of the p-n junctions in a BJT are straddled by depletion regions, in which electrons and holes combine and there are relatively few mobile charge carriers. Voltage applied across the junctions sets up electric fields at each, which push charges across those regions. These fields enable electrons to flow all the way from the emitter, across the base, and into the collector.

In the BJT, “the applied electric fields affect the carrier density, but because that effect is exponential, it only takes a little bit to create a lot of diffusion current,” explains Ioannis “John” Kymissis, chair of the department of electrical engineering at Columbia University.
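Kymissis’s point can be made concrete with the standard textbook idealization (a general model, not a measurement from this article): in the active region, the collector current grows exponentially with the base-emitter voltage,

$$I_C \approx I_S \, e^{V_{BE}/V_T}, \qquad V_T = \frac{kT}{q} \approx 26~\text{mV at room temperature},$$

so raising $V_{BE}$ by a mere 60 mV or so multiplies $I_C$ roughly tenfold: a little bit of applied field producing a lot of diffusion current.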

The very first transistors were a type known as point contact, because they relied on metal contacts touching the surface of a semiconductor. They ramped up output current—labeled “Collector current” in the top diagram—by using an applied voltage to overcome a barrier to charge flow. Small changes to the input, or “emitter,” current modulate this barrier, thus controlling the output current.

The bipolar junction transistor accomplishes amplification using much the same principles, but with two semiconductor interfaces, or junctions, rather than one. As with the point-contact transistor, an applied voltage overcomes a barrier and enables current flow that is modulated by a smaller input current. In particular, the semiconductor junctions are straddled by depletion regions, across which the charge carriers diffuse under the influence of an electric field. Chris Philpot

The BJT was more rugged and reliable than the point-contact transistor, and those features primed it for greatness. But it took a while for that to become obvious. The BJT was the technology used to make integrated circuits, from the first ones in the early 1960s all the way until the late 1970s, when metal-oxide-semiconductor field-effect transistors (MOSFETs) took over. In fact, it was these field-effect transistors, first the junction field-effect transistor and then MOSFETs, that finally realized the decades-old dream of a three-terminal semiconductor device whose operation was based on the field effect—Shockley’s original ambition.

Such a glorious future could scarcely be imagined in the early 1950s, when AT&T and others were struggling to come up with practical and efficient ways to manufacture the new BJTs. Shockley himself went on to literally put the silicon into Silicon Valley. He moved to Palo Alto and in 1956 founded a company that led the switch from germanium to silicon as the electronic semiconductor of choice. Employees from his company would go on to found Fairchild Semiconductor, and then Intel.

Later in his life, after losing his company because of his terrible management, he became a professor at Stanford and began promulgating ungrounded and unhinged theories about race, genetics, and intelligence. In 1951 Bardeen left Bell Labs to become a professor at the University of Illinois at Urbana-Champaign, where he won a second Nobel Prize for physics, for a theory of superconductivity. (He is the only person to have won two Nobel Prizes in physics.) Brattain stayed at Bell Labs until 1967, when he joined the faculty at Whitman College, in Walla Walla, Wash.

Shockley died a largely friendless pariah in 1989. But his transistor would change the world, though it was still not clear as late as 1953 that the BJT would be the future. In an interview that year, Donald G. Fink, who would go on to help establish the IEEE a decade later, mused, “Is it a pimpled adolescent, now awkward, but promising future vigor? Or has it arrived at maturity, full of languor, surrounded by disappointments?”

It was the former, and all of our lives are so much the better because of it.

This article appears in the December 2022 print issue as “The First Transistor and How it Worked.”


Match ID: 12 Score: 5.71 source: spectrum.ieee.org age: 7 days
qualifiers: 5.71 movie

The Stereotype of the Woke Teen Is 'Tárring' Art
Wed, 16 Nov 2022 14:00:00 +0000
An archetype culled from the depths of social media seems to be short-circuiting screenwriters’ creativity.
Match ID: 13 Score: 2.86 source: www.wired.com age: 11 days
qualifiers: 2.86 movie

Celebrating the Art of Code
Tue, 15 Nov 2022 17:32:36 +0000


We use software as the engine that powers and moves our digital world. We create software from code—that is, special languages that we can read and write, and that determine what computers and digital technologies actually do. Writing code is an art: a creative human activity undertaken by individuals and by teams, often using sophisticated tools. The code that people create can be art in another sense as well: for those who are familiar with these special languages, there is beauty to be seen in them, much as we can all appreciate the design of a stunning garden or public park.


The Computer History Museum invites you to accompany us for a yearlong tour through our “Art of Code” exhibition, which will provide many opportunities to explore remarkable stories, events, and historical source-code releases. We will hear from experts and luminaries about how software is created and the important consequences it has for society. We will explore firsthand the source code behind major developments in computing. And we will engage in discussions about critical issues today and their relationship to code.

This cassette, its label marked “apple computer inc.” with “DOS 13 source” and “5/30/78 3:44pm Woz” written by hand, contains the Apple II’s DOS source code, which CHM released in 2013. Computer History Museum

What’s in store during CHM’s Art of Code

In September, we kicked off the Art of Code by marking the 50th anniversary of the breakthrough software language and environment Smalltalk. Smalltalk embraced a fresh, modular, and dynamic approach to the art of code called object-oriented programming. It was also a major step in the use of computers by children. The reverberations of each are still felt today. You can watch the full program of our Smalltalk event here and read a blog recap here.

Children animating horses in Smalltalk-72 on an Alto computer. In 2020, CHM began hosting the Smalltalk Zoo, a collection of historical versions of Smalltalk from 1972 to 1995, in an in-browser emulation. PARC Library

In October, we celebrated this year’s new CHM Fellows, all of whom have made remarkable contributions to the art of code, including Smalltalk; the pathbreaking PLATO system of online communities, learning, and collaboration; and the development of the Internet itself. Watch the full program here and read a blog recap here.

We will announce other events for the Art of Code throughout the coming year.

Ken Thompson [seated] and Dennis Ritchie work with a DEC PDP-11, to which they migrated the UNIX software in 1971. In 2019, CHM released the source code for the earliest version of UNIX. Computer History Museum

Upcoming source code releases: PostScript, Apple Lisa

For programmers, developers, coders, and other students of code, Art of Code will have a remarkable series of historical source-code releases over the coming year. We will begin with the public release of the source code for PostScript, the innovative software behind printing as we know it, the rise of Adobe, and PDF. Closely following will be CHM’s public release of the source code for the Apple Lisa computer, a remarkably influential project by Apple that did so much to bring the graphical user interface and “What You See Is What You Get” approaches into personal computing.

This set of floppy disks, in sleeves labelled “Microsoft,” contains Microsoft’s MS-DOS 2.0. In 2010, CHM released the source code for MS-DOS 1.1 and 2.0. Computer History Museum

Later in the year, look for events and offerings marking the 50th anniversaries of some of the pathbreaking developments in networked personal computing at Xerox PARC: the revolutionary Xerox Alto system and the era-defining network technology of Ethernet. CHM will also be releasing a digital archive of PARC’s extraordinary historical contributions, including source code, documents, images, and more.

Join us!

Please join the Computer History Museum for this Art of Code journey by signing up here for news and updates, and begin your own journey through our Art of Code resources by watching this movie, exploring this playlist of CHM’s historical source-code releases, and visiting Make Software, Change the World! in person and online.

Editor's note: This post originally appeared on the blog of the Computer History Museum.


Match ID: 14 Score: 2.86 source: spectrum.ieee.org age: 12 days
qualifiers: 2.86 movie

NASA’s DART Mission Aims to Save the World
Fri, 23 Sep 2022 15:52:53 +0000


Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon.

“Armageddon is big and noisy and stupid and shameless, and it’s going to be huge at the box office,” wrote Jay Carr of the Boston Globe.

Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live.

DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kilograms, hitting at 22,000 km/h, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs.
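For a sense of scale, here is the simple arithmetic behind that hope, using only the figures quoted above; the momentum-enhancement factor $\beta$ is a standard planetary-defense parameter, not a number given in this article. At 22,000 km/h, or about 6,100 m/s, DART arrives carrying momentum

$$p = mv \approx 600~\text{kg} \times 6{,}100~\text{m/s} \approx 3.7 \times 10^{6}~\text{kg·m/s},$$

and the speed change imparted to the target is roughly $\Delta v \approx \beta p / M$, where $M$ is Dimorphos’s poorly known mass and $\beta \ge 1$ accounts for the extra push from ejecta thrown off the new crater.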

“Maybe once a century or so, there’ll be an asteroid sizeable enough that we’d like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of planetary defense officer at NASA.

“If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.”

So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone.

The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—near-Earth objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium.

The DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIACube cubesat will fly in formation to take images of the impact. Johns Hopkins APL/NASA

But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test.

NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’s speed by perhaps a few centimeters per second.

The DART spacecraft, a 1-meter cube with two long solar panels, is elegantly simple, equipped with a telescope called DRACO, hydrazine maneuvering thrusters, a xenon-fueled ion engine and a navigation system called SMART Nav. It was launched by a SpaceX rocket in November. About 4 hours and 90,000 km before the hoped-for impact, SMART Nav will take over control of the spacecraft, using optical images from the telescope. Didymos, the larger object, should be a point of light by then; Dimorphos, the intended target, will probably not appear as more than one pixel until about 50 minutes before impact. DART will send one image per second back to Earth, but the spacecraft is autonomous; signals from the ground, 38 light-seconds away, would be useless for steering as the ship races in.

The DART spacecraft separated from its SpaceX Falcon 9 launch vehicle 55 minutes after liftoff from Vandenberg Space Force Base, in California, on 24 November 2021. In this image from the rocket, the spacecraft had not yet unfurled its solar panels. NASA

What’s more, nobody knows the shape or consistency of little Dimorphos. Is it a solid boulder or a loose cluster of rubble? Is it smooth or craggy, round or elongated? “We’re trying to hit the center,” says Evan Smith, the deputy mission systems engineer at the Johns Hopkins Applied Physics Laboratory, which is running DART. “We don’t want to overcorrect for some mountain or crater on one side that’s throwing an odd shadow or something.”

So on final approach, DART will cover 800 km without any steering. Thruster firings could blur the last images of Dimorphos’s surface, which scientists want to study. Impact should be imaged from about 50 km away by an Italian-made minisatellite, called LICIACube, which DART released two weeks ago.

“In the minutes following impact, I know everybody is going to be high-fiving on the engineering side,” said Tom Statler, DART’s program scientist at NASA, “but I’m going to be imagining all the cool stuff that is actually going on on the asteroid, with a crater being dug and ejecta being blasted off.”

There is, of course, a possibility that DART will miss, in which case there should be enough fuel on board to allow engineers to go after a backup target. But an advantage of the Didymos-Dimorphos pair is that it should help in calculating how much effect the impact had. Telescopes on Earth (plus the Hubble and Webb space telescopes) may struggle to measure infinitesimal changes in the orbit of Dimorphos around the sun; it should be easier to see how much its orbit around Didymos is affected. The simplest measurement may be of the changing brightness of the double asteroid, as Dimorphos moves in front of or behind its partner, perhaps more quickly or slowly than it did before impact.

“We are moving an asteroid,” said Statler. “We are changing the motion of a natural celestial body in space. Humanity’s never done that before.”


Match ID: 15 Score: 2.86 source: spectrum.ieee.org age: 65 days
qualifiers: 2.86 movie

Most Frequently Asked Questions About NFTs (Non-Fungible Tokens)
Sun, 06 Feb 2022 10:04:00 +0000

 


Non-fungible tokens (NFTs) are among the most popular digital assets today, capturing the attention of cryptocurrency investors, whales, and people from around the world. Many find it amazing that some users spend thousands or even millions of dollars on a single NFT-based image of a monkey or other token when anyone can simply take a screenshot for free. So here we answer some frequently asked questions about NFTs.

1) What is an NFT?

NFT stands for non-fungible token: a cryptographic token on a blockchain with unique identification codes that distinguish it from other tokens. NFTs are unique and not interchangeable, which means no two NFTs are the same. An NFT can represent a unique artwork, GIF, image, video, audio album, in-game item, collectible, and so on.

2) What is Blockchain?

A blockchain is a distributed digital ledger that allows for the secure storage of data. By recording any kind of information—such as bank account transactions, the ownership of Non-Fungible Tokens (NFTs), or Decentralized Finance (DeFi) smart contracts—in one place, and distributing it to many different computers, blockchains ensure that data can’t be manipulated without everyone in the system being aware.
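To make that tamper-evidence concrete, here is a toy hash-chained ledger in Python. It is purely illustrative (real blockchains add peer-to-peer distribution, consensus rules, and digital signatures), and every name in it is invented for the example:

import hashlib
import json

def block_hash(block):
    # Hash a block's full contents; any change to the block changes this value.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    # Each new block records the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "genesis"
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    # Recompute every link; one altered record breaks every hash after it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, "alice pays bob 5 coins")
append_block(chain, "bob mints NFT #1")
print(verify(chain))                           # True
chain[0]["data"] = "alice pays bob 500 coins"  # tamper with history
print(verify(chain))                           # False: the edit is detectable

Distributing copies of such a chain to many computers is what makes quiet manipulation impractical: a forger would have to rewrite every subsequent block on most of the machines at once.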

3) What makes an NFT valuable?


The value of an NFT comes from its ability to be traded freely and securely on the blockchain, which is not possible with other current digital-ownership solutions. The NFT points to its location on the blockchain but doesn’t necessarily contain the digital property. The key idea is fungibility: if you replace one bitcoin with another, you still have exactly the same thing. A non-fungible item, such as a movie ticket, cannot be swapped that way, because each ticket is unique to a specific time and place.

4) How do NFTs work?

One of the defining characteristics of non-fungible tokens (NFTs) is that an asset can be tokenised to create a digital certificate of ownership, which can then be bought, sold, and traded on the blockchain.

As with cryptocurrency, records of who owns what are stored on a ledger that is maintained by thousands of computers around the world. These records can’t be forged, because the whole system operates on an open, distributed network.

NFTs also contain smart contracts—small computer programs that run on the blockchain—that give the artist, for example, a cut of any future sale of the token.
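As a deliberately simplified illustration of those two ideas, a ledger of who owns what plus an automatic royalty on resale, here is a toy model in Python. It is not how any real blockchain or marketplace is implemented, and all the names in it are invented for the example:

class ToyNFTLedger:
    def __init__(self):
        self.owners = {}    # token_id -> current owner
        self.creators = {}  # token_id -> original creator, for royalties
        self.history = []   # append-only record of every transaction

    def mint(self, token_id, creator):
        # Create a unique, non-interchangeable token.
        assert token_id not in self.owners, "token IDs must be unique"
        self.owners[token_id] = creator
        self.creators[token_id] = creator
        self.history.append(("mint", token_id, creator))

    def sell(self, token_id, buyer, price, royalty=0.10):
        # Transfer ownership; the "smart contract" pays the creator a cut.
        seller = self.owners[token_id]
        creator_cut = price * royalty
        self.owners[token_id] = buyer
        self.history.append(("sale", token_id, seller, buyer, price, creator_cut))
        return price - creator_cut, creator_cut

ledger = ToyNFTLedger()
ledger.mint("artwork-001", creator="some-artist")
print(ledger.sell("artwork-001", buyer="collector", price=100.0))
# -> (90.0, 10.0): on every resale, 10 percent flows back to the creator

The point of the sketch is the shape of the data: ownership is just an entry in a shared record, and the royalty logic travels with the token rather than depending on any marketplace’s goodwill.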

5) What’s the connection between NFTs and cryptocurrency?

Non-fungible tokens (NFTs) aren't cryptocurrencies, but they do use blockchain technology. Many NFTs are based on Ethereum, where the blockchain serves as a ledger for all the transactions related to a given NFT and the properties it represents.

6) How to make an NFT?

Anyone can create an NFT. All you need is a digital wallet, some Ethereum tokens, and a connection to an NFT marketplace where you’ll be able to upload and sell your creations.

7) How to validate the authenticity of an NFT?

When you purchase an NFT, that purchase is recorded on the blockchain’s ledger of transactions, and that entry acts as your proof of ownership.

8) How is an NFT valued? What are the most expensive NFTs?

The value of an NFT varies a lot based on the digital asset up for grabs. People use NFTs to trade and sell digital art, so when creating an NFT you should consider the popularity of your digital artwork along with its historical sales data.

In 2021, a digital artist known as Pak created an artwork called The Merge. It sold on the Nifty Gateway NFT marketplace for $91.8 million.

9) Can NFTs be used as an investment?

Non-fungible tokens can be used as investment opportunities: one can purchase an NFT and resell it at a profit. Certain NFT marketplaces also let creators keep a percentage of the proceeds each time an asset they created is resold.

10) Will NFTs be the future of art and collectibles?

Many people want to buy NFTs because doing so lets them support the arts and own something cool from their favorite musicians, brands, and celebrities. NFTs also give artists an opportunity to program in continual royalties when their work is resold. Galleries see this as a way to reach new buyers interested in art.

11) How do we buy NFTs?

There are many places to buy digital assets, like OpenSea, and their policies vary. On NBA Top Shot, for instance, you sign up for a waitlist that can be thousands of people long. When a digital asset goes on sale, you are occasionally chosen to purchase it.

12) Can I mint an NFT for free?

To mint an NFT, you must normally pay a gas fee to process the transaction on the Ethereum blockchain, but you can mint your NFT on a different blockchain, called Polygon, to avoid paying gas fees. This option is available on OpenSea; it simply means that your NFT can only be traded on Polygon’s blockchain and not Ethereum’s. Mintable also allows you to mint NFTs for free, without paying any gas fees.

13) Do I own an NFT if I screenshot it?

The answer is no. Non-fungible tokens are minted on blockchains such as Ethereum, Solana, and Polygon. Once a non-fungible token is minted, the transaction is recorded on the blockchain, and the contract or license is awarded to whoever holds that token in their wallet.

14) Why are people investing so much in NFTs?


Non-fungible tokens have won the hearts of people around the world, and they have given digital creators the recognition they deserve. One of the remarkable things about non-fungible tokens is that you can take a screenshot of one, but you still don’t own it. That’s because when a non-fungible token is created, the transaction is stored on the blockchain, and the license or contract to hold the token is awarded to the person holding it in their digital wallet.

You can sell your work and creations by attaching a license to them on the blockchain, where ownership can be transferred. This lets you get exposure without losing full ownership of your work. Some of the most successful projects include CryptoPunks, Bored Ape Yacht Club, The Sandbox, and World of Women. These NFT projects have gained popularity globally and are owned by celebrities and other successful entrepreneurs. Owning one of these NFTs can serve as a ticket to exclusive business meetings and life-changing connections.

Final Thoughts

That’s a wrap. I hope you found this article enlightening; I’ve answered these questions to the best of my knowledge of NFTs. If you have any questions or suggestions, feel free to drop them in the comment section below. I also have a question for you: is bitcoin an NFT? Let me know in the comments.






Match ID: 16 Score: 2.86 source: www.crunchhype.com age: 294 days
qualifiers: 2.86 movie

Filter efficiency 97.792 (17 matches/770 results)

ABOUT THE PROJECT

RSS Rabbit links users to publicly available RSS entries.
Vet every link before clicking! The creators accept no responsibility for the contents of these entries.

Relevant

Fresh

Convenient

Agile

CONTACT

We're not prepared to take user feedback yet. Check back soon!

rssRabbit quadric