It seems that you can hardly go to a computer conference without seeing a videotape of a futuristic computer system that talks to you from a wall, desk, or some random appliance. Is this the interface of the future? Over the last decade, Ben Shneiderman, head of the University of Maryland's Human-Computer Interaction Laboratory and author of Designing the User Interface: Strategies for Effective Human-Computer Interaction (Addison-Wesley, 1992), has been the most forceful voice against anthropomorphic interfaces. He argues that users want a sense of direct and immediate control over computers that differs from how they interact with people. He presents several examples of these predictable and controllable interfaces developed in the lab at UM.
THE VISION OF COMPUTERS AS INTELLIGENT machines is giving way to one based on the use of predictable and controllable user interfaces. The computer appears to vanish, and users directly manipulate screen representations of familiar objects and actions to accomplish their goals. Predictable and controllable interfaces have certain desirable qualities that let users
+ Have a clear mental model of what is possible and what will happen in response to each action.
+ Repeat desired sequences of action to achieve their goals.
+ Recover from errors easily.
+ Alter the interface to suit their needs.
None of these qualities are found to the same degree in intelligent machines. Indeed, users often don't know what the machine is going to do next.
But a more troubling issue is the choice of "intelligent" as a label for technology. The obvious comparison is to humans. But is this necessarily a good thing? The metaphors and terminology we choose can shape the thoughts of everyone from researchers and designers to members of Congress and the press. We have a responsibility to choose the best metaphor possible for the technology we create.
WHY NOT INTELLIGENT? I am opposed to labeling computers as "intelligent" for several reasons. First, such a classification limits the imagination. We should have much greater ambition than to make a computer behave like an intelligent butler or other human agent. Computer-supported cooperative work, hypertext/hypermedia, multimedia, information visualization, and virtual reality are powerful technologies that enable human users to accomplish tasks that no human has ever done. If we describe computers in human terms, we run the risk of limiting our ambition and creativity in the design of future computer capabilities. In the same way that most of us have learned to use terminology not specific to any gender, we should now learn not to limit designers of computers with the tag "intelligent" or "smart."
Second, the qualities of predictability and control are desirable. If machines are intelligent or adaptive, they may have fewer of these qualities. Usability studies at the University of Maryland show that users want the feelings of mastery, competence, and understanding that come from a predictable and controllable interface. Most users seek a sense of accomplishment at the end of the day, not the sense that some intelligent machine magically did their job for them.
Another reason I'm concerned about this label is that it limits or even eliminates human responsibility. I am concerned that if designers succeed in convincing users that computers are intelligent, then users will have a reduced sense of responsibility for failures. The tendency to blame the machine is already widespread, and I think we will be on dangerous ground if we encourage this trend. As part of my work, I collect newspaper articles about computers, some of which bear the headlines "Victims of Computer Error Go Hungry," "IRS Computers Err on Refund Reports," and "Computers That 'Hear' Taking Jobs" - all of which seem to absolve human operators by implicating the machine.
Finally, I have a basic philosophical objection to the "intelligent" label. Machines are not people, nor can they ever become so. For me, computers have no more intelligence than a wooden pencil. If you confuse the way you treat machines with the way you treat people, you may end up treating people like machines, which devalues human emotional experiences, creativity, individuality, and relationships of trust. I know that many of my colleagues are quite happy to call machines intelligent and knowledgeable, but I prefer to treat and think about machines in very different ways from the way I treat and think about people.
LEARN FROM HISTORY. While some productive work has been done under the banner of "intelligent," often those who use this term reveal how little they know about what users want or need. The user's goal is not to interact with an intelligent machine, but to create, communicate, explore, plan, draw, compose, design, or learn. Ample evidence exists of the misguided directions brought by intelligent machines.
+ Natural-language interaction seems clumsy and slow compared to direct manipulation and information-visualization methods that use rapid, high-resolution, color displays with pointing devices. Lotus HAL is gone; Artificial Intelligence Corp.'s Intellect hangs on but is not catching on. Although there are some interesting directions for tools that support human work through natural-language processing (aiding human translators, parsing texts, and generating reports from structured databases), this is different from natural-language interaction.
+ Speech I/O in talking cars and vending machines has not flourished. Voice recognition is fine for handicapped users and special situations, but doesn't seem to be viable for widespread use in office, home, or school settings. Our recent studies suggest that speech I/O interferes more with short-term and working memory than does hand-eye coordination for menu selection by mouse. Voice store-and-forward, phone-based information retrieval, and voice annotation have great potential, but these are not intelligent applications.
+ Adaptive interfaces may be unstable and unpredictable, often leading users to worry about what will change next. I see only a modest chance for success in user modeling that recognizes the level of expertise and automatically revises the interface accordingly - can anyone point to successful studies or commercial products? By contrast, user-controlled adaptation through control panels, cruise control for cars, and remote controls for TVs are success stories. While algorithms are needed to deal with dynamic issues in network and disk-space management, the user should directly control the application program's task-domain and user-interface issues.
+ Intelligent computer-assisted instruction, as compared to traditional CAI, served only to prolong the period during which users felt they were victims of the machine. Newer variations such as intelligent tutoring systems are giving way to interactive learning environments, in which students are in control and actively creating or exploring.
+ Intelligent, talking robots with five-fingered hands and human facial features (a quaint fantasy that did well in Hollywood but not in Detroit) have mostly given way to flexible manufacturing systems that enable supervisors to specify behavior with predictable results.
It seems that some designers continue to ignore this historical pattern and still dream of creating intelligent machines. It is an ancient and primitive fantasy, and it seems most new technologies must pass through this child-like animistic phase. Lewis Mumford identified this pattern (Technics and Civilization, Harcourt Brace, 1934) when he wrote "the most ineffective kind of machine is the realistic mechanical imitation of a man or another animal ... for thousands of years animism has stood in the way of ... development."
REALIZING A NEW VISION. I see a future filled with powerful, but predictable and controllable computers that will genuinely serve human needs. Visual, animated, colorful, high-resolution interfaces will be built on promising strategies like informative and continuous feedback, meaningful control panels, appropriate preference boxes, user-selectable toolbars, rapid menu selection, easy-to-create macros, and comprehensible shortcuts. Users will be able to specify rapidly, accurately, and confidently how they want their e-mail filtered, what documents they want retrieved and in what order, and how their documents will be formatted.
APPLICATION. Our Human-Computer Interaction Laboratory has applied these principles to information-visualization methods that give users X-ray vision to see through their mountains of data. Techniques include tree maps and dynamic queries.
+ Tree maps. Tree maps let users see (and hear) two to three thousand nodes of hierarchically structured information by using every pixel on the display. Each node is represented by a rectangle whose location preserves the logical tree structure and whose area is proportional to one of its attributes. Color represents a second attribute and sound a third.
Brian Johnson and David Turo of UM are applying tree maps to Macintosh directory browsing. Figure 1 shows a screen from TreeViz, an interface that uses this technique. Users can set area to file size, color to application type, and sound to file age.
When users first try TreeViz they usually discover duplicate or misplaced files, redundant and chaotic directories, and many useless files or applications because they can now see all their files at once. They can then apply their human perceptual skills to detect patterns and exceptions with remarkable speed.
Tree maps have also been applied to the management of stock-market portfolios, sales data, voting patterns, and even sports (in basketball alone, there are 48 statistics on 459 NBA players, in 27 teams, in four divisions).
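The space-filling layout behind a tree map can be sketched in a few lines. The following Python is a minimal slice-and-dice illustration, not TreeViz itself: the nested-dict tree, the field names, and the sizes are hypothetical stand-ins for a file hierarchy, and a real display would also map color and sound to further attributes.

```python
def treemap(node, x, y, w, h, depth=0, out=None):
    """Assign each node a rectangle whose area is proportional to its size.

    Children subdivide their parent's rectangle along alternating axes,
    so the on-screen nesting preserves the logical tree structure.
    """
    if out is None:
        out = []
    out.append((node["name"], x, y, w, h))
    children = node.get("children", [])
    if children:
        total = sum(c["size"] for c in children)
        offset = 0.0
        for c in children:
            frac = c["size"] / total
            if depth % 2 == 0:  # split horizontally at even depths
                treemap(c, x + offset * w, y, w * frac, h, depth + 1, out)
            else:               # split vertically at odd depths
                treemap(c, x, y + offset * h, w, h * frac, depth + 1, out)
            offset += frac
    return out

# Hypothetical directory tree; "size" stands in for file size.
tree = {"name": "root", "size": 100, "children": [
    {"name": "apps", "size": 60},
    {"name": "docs", "size": 40, "children": [
        {"name": "old", "size": 10},
        {"name": "new", "size": 30},
    ]},
]}

# Lay the whole tree out in a 100-by-100 display area.
rects = treemap(tree, 0, 0, 100, 100)
```

Because every leaf claims area in proportion to its size, the leaf rectangles together tile the entire display, which is what lets users spot oversized or misplaced files at a glance.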
+ Dynamic queries. These animations let you rapidly adjust query parameters and immediately display updated result sets, which makes them very effective when a visual environment like a map, calendar, or schematic diagram is available. The immediate display of results lets users more easily develop intuitions, discover patterns, spot trends, find exceptions, and see anomalies.
Figure 2 shows a screen from Dynamic HomeFinder, a prototype interface for real-estate agents that uses dynamic queries, written by Christopher Williamson of UM. Users can adjust the cost, number of bedrooms, and location of the A and B markers, among other characteristics, and points of light appear on a map to indicate a home that matches their specifications. Clicking on a point of light brings up a home description or image.
Users of Dynamic HomeFinder can execute up to 100 queries per second (rather than one query per 100 seconds, as is typical in a database query language), producing a revealing animated view of where high- or low-price homes are found - and there are no syntax errors.
Our empirical study of 18 users showed Dynamic HomeFinder to be more effective than a natural-language interface using Q&A from Symantec (C. Williamson and B. Shneiderman, "The Dynamic HomeFinder: Evaluating Dynamic Queries in a Real-Estate Information Exploration System," Proc. SIG Information Retrieval, ACM Press, 1992, pp. 338-346).
Dynamic queries can also be easily applied with standard text-file output, as Figure 3 shows. Dynamic queries exemplify the future of interaction: you don't need to describe your goals, negotiate with an intelligent agent, and wait for a response - you JUST DO IT! Furthermore, dynamically seeing the results lets you explore and rapidly reformulate your goals in an engaging, videogame-like manner.
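The essence of a dynamic query - recompute and redisplay the entire result set on every slider movement - can be sketched briefly. This Python fragment is a hypothetical illustration, not HomeFinder's code; the field names and sample homes are invented, and a real interface would redraw the map after each call.

```python
# Hypothetical in-memory database of homes for sale.
homes = [
    {"id": 1, "price": 180_000, "bedrooms": 3},
    {"id": 2, "price": 95_000,  "bedrooms": 2},
    {"id": 3, "price": 240_000, "bedrooms": 4},
]

def dynamic_query(max_price, min_bedrooms):
    """Recompute the full result set for the current slider positions.

    Because filtering is immediate and exhaustive, there is no query
    language to type and therefore no possibility of a syntax error.
    """
    return [h for h in homes
            if h["price"] <= max_price and h["bedrooms"] >= min_bedrooms]

# Each call simulates one slider movement; the display would be
# refreshed with the new result set immediately afterward.
print([h["id"] for h in dynamic_query(200_000, 2)])  # → [1, 2]
print([h["id"] for h in dynamic_query(200_000, 3)])  # → [1]
```

Tightening the bedroom slider by one step instantly drops home 2 from the display, which is exactly the kind of immediate, reversible feedback that lets users explore rather than compose queries.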
Open problems in information visualization include screen organization, widget design, algorithms for rapid search and display, use of color and sound, and strategies to accommodate human perceptual skills. We also see promise in expanding macro makers into the graphical environment with visual triggers based on the controlled replay of desired actions-the general idea is PITUI to DWID (programming in the user interface to do what I did).
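A record-and-replay mechanism of this kind can be sketched as follows. This Python class is only a hypothetical illustration of the PITUI-to-DWID idea - recording actions as the user performs them, then replaying them on demand - not an implementation from the lab, and the action names are invented.

```python
class MacroRecorder:
    """Capture user actions as they happen, then "do what I did" on demand."""

    def __init__(self):
        self.actions = []       # (function, args) pairs in the order performed
        self.recording = False

    def start(self):
        self.actions, self.recording = [], True

    def stop(self):
        self.recording = False

    def perform(self, func, *args):
        """Run an action now; also capture it if a macro is being recorded."""
        if self.recording:
            self.actions.append((func, args))
        return func(*args)

    def replay(self):
        """Re-run the recorded actions in their original order."""
        return [func(*args) for func, args in self.actions]

# Hypothetical user-interface action: selecting a file logs the event.
log = []
def select_file(name):
    log.append(f"select {name}")

rec = MacroRecorder()
rec.start()
rec.perform(select_file, "report.txt")
rec.perform(select_file, "notes.txt")
rec.stop()
rec.replay()  # repeats both selections exactly as the user performed them
```

Because the macro is built from actions the user already carried out in the interface, it stays predictable and controllable: replaying it does exactly what was demonstrated, nothing more.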
CONCLUSION. If you agree with this design philosophy - and especially if you disagree - I hope that you will add to our scientific knowledge by conducting well-designed empirical studies of learning time, measuring performance time for appropriate tasks, recording error rates, evaluating human retention of interface features, and assessing subjective satisfaction. There's much work to be done to make computing accessible, effective, and enjoyable.
I especially want to encourage the exploration of new metaphors and visions of how computers can empower people by presenting information, allowing rapid selection, supporting personally specified automation, and providing relevant feedback. Metaphors related to controlling tools or machines - such as driving, steering, flying, directing, conducting, piloting, or operating - seem more likely to generate effective and acceptable interfaces than the metaphor of intelligent machines.
This column was prompted by discussion between Mark Weiser and Bill Hefley; stimulated by lively e-mail and personal discussion with Paul Resnick, Tom Malone, and Christopher Fry at MIT; and refined by comments from Catherine Plaisant, Rick Chimera, Brian Johnson, David Turo, Richard Huddleston, and Richard Potter at the Human-Computer Interaction Lab at the University of Maryland. I also appreciate Bill Curtis's support of this vision. Thanks to all.
Figure 1. A treemap showing more than 600 files using TreeViz (written by Brian Johnson of UM and available from UM's Office Technology Liaison, (301) 405-4210). Alphabetical order and directory structure are preserved, color shows file type, and area is proportional to file size.
Figure 2. Dynamic HomeFinder lets users adjust sliders to express queries and see points of light, which represent homes for sale, come and go dynamically (written by Christopher Williamson of UM).
Figure 3. Dynamic queries method applied to home-real-estate database but with textual output that is rewritten after each slider is adjusted (written by Vinit Jain of UM).