Originally published in the December 1995 issue of Systems Contractor News.

When the Medium Hides the Message

by Barry McKinnon

of Mc2Systems Design Group

Multimedia-based presentation systems in education, corporate facilities, and even retail outlets are a much more common sight now than even a year or two ago. Display hardware costs seem to drop substantially with each passing month as more capable and feature-laden hardware hits the market. Steady advances in computer technology are allowing the multimedia sources to be brought together on one platform. There is more software available, and apparently more reasons to use the hardware and software. Where the world of multimedia presentation was once restricted to large, media-rich corporations, now virtually anyone with a competent desktop computer can put together a mind-blowing visual extravaganza and show it on an affordable display and delivery system.

What is the growth of digitally sourced multimedia going to mean to the systems contractors and consultants in the presentation system market? We've already seen some changes in the balance of hardware being attached to video display systems. A scant five years ago, a 3/4" SMPTE-equipped U-Matic source deck would have been the norm for a video wall. Then Laserdisc became the best choice for a source when it became economical to burn short-run discs without huge pre-production costs. Fast optical media storage is still dropping in cost, and huge SCSI hard drives are dirt cheap (who would have believed a 1.1GB hard drive for less than $400 even a year ago?). Combine that with advances in video compression and the commonplace nature of PC-based video editing systems, and you get a very different view of what presentation hardware will involve in the future. What I'm curious to know is: are we finally going to be able to take the user interface to the next level, to make it easier to navigate and operate these systems?

Many of the basic visual design criteria for developing presentation material can also be applied to the development of the interface between the presenter and the system. Sorting through and combining the print, graphics, audio, film/video, and even educational planning elements into the new hybrid of electronic multimedia is going to take some time. As a culture, we have been exposed to so much media, in so many varieties, that we have become blind to how we interact with it, or to the reasons why some media are preferred over others. As multimedia display system designers and installers, we have seldom had training in those individual specialties; instead we often bring our own technical-knowledge bias to the layout of the user interface. The steady move to a digital computer platform as the multimedia source also brings new opportunities to develop a true user interface instead of an "operator interface."

One of the major issues we'll have to address is neatly described by a relatively new term (Jargon Alert!): "cognitive bandwidth." This is a useful bit of new terminology because it describes in human terms what we have long understood as a design limitation in technical systems: bandwidth determines how much information we can deliver. Cognitive bandwidth implies that there is a limit to how much information our brain can process at once, and it is easy to see the effects of that limit. When driving a car you have to process a huge amount of simultaneous information, and in busy environments with dense, high-speed traffic, pedestrians, bicycles, motorcycles, traffic lights, and street signs, you have to filter out advertising signs, blinking lights, and other irrelevant data in order to keep processing the important data. The aerospace industry has been developing audible information and warning systems for fighter pilots because their visual bandwidth limits have already been exceeded. Cognitive bandwidth is a major factor in the development of the control interface for multimedia presentation systems, as well as the media delivered on those systems, but it is often not recognized as such. What the user wants is the relevant information presented to them when they need it, in the form most easily digested at that moment. The concept is not new, but its application to a wider technology base is.

The next thing we have to think about is the notion of "interactivity." Interactivity is a word that has been stretched and tweaked to apply to hundreds of situations, many of which aren't really interactive at all. Some may try to convince you that a light switch is interactive, since you have to activate or deactivate it, but that does not really imply interactivity: it has only two states, on or off, and there is not much opportunity for either the user or the system to negotiate other states. You interact with other people when you both have some common goal or purpose. (Stand-up comics also experience interactivity with hecklers, and that can be both irritating and unpleasant for the comic, so we might keep that in mind as negative imagery of what interactivity shouldn't be.) Interactive technology implies a number of degrees of freedom in the system, and that the system configuration and the user's needs should converge to complete some common goal. A system that was truly interactive would reduce the cognitive bandwidth required to operate it, since the system would already be moving toward the goal it shares with the user.

The computer world has held this as a goal for quite some time. Nicholas Negroponte (the founder of MIT's Media Lab) described a very interactive system in his 1970 book "The Architecture Machine." The book described an advanced version of the smart-agent technology currently on the market as the Smart House, where the integrated control system in the house learns the patterns of use of the various systems and can anticipate the likely requirements of the users, providing the settings they select most often. This greatly reduces the cognitive bandwidth required of the user. This is the kind of transparency of operation we strive for in everyday objects. When you get in your car in the morning, you really don't want to think about the operation of the carburetor, pistons, transmission, and all the other mechanical bits that work together to get you from point A to point B.
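The anticipatory behaviour described above is simple enough to sketch in a few lines of code. The sketch below is a hypothetical illustration only (the class and situation names are invented, and it does not represent any actual Smart House product): it tallies which setting the user picks in each situation and pre-selects the most frequent one, so the user faces fewer choices.

```python
from collections import Counter, defaultdict

class AnticipatorySettings:
    """Learns which setting a user most often picks in a given
    situation (e.g. time of day) and offers it as the default."""

    def __init__(self):
        # situation -> tally of settings chosen in that situation
        self.history = defaultdict(Counter)

    def record_choice(self, situation, setting):
        """Remember that the user picked `setting` in `situation`."""
        self.history[situation][setting] += 1

    def suggest(self, situation, fallback=None):
        """Return the setting most often chosen in `situation`,
        or `fallback` if the situation has never been seen."""
        counts = self.history.get(situation)
        if not counts:
            return fallback
        return counts.most_common(1)[0][0]

# Example: the house learns the user's evening lighting preference.
agent = AnticipatorySettings()
agent.record_choice("weekday evening", "lights dim")
agent.record_choice("weekday evening", "lights dim")
agent.record_choice("weekday evening", "lights full")
print(agent.suggest("weekday evening"))  # -> lights dim
```

A real system would weigh many more inputs, but even this toy version shows the principle: the interface converges on the user's goal instead of presenting the full menu of options every time.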

Think about the computer software you use most often: as part of your learning curve, you battle through a period that has much more interactivity than you have patience for. You have to make dozens of choices to sort out the features, approach, or method most suitable for your application. New default settings get stored; you find out which features you don't need and deactivate them. You eventually get to a point where you can make the software do just what you want, and you don't have to think about all those hundreds of options again until something different comes up. That is interactivity, but you may want to ask yourself why you have the learning curve at all, and why your cognitive bandwidth was being consumed, when all you want is the end product, be it a document or a drawing. Why is it that interactive technology makes the person using it adapt to the user interface? My favorite example of a smart interactive interface showed up in a book published in 1979. This interface displayed the screen prompt "Don't Panic" whenever the situation was dire and the cognitive bandwidth of the user was most limited. The book was Douglas Adams's fictional (and really funny) "The Hitchhiker's Guide to the Galaxy," and the Guide it described could almost be the benchmark for people developing the concept of the Personal Digital Assistant. It presented the information you needed when you needed it, and didn't waste a lot of cognitive bandwidth on irrelevant choices and information.

I recently read an article by George E. Lewis, titled "Singing the Alternative Interactivity Blues," in the paper version of FRONT, a Vancouver arts magazine. Lewis is a well-known figure in the computer musical instrument interface field and in the interactive performance arts. It was an interesting look at interactivity from a different perspective, and he had some pointed comments about interactivity in general. "Computer programs, like all texts, are not 'objective' or 'universal', but instead represent the ideas of their particular creators; a closer look at a given software system reveals the characteristics of the community of thought and culture that produced it," writes Lewis. Later in the article he writes, "For now, let me say that when I interact with a computer, I'm looking for an experience that will let me explore my many worlds. I don't want to be patronized or talked down to or directed or mystified or imposed upon or propagandized or followed around in some silly way." While Mr. Lewis was describing an interactive artistic media interface, it would be safe to say that he has described what any user expects from an interactive interface. As designers and vendors of presentation system interfaces, we need to keep those comments in mind as we incorporate our own technical-culture biases and viewpoints into our designs. It is very easy to forget that the user of a system almost certainly has a different technical-culture background from our own. The interface should let them access the functions of the system without consuming a lot of cognitive bandwidth figuring out the process. The ideal interface would have a zero-time learning curve: it would be completely intuitive and anticipatory.

There is a belief that icon-based control interfaces (the infamous GUI, or Graphical User Interface) are the route to this goal. The advantage of icons is that they can be culturally neutral, or at least minimize cultural interpretation errors. As a culture, we now recognize the icons for both men's and ladies' washrooms, but we're still having difficulty with the handicapped parking icon. Have you taken a good look at the icons that are now part of most PC software? There are many features in the software, and each one needs an icon, so the icons are becoming increasingly cryptic in describing what they represent. The simple days of 'open a file' and 'close a file' icons have given way to what amounts to a computerese version of Kanji: a new pictographic language that, once again, requires a learning curve to work out the relationship between image and function. You do have the choice of waiting for the text callout to pop up and tell you what the icon means. To the rational interface designer, this should be the signal that the information density limits of the icon have been exceeded. If an icon requires text to describe its function, then perhaps the function should simply be described by text, reducing the cognitive bandwidth the user needs to find and activate the desired function. Now that PC-based touch-screen control software is being introduced by the major control system manufacturers, the ability will soon exist to add elaborate on-screen graphics where only simple text boxes and buttons existed before. We have to be careful to ensure that more elaborate control interfaces do not slow down the user's cognitive process of making the system produce output.

The graphics and text used in a control interface can be described as fitting into two general categories of media: active and passive. These are descriptors of the viewer's involvement, not of the media themselves. An active medium, like the written word, requires the reader to be actively involved: to read and understand the material presented, to form their own mental images, and to fit the concepts into their own world view. The reader has to take an active role in acquiring the information. A passive medium, like a music video, provides the viewer with the images and the context at an established speed and in sequence, with neither the requirement nor the available time to think about the images or what the context may mean. The viewer assumes a passive role in accepting the information. Evaluated in terms of cognitive bandwidth, active media require more of it and passive media less. This is an important part of developing a control interface with a short learning curve. It is also very important in the development of truly interactive interfaces, as there are always functions that require the context of their operation to be defined in a larger context (active), and functions whose context can travel with the media, be it a word or a picture (passive).

The slowest and most tiresome interface is one that requires a lot of time and effort to work out the relationship between the words, buttons, or images and the desired outcome. The most satisfying interface is one that tells the user at a glance what functions are available and invites the user to step right up and operate the system. For "techies" this kind of interface can be difficult to develop, because the technology of the presentation system is well known and understood; it is already transparent to the system designer. It can be very difficult to step away from the hardware know-how, look at the interface from the point of view of a system user, and try to make the system's operation transparent instead of the hardware.

This is the one area that offers the greatest promise in the growth of single-platform multimedia presentation systems. As the computer hardware becomes more capable of delivering both the audio and visual portions of a multimedia presentation, more of the control interface can become transparent. The current HTML language, and the newer Java language, both used for the manipulation of text and graphics on a Web page, offer some interesting promise for a friendlier control interface. Java has some especially interesting possibilities, since it is intended to call up and run tiny programs (applets) that could tailor the interface, in both appearance and function, to each type of presentation. While these protocols are designed for WWW network use, they would also work nicely on a host multimedia presentation computer, with or without a network connection. System functions that need to be placed in context to be understood can easily be hypertext-linked to the functions they relate to. A sentence that describes the context in which the command words are used can have those command words activate the commands. System functions that can stand alone (carry their own context) can be represented by graphic elements that are as complex (information dense) as required, and can be mapped to call any function. Instead of having to choose one form of interface (small cryptic icons or text prompts), it will be possible to combine graphic command elements with text command elements in an optimum blend of active and passive media. The control interface will become true multimedia, both active and passive, used where each provides the most benefit.
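The idea of command words embedded in a descriptive sentence can be sketched without any Web machinery at all. The fragment below is a hypothetical illustration (the prompt sentence, bracket markup, and function names are all invented for the example): words wrapped in brackets inside a plain-English sentence are treated as live commands bound to system functions, so the context and the control travel together, just as they would in a hypertext page.

```python
import re

# Hypothetical system functions the interface can trigger.
def start_show():
    return "presentation running"

def raise_screen():
    return "screen raised"

# Command words are written [like this] inside an ordinary sentence,
# so the sentence itself supplies the context for each command.
PROMPT = "To begin, [start] the presentation; when finished, [raise] the screen."

COMMANDS = {"start": start_show, "raise": raise_screen}

def command_words(prompt):
    """Return the live command words found in the prompt text."""
    return re.findall(r"\[(\w+)\]", prompt)

def activate(word):
    """Run the system function bound to a command word."""
    return COMMANDS[word]()

print(command_words(PROMPT))   # -> ['start', 'raise']
print(activate("start"))       # -> presentation running
```

In an HTML or applet-based interface the brackets would become hyperlinks or buttons, but the design point is the same: the explanatory sentence carries the commands, so no separate legend of icons or menu items is needed.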


Systems Contractor News
a United Entertainment Media Publication

United Entertainment Media Inc.
460 Park Avenue South, 9th Floor
New York, NY, 10016
Ph. 212-378-0400
FAX 212-378-2160