Windows 95 turns 20 – and new ways of interacting show up the desktop’s age
The arrival of Microsoft Windows 95 on August 24, 1995 brought about a desktop PC boom. With a graphical user interface that was easier and more intuitive than previous versions, it appealed to more than just business users, and Bill Gates’ stated aim of a computer on every desk and in every home was set in motion. This was a time of 320MB hard drives, 8MB of RAM and 15" CRT monitors. For most home users, the internet had only just arrived.
Windows 95 introduced the Start menu, opened via a button in the bottom-left corner of the desktop. This gave a central point of entry into menus from which to choose commands and applications, making it easy for users to find commonly used documents and programs. All subsequent versions of Windows have kept this menu, with the notable exception of Windows 8 – a change that prompted an enormous backlash.
We take these intuitive graphical interfaces for granted today, but earlier operating systems such as DOS and CP/M allowed the user to interact only through typed text commands. This began to change in the 1960s and 1970s, with Ivan Sutherland’s work on Sketchpad and the use of lightpens to control CRT displays, Douglas Engelbart’s development of the computer mouse, and the Xerox PARC research team’s creation of the windows, icons, menus, pointer (WIMP) paradigm – the combination of mouse pointer, windows and icons that remains standard to this day. By the early 1980s, Apple had developed graphical operating systems for its Lisa (released 1983) and Macintosh (1984) computers, and Microsoft had released Windows (1985).
Imagining a desktop
All these interfaces rely on the central idea of the desktop, a comprehensible metaphor for a computer. We work with information in files and organise them in folders, remove unwanted information to the trash can, and note something of interest with a bookmark.
Metaphors are useful. They enable users to grasp concepts faster, but they depend on the metaphor remaining comprehensible to the user and workable for the designer and programmer putting it into effect – without being stretched beyond belief. The advantage is that the pictures used to represent functions (icons) resemble their workplace counterparts, and so the metaphor is readily understood.
But 20 years after Windows 95, the world has changed. We have smartphones and smart televisions, and we use the internet for practically everything. Touchscreens are now arguably more ubiquitous than the classic mouse-driven interface, and screen resolutions are so high that individual pixels can be difficult to see. We still have Windows, but things are changing. Indeed, they need to change.
The desktop metaphor has been the metaphor of choice for so long, and this ubiquity has helped computers find a place within households as a common, familiar tool rather than as specialist, computerised equipment. But is it still appropriate? After all, few of us sit in an office today with paper-strewn desks; books are read on a tablet or phone rather than as hard copies; printing emails is discouraged; most people type their own letters and write their own emails; files are electronic, not physical; we search the internet for information rather than flick through reference books; and increasingly the categorisation and organisation of data has taken second place to granular search.
Mouse-driven interfaces rely on a single point of input, but we’re increasingly seeing touch-based interfaces that accept swipes, touches and shakes in various combinations. We are moving away from the dictatorship of the mouse pointer. Dual-finger scrolling and pinch-to-zoom are emerging metaphors – natural user interfaces (NUI) rather than graphical user interfaces.
What do the next 20 years hold?
It’s hard to tell, but one thing is certain: interfaces will make use of more human senses, both to display information and to control the computer. They will become more transparent and more intuitive, and less built around items such as boxes, arrows or icons. Human gestures will be more commonplace. And such interfaces will be incorporated into technology throughout the world, through virtual reality and augmented reality.
These interfaces will appear and feel more natural. Some suitable devices already exist, such as the ShiverPad, which applies shear forces to a surface to give touch devices a frictional feel, or Geomagic’s Touch X (formerly the Sensable Phantom Desktop), which delivers three-dimensional forces to make virtual 3D objects feel solid.
Airborne haptics are another promising technology, creating tactile interfaces in mid-air. Using ultrasound, users can feel acoustic radiation fields emanating from a device without needing to touch any physical surface. Videogame manufacturers have led the way with such interfaces: Microsoft’s Kinect and HoloLens allow users to control an interface with body gestures, or with their eyes through head-mounted displays.
Once a computer or device can be commanded with natural gestures, movements of the body or spoken commands, the Windows-based metaphor of computer interaction begins to look as dated as it is.
Jonathan Roberts is Senior Lecturer in Computer Science at Bangor University
This article was originally published on The Conversation. Read the original article.