Talk:Multi-touch interface/archive history

From Citizendium

This is an archive of the [[Multi-touch_interface]] history section. This page links from the [[Talk:Multi-touch_interface]] page.


----


One of the first, and most significant, inventions leading to the creation of multi-touch technology was the development of a touch screen able to recognize a single touch. The first accepted example of this technology is the PLATO IV, developed in the 1970s by the University of Illinois Computer-based Education Research Laboratory. This computer-assisted educational device recognized single-touch, non-pressure-sensitive input from the user[1]. The touch panel consisted of a 16 × 16 grid of sensors and infrared light-emitting diodes around the plasma panel, allowing the computer to identify a touch in any of 256 regions (Sherwood, 1972)[2].

In 1983, Myron Krueger completed a prototype of VIDEOPLACE, which allowed a user to interact with images on a vision-based system via a set of predefined gestures. Though the system did not rely on touch, its significance was that it could recognize a number of gestures, such as pointing, pinching, and dragging[3].

In 1985, the first multi-touch tablet using capacitive touch sensing was developed by the Input Research Group at the University of Toronto. This tablet was capable of sensing more than one point of contact at a time, determining the location of each and measuring its degree of contact. The additional significance of this tablet at the time was its method of scanning user input and its ability to detect contact with a high degree of resolution[4].

In 1991, Rank Xerox EuroPARC developed a multi-touch desktop display that aimed to give the user's desk and workspace the properties of an electronic workspace. The significance of this project is that, rather than using a capacitive touch screen, it used a computer-controlled camera in combination with a projector mounted above the desk to sense user input. In this setup, the camera detected the location of the user's input and what they were pointing to, while the projector displayed feedback and electronic objects onto the surface[5].

In 1992, IBM and BellSouth released the first touch-screen smart phone, called Simon. Input relied on both stylus and touch, replacing physical buttons with a virtual (or soft) keyboard on the screen. Simon's functionality included that of a phone and a PDA. One limitation of this technology was that it could not differentiate pen contact from finger contact[6].

In 1997, Alias|Wavefront developed the T3, which allowed the user to perform a wider range of gestures, including navigating, panning, rotating, and zooming, and which responded quickly to user input[7]. Additionally, the user could combine these basic movements to form more complex ones. In the late 1990s, the company FingerWorks began producing multi-touch, gesture-based input devices; it was later bought by Apple, which used FingerWorks-designed technologies in its multi-touch devices.

In 2004, the first commercially available transparent multi-touch capable screen was released. This device, the Lemur from JazzMutant, was targeted towards musicians[8][1].

In 2007, two commercially successful multi-touch devices were introduced: the Apple iPhone and Microsoft Surface. The iPhone was a smart phone that used multi-touch technology and was capable of detecting two points of contact simultaneously. The iPhone was also capable of recognizing basic gestures including, but not limited to, the pinch and the swipe. The phone used a capacitive-sensing touch screen[9][1]. The iPhone was geared toward the general public and was integral in making multi-touch technology ubiquitous. Microsoft Surface was a tabletop display able to sense multiple user touches and gestures simultaneously on a single surface. Because of its price of $14,000, it was geared toward corporations and institutions, where it was successful. Microsoft Surface used optical technology to detect input[10].

  1. Buxton, Bill. Multi-Touch Systems that I Have Known and Loved. <http://www.billbuxton.com/multitouchOverview.html>
  2. Status of PLATO IV.
  3. VIDEOPLACE—an artificial reality.
  4. A multi-touch three dimensional touch-sensitive tablet. <http://doi.acm.org/10.1145/1165385.317461>
  5. The DigitalDesk calculator: tangible manipulation on a desk top display.
  6. Manual deskterity: an exploration of simultaneous pen + touch direct input.
  7. The design of a GUI paradigm based on tablets, two-hands, and transparency. <http://doi.acm.org/10.1145/258549.258574>
  8. <http://www.jazzmutant.com/behindthelemur.php>
  9. iPhone Design. Web. 8 Aug. 2010. <http://www.apple.com/iphone/design/>
  10. What is Microsoft Surface. Web. 8 Aug. 2010. <http://www.microsoft.com/surface/en/us/Pages/Product/WhatIs.aspx>