Talk:Multi-touch interface/archive history: Difference between revisions

From Citizendium
Latest revision as of 04:56, 13 September 2024

This is an archive of the Multi-touch_interface history section. This page is linked from the Talk:Multi-touch_interface page.


One of the first and most significant inventions leading to multi-touch technology was a touch screen able to recognize a single touch. The first accepted example of this technology is the PLATO IV, developed in the 1970s by the University of Illinois Computer-based Education Research Laboratory. PLATO IV was a computer-assisted educational device that recognized single-touch, non-pressure-sensitive input from the user. The touch panel consisted of a 16 × 16 grid of sensors and infrared light-emitting diodes around the plasma panel, allowing the computer to identify a touch in any of the 256 regions (Sherwood, 1972)[1].

In 1983, Myron Krueger completed a prototype of VIDEOPLACE, which allowed a user to interact with images in a vision-based system via a set of predefined gestures. Though the system did not rely on touch, the significance of this work was that it could recognize a number of gestures, such as pointing, pinching, and dragging[2].

In 1985, the first multi-touch tablet using capacitive touch screens was developed by the Input Research Group at the University of Toronto. This tablet could sense more than one point of contact at a time, determining each point's location and measuring its degree of contact. The additional significance of this tablet, at the time, was its method of scanning user input and its ability to detect contact with a high degree of resolution[3].

In 1991, Rank Xerox EuroPARC developed a multi-touch desktop display that aimed to give the user's desk and workspace the properties of an electronic workspace. The significance of this project is that rather than using capacitive touch screens, it used a computer-controlled camera combined with a projector above it to sense user input. In this setup, the camera detected the location of the user's input and what they were pointing to, while the projector displayed feedback and electronic objects onto the surface[4].

In 1992, IBM and Bell South released the first touch screen smart phone, called Simon. Input relied on both stylus and touch, replacing physical buttons with a virtual (or soft) keyboard on the screen. Simon's functionality included that of a phone and a PDA. One limitation of this technology was that it could not differentiate pen contact from finger contact[5].

In 1997, the T3 was invented by Alias|Wavefront. It supported additional gestures, including navigating, panning, rotating, and zooming, and responded quickly to user input[6]. Additionally, the user could combine these basic movements to form more complex ones.

In 1998, FingerWorks made a number of multi-touch sensing devices that could recognize a variety of multi-touch inputs. Its first significant invention relevant to multi-touch was a keyless keypad that allowed users to press keys without exerting as much pressure as they would on a normal keyboard. It also supported a number of gestures. This device was primarily successful among people who suffered from carpal tunnel syndrome or other repetitive stress injuries, allowing them to interact with the computer[7]. The company was later bought by Apple, which used FingerWorks-designed technologies in its multi-touch devices.

In 2004, the first commercially available transparent multi-touch screen was released. This device, the Jazz Mutant, was targeted at musicians[8].

In 2007, two commercially successful multi-touch devices were introduced: the Apple iPhone and the Microsoft Surface. The iPhone was a smart phone that used multi-touch technology and could detect two points of contact simultaneously. It could also recognize basic gestures including, but not limited to, the pinch and swipe. The phone used a capacitive-sensing touch screen[9]. The iPhone was geared toward the general public and was integral in making multi-touch technology ubiquitous. Microsoft Surface was a table surface able to sense multiple user touches and gestures simultaneously on a single surface. Because of its price of $14,000, it was geared toward corporations and institutions, where it was successful. Microsoft Surface used optical technology to detect input[10].