This position paper describes the authors' vision of the future of computing. Once Ubiquitous Computing matures as a new paradigm, three major issues arise: how these systems can interact with people, environments and other systems; how the information they collect can be represented so that it is actually useful; and how the systems can use this information to make intelligent inferences over context. Nontraditional interfaces, Semantic Web technologies and Multi-Agent Systems are discussed as possible solutions to these problems.

1. Computing in the 21st Century

Computers are everywhere. Data from [The Economist 2008] estimates that in 2009 there was, on average, more than one personal computer for every five people in the world. This estimate, however, only considers what we nowadays call a computer. The truth is that we are all surrounded by computers, but they are located inside things we wouldn't normally call a computer. The mass production of electronic circuitry has enabled the augmentation of everyday devices such as mobile phones, video game consoles, vehicle control systems, television sets and household appliances. To improve the user experience, these devices come with low-cost, low-power, multi-functional embedded sensors, actuators and microcontrollers that collect information from the user and the environment, process it, and give some feedback. Considering that [Barr 2006] states that less than 1% of the 9 billion microprocessors manufactured each year find their way into multi-application programmable computers, one can only imagine what sorts of devices might ship with some kind of embedded system in the near future.

Once sensors become available in a wide spectrum of devices, there will also be a trend toward tagging the objects those devices sense and control. One of the most promising technologies in this respect is Radio Frequency Identification (RFID). [Glover 2007] defines it as any identification system in which an electronic device that uses radio frequencies or magnetic field variations to communicate is attached to an item. [Roussos 2008] adds the ability to automatically identify objects, locations and individuals to computing systems without any need for human intervention. An RFID tag costs less than a dollar, and new production technologies (such as nanotechnology) promise to drive prices even lower. Nowadays, bar codes tag classes of products (like a milk carton), but RFID will enable item-level tagging (like the specific milk carton you bought yesterday), carrying information specific to that item (when it was produced, when it expires, which farm it came from, even which cows it came from!).

According to the situation, devices will be interconnected, forming a dense network of all sorts of appliances. The technology that will enable the addressing of all these nodes is Internet Protocol version 6 (IPv6). IPv6 uses addresses 128 bits long, four times longer than the 32-bit addresses of IPv4. Hence, the IPv6 address space supports 2^128 addresses (approximately 3.4 x 10^38 addresses), which will permit every object around us to have its own IP address, forming the Internet of Things.

There is also a movement towards empowering people to design and implement devices themselves. Low-cost prototyping kits such as Arduino and Wiring enable people from backgrounds other than engineering to build electronic circuits and create functional devices.
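To give a concrete flavour of what these kits enable, below is a minimal Arduino-style sketch that reads an ambient light sensor and switches an LED on in the dark. The pin assignments and the threshold are illustrative assumptions rather than values from any particular project:

```cpp
// Hypothetical example: pin numbers and threshold are assumptions.
const int SENSOR_PIN = A0;      // photoresistor voltage divider on analog pin 0
const int LED_PIN = 13;         // on-board LED on most Arduino boards
const int DARK_THRESHOLD = 300; // raw ADC value below which we consider it dark

void setup() {
  pinMode(LED_PIN, OUTPUT); // drive the LED pin as a digital output
  Serial.begin(9600);       // report readings over the serial port
}

void loop() {
  int light = analogRead(SENSOR_PIN);  // 0..1023 reading from the sensor
  Serial.println(light);               // let a host computer log the value
  // Actuate: switch the LED on when the environment gets dark.
  digitalWrite(LED_PIN, light < DARK_THRESHOLD ? HIGH : LOW);
  delay(100);                          // sample roughly ten times per second
}
```

Variations of this read-process-actuate loop are the basic building block of most of the embedded devices discussed in this section.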
According to its website (www.arduino.cc), Arduino is 'an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software and it's intended for artists, designers, hobbyists and everyone interested in creating interactive objects or environments.' Similarly, on its website (www.wiring.org.co), Wiring is defined as 'an open source programming environment and electronics I/O board for exploring the electronic arts, tangible media, teaching and learning computer programming and prototyping with electronics.' Supporting this movement there is also a huge community of do-it-yourself enthusiasts who gather around websites like Instructables.com and Makezine.com and offer tutorials and blueprints for all kinds of projects. We should expect to see very interesting devices created by people from otherwise unexpected backgrounds.

All this technological movement is in accordance with Mark Weiser's vision of the major trends in computing. According to [Weiser 1996], computing history can be divided into four eras. First came the Mainframe Era, when many people shared one computer, which was mostly run by experts behind closed doors. After that came the Personal Computer Era, when each person had his or her own computer. Then we entered the transition to the next era: the Internet and Distributed Computing Era, when computers were still personal but connected with each other. We are now stepping into the Ubiquitous Computing Era, when many computers will share each one of us. As [Weiser 1993] defines it, Ubiquitous Computing (UbiComp, for short) is the method of enhancing computer usage by making many computers available throughout the physical environment while making them effectively invisible to the user. [Poslad 2009] defines it as information and communication technology systems that enable information and tasks to be made available everywhere, supporting intuitive human usage while appearing invisible to the user. This new paradigm of computing will evoke new forms of interaction. In the next section, we show some of the current research directions in the area.

2. Post-Desktop Interaction

In the current model of Human-Computer Interaction (HCI), the interaction designer only has to deal with the user and the system. However, when computing becomes situated, a new dimension is added to the equation: the environment. [Poslad 2009] lists several types of interaction in this framework: human-to-human interaction (HHI), human-computer interaction (HCI), human-physical world interaction (HPI) and computer-physical world interaction (CPI).

As everyday devices become augmented and interconnected, new interfaces will arise, exploring all of our senses. [Kortum 2008] lists some of them: haptic, gesture, locomotion, auditory, speech, interactive voice response, olfactory and taste. Haptic interfaces provide feedback through the sensation of touch. Such interfaces use a manipulator, like the PHANToM desktop haptic interface, to control a virtual or physical environment, and the device provides the user with realistic touch sensations [Gupta 2008]. Gesture interfaces use facial expressions and hand movements as input and can be implemented with mechanical, tactile and computer vision technologies [Nielsen 2008]. Locomotion interfaces enable users to move through virtual spaces while sensing that they are moving in the physical world.
They involve large-scale movement and navigation, in contrast to gesture interfaces, which involve small-scale movements [Whitton 2008]. Auditory interfaces use sound as a means of feedback. They have been in use for a long time, but new challenges have appeared: how to present information to visually impaired people, how to provide an additional information channel for people whose eyes are busy with a different task, how to alert people to errors or emergencies, and how to provide information on devices with limited capacity to display visual information. All of this must be achieved while minimizing problems like annoyance, loss of privacy, auditory overload, interference, low resolution, impermanence and lack of familiarity [Peres 2008]. Speech interfaces use voice recognition systems as a means of input: the system must capture what the user has said and decode it into machine-understandable data [Hura 2008]. On the other side of the interaction are interactive voice response interfaces, in which a pre-recorded or machine-generated voice is the means of feedback to the user. Olfactory interfaces involve scent as input or output. They include devices that provide users with information through smell; these smells can be generated by vaporizing and blending odours [Yanagida 2008]. They can also sense smells to make inferences about the environment. Taste interfaces simulate tastes like sweetness, bitterness, sourness, saltiness and the lesser-known umami taste. Challenges in designing this type of interface include how to sense the taste perceived by the tongue and how to simulate food textures [Iwata 2008].

With these types of interfaces, input can be combined with output, as in touch screens (where the position of contact of the finger or pen with the screen is detected and used as input, and the image displayed on the screen is used as output); tangible interfaces (interfaces that augment physical devices to receive input and provide feedback); wearable computer interaction (clothes, garments and accessories with embedded systems); and others.

As important as designing new types of interfaces is combining them to enhance the user experience. No single one of these interfaces will be adequate for all situations, so the skilful designer will know when and where to use and combine them. While we were stuck with the GUI paradigm, interaction designers struggled to fit functionality into the WIMP (window, icon, menu, pointing device) format, often with poor results. In the near future, we should expect to see more natural, implicit and adequate interfaces: the right tool for the job. But as new ways to interact appear, how can the amount of input and output involved in the interactive process not overload the user's attention and cognitive capacity?

3. Information to Empower, not Overwhelm

Naturally, one can imagine the amount of information that will be generated by a UbiComp infrastructure. Sensors will be everywhere acquiring and sending data, and microcontrollers will be everywhere processing it.
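One way to keep this flood of data from overwhelming people is to condense many low-level readings into a single higher-level context event before anything reaches the user. The following plain C++ sketch illustrates the idea; the Reading type, the averaging window and the 'room is hot' rule are all hypothetical:

```cpp
#include <iostream>
#include <numeric>
#include <string>
#include <vector>

// Hypothetical raw sample from one temperature sensor, in degrees Celsius.
struct Reading {
    std::string sensorId;
    double celsius;
};

// Condense a window of raw readings into one human-meaningful context event,
// instead of forwarding every individual sample to the user.
std::string summarize(const std::vector<Reading>& window) {
    if (window.empty()) return "no data";
    double sum = std::accumulate(window.begin(), window.end(), 0.0,
        [](double acc, const Reading& r) { return acc + r.celsius; });
    double avg = sum / window.size();
    // Illustrative rule: surface only the derived fact, not the raw stream.
    return avg > 28.0 ? std::string("room is hot")
                      : std::string("room is comfortable");
}

int main() {
    // Three hypothetical sensors reporting from the same room.
    std::vector<Reading> window = {
        {"kitchen-1", 29.5}, {"kitchen-2", 30.1}, {"kitchen-3", 28.7}};
    std::cout << summarize(window) << '\n';  // prints: room is hot
}
```

The point of this design is that the user (or an agent acting on the user's behalf) sees one meaningful statement about the room rather than a stream of raw numbers.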
References

[1] Albrecht Schmidt et al. There is more to context than location. Comput. Graph., 1999.
[2] John Seely Brown et al. The coming age of calm technology, 1997.
[3] Thomas R. Gruber et al. A translation approach to portable ontology specifications. Knowl. Acquis., 1993.
[4] N. F. Noy et al. Ontology Development 101: A Guide to Creating Your First Ontology, 2001.
[5] Albrecht Schmidt et al. Implicit human computer interaction through context. Personal Technologies, 2000.
[6] Albrecht Schmidt et al. Multi-Sensor Context-Awareness in Mobile Devices and Smart Artifacts. Mob. Networks Appl., 2002.
[7] Huajun Chen et al. The Semantic Web. Lecture Notes in Computer Science, 2011.
[8] Hiroo Iwata et al. Haptic interfaces, 2002.
[9] Leslie Pack Kaelbling et al. Action and planning in embedded agents. Robotics Auton. Syst., 1990.
[10] Michael Barr et al. Programming embedded systems with C and GNU development tools (2nd ed.), 2006.
[11] Susan L. Hura et al. Voice User Interfaces. Encyclopedia of Big Data, 2022.
[12] Bill Glover et al. RFID essentials, 2006.
[13] Philip Kortum et al. HCI Beyond the GUI: Design for Haptic, Speech, Olfactory, and Other Nontraditional Interfaces, 2008.
[14] Harry Chen et al. Semantic Web in the context broker architecture. Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications, 2004.
[15] Mark Weiser et al. Some computer science issues in ubiquitous computing. CACM, 1993.
[16] Stefan Poslad et al. Ubiquitous Computing: Smart Devices, Environments and Interactions, 2009.
[17] Donald D. Cowan et al. Agents in object-oriented software engineering. Softw. Pract. Exp., 2004.