March/April 2008, vol. 23, pp. 15-18. DOI: 10.1109/MIS.2008.19

Keywords: Ambient Intelligence, Artificial Intelligence, Computer Vision, Data Mining, Human Computer Interaction, Incompleteness And Uncertainty, Intelligent Agents, Knowledge Based Systems, Knowledge Representation, Machine Learning, Multiagent Systems, Natural Language Processing, Neural Networks, Planning, Robots, Speech Recognition

Authors
Daniel Shapiro, Institute for the Study of Learning and Expertise
Juan Carlos Augusto, University of Ulster
Carlos Ramos, Polytechnic of Porto

Abstract
Ambient intelligence (AmI) deals with a new world of ubiquitous computing devices, where physical environments interact intelligently and unobtrusively with people. These environments should be aware of people’s needs, customizing to their requirements and forecasting their behavior. AmI environments can be diverse, such as homes, offices, meeting rooms, schools, hospitals, control centers, vehicles, tourist attractions, stores, sports facilities, and music devices. Artificial intelligence research aims to include more intelligence in AmI environments, allowing better support for humans and access to the essential knowledge for making better decisions when interacting with these environments. This article, which introduces a special issue on AmI, views the area from an artificial intelligence perspective.
The European Commission’s Information Society Technologies Advisory Group (ISTAG) introduced the concept of ambient intelligence.1–3 Basically, AmI refers to a digital environment that proactively, but sensibly, supports people in their daily lives.4 IEEE Intelligent Systems was one of the first publications to emphasize AmI’s importance, with Nigel Shadbolt’s editorial in the July/August 2003 issue.5 Other concepts such as ubiquitous computing, pervasive computing, context awareness, and embedded systems overlap with AmI, but there are distinctive differences.6
The ISTAG reports define AmI at a conceptual level and identify important technologies for achieving it. In Ambient Intelligence: From Vision to Reality, ISTAG refers to these AmI components: smart materials, microelectromechanical systems and sensor technologies, embedded systems, ubiquitous communications, I/O device technology, and adaptive software. 3 The report also mentions these intelligence components: media management and handling, natural interaction, computational intelligence, context awareness, and emotional computing. In our opinion, achieving AmI will require borrowing much more from artificial intelligence.
So, artificial intelligence is important for AmI, but why is AmI important for AI? We claim that AmI is a new challenge for AI and is the next step in AI’s evolution.
Figure 1 illustrates AI’s evolution. In the beginning, researchers applied AI to hardware, such as Marvin Minsky and Dean Edmonds’ SNARC (Stochastic Neural Analog Reinforcement Computer). Neural nets were one of the technologies implemented on such systems. The MYCIN expert system is a good example from AI’s second phase, where AI centered on computers. The third phase focused on networks; a landmark application here was American Express’s Authorizer’s Assistant. During the ’90s, the Web boom produced several search engines and recommender systems using intelligent agents and, more recently, ontologies.
Figure 1 The evolution of artificial intelligence.
So, what comes next? Current trends point to incorporating intelligence into our environments. AmI is the way to achieve this.
Today, some systems treat AmI like a buzzword, incorporating only a limited amount of intelligence. Some researchers are building AmI systems without AI, concentrating on the operational technologies, such as sensors, actuators, communications, and ubiquitous computing. However, sooner or later, that low level of intelligence will be a clear drawback. AmI’s acceptability will result from a balanced combination of operational technologies and AI.
Figure 2 shows our vision of AmI, highlighting AI’s importance. AmI environments might be very diverse—for example, your home, car, or office, or a museum you’re visiting. AmI systems are inserted in these environments, receiving information, interacting with users, performing elaborate reasoning, and ordering actions on the environment. Sensing captures information, through humans using their senses or through automatic systems such as ultrasonic devices, cameras, and microphones. Action on these environments occurs through human decisions and actions and through automatic systems such as robots and agents. In addition, persons or agents not directly interacting with the system might change the environment, and unexpected events might occur.
Figure 2 The ambient-intelligence vision from an artificial intelligence perspective.
To deal with all this, AmI systems employ the operational technologies we mentioned previously (the operational layer). And, if intelligence is to be more than just a buzzword, these systems will incorporate AI methods and techniques (the intelligent layer).
In AmI environments and scenarios, AI methods and techniques can help accomplish the following important tasks.
The first of these tasks is interpreting what is happening in the environment; the relevant technologies analyze various sensing inputs.
Humans most often interact through written or spoken language. So, it’s clear that they will also expect this kind of interaction with AmI environments. Speech recognition and natural language processing are different and complementary problems, using different techniques.
Speech recognition obtains an electric signal from a microphone. The first step is identifying phonemes in this signal, which involves signal processing and pattern recognition. The next step is joining phonemes and identifying words. Several speech recognition systems are available and are more or less successful, depending on how the user speaks.
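The second step the text describes, joining recognized phonemes into words, can be sketched as a segmentation problem over a pronunciation lexicon. This is a minimal illustration, not a real recognizer; the phoneme symbols and lexicon entries are invented for the example.

```python
# Toy sketch: joining recognized phonemes into words using a pronunciation
# lexicon. Phoneme symbols and lexicon entries are invented for illustration.
LEXICON = {
    ("T", "ER", "N"): "turn",
    ("AA", "N"): "on",
    ("DH", "AH"): "the",
    ("L", "AY", "T"): "light",
}

def phonemes_to_words(phonemes):
    """Segment a phoneme sequence into words by dynamic programming."""
    n = len(phonemes)
    best = [None] * (n + 1)  # best[i] = word list covering phonemes[:i]
    best[0] = []
    for i in range(1, n + 1):
        for j in range(i):
            chunk = tuple(phonemes[j:i])
            if best[j] is not None and chunk in LEXICON:
                best[i] = best[j] + [LEXICON[chunk]]
                break
        # best[i] stays None if no segmentation covers phonemes[:i]
    return best[n]

print(phonemes_to_words(["T", "ER", "N", "AA", "N", "DH", "AH", "L", "AY", "T"]))
# → ['turn', 'on', 'the', 'light']
```

Real systems replace the exact-match lexicon with probabilistic models (for example, hidden Markov models) that score many competing segmentations.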
Natural language input is a written sequence, resulting from a speech recognition system or obtained from a keyboard or even a written document. Natural language processing aims to understand this input. The first step is syntax analysis, followed by semantic analysis. Knowledge representation plays an important role in NLP. Automatic-translation systems are one of the most studied areas of NLP, using statistical and knowledge-based approaches.
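The two NLP steps just described, syntax analysis followed by semantic analysis, can be illustrated with a deliberately tiny pipeline. The vocabulary, tags, and command-frame format below are invented for the sketch; real systems use full parsers and richer knowledge representations.

```python
# Toy NLP pipeline: syntax analysis (dictionary-based part-of-speech
# tagging) followed by semantic analysis (mapping a verb-object pattern
# to a command frame). Vocabulary and frame format are invented.
POS = {"turn": "VERB", "on": "PART", "off": "PART",
       "the": "DET", "heating": "NOUN", "light": "NOUN"}

def syntax_analysis(sentence):
    """Tag each word with a part of speech (UNK if unknown)."""
    return [(w, POS.get(w, "UNK")) for w in sentence.lower().split()]

def semantic_analysis(tagged):
    """Extract an (action, object) frame from a tagged command."""
    verb = next((w for w, t in tagged if t == "VERB"), None)
    part = next((w for w, t in tagged if t == "PART"), None)
    noun = next((w for w, t in tagged if t == "NOUN"), None)
    if verb and noun:
        action = f"{verb}_{part}" if part else verb
        return {"action": action, "object": noun}
    return None

print(semantic_analysis(syntax_analysis("Turn off the heating")))
# → {'action': 'turn_off', 'object': 'heating'}
```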
Vision is humans’ richest sensorial input. So, the ability to automate vision is important. At its core, computer vision is a problem of geometric reasoning over images. Computer vision comprises many areas, such as image acquisition, image processing, object recognition (2D and 3D), scene analysis, and image-flow analysis. Computer vision can be used in different situations in AmI. For example, intelligent transportation systems can use it to identify traffic problems, traffic patterns, or approaching vehicles. Computer vision can also identify either human gestures to control equipment or human facial expressions to identify emotional states.
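One of the simplest low-level vision operations an AmI system might use is frame differencing: comparing consecutive camera frames to detect motion in a room. This sketch works on tiny hand-made grayscale frames; the thresholds are invented for illustration.

```python
# Minimal sketch of motion detection by frame differencing between two
# grayscale frames (lists of pixel-intensity rows). Thresholds are invented.
def motion_detected(prev, curr, pixel_delta=30, min_changed=3):
    """Return True if enough pixels changed between the two frames."""
    changed = sum(
        1
        for p_row, c_row in zip(prev, curr)
        for p, c in zip(p_row, c_row)
        if abs(p - c) > pixel_delta
    )
    return changed >= min_changed

frame_a = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame_b = [[10, 90, 90], [10, 90, 90], [10, 10, 10]]  # someone enters the scene
print(motion_detected(frame_a, frame_b))  # → True
```

Real vision pipelines add noise filtering, background modeling, and object recognition on top of primitives like this.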
The processing of data acquired by many other sensorial sources (for example, raw sensors, RFID, and GPS) can also benefit from AI techniques.
AmI environments involve real-world problems, which are characterized by incompleteness and uncertainty. Generally, we deal with information; some part of it might be correct, some part might be incorrect, and some part might be missing. The question is how to proceed with an elaborated reasoning process dealing with these information problems. To handle this situation, researchers have used many techniques, such as Bayesian networks, fuzzy logic, and rough sets.
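Of the techniques just listed, Bayesian reasoning is the easiest to sketch: updating the belief that a room is occupied after a noisy motion sensor fires. All the probabilities below are invented for illustration.

```python
# Sketch of handling sensor uncertainty with Bayes' rule: updating the
# belief that a room is occupied given that a noisy motion sensor fired.
# All probabilities are invented for illustration.
def posterior_occupied(prior, p_detect_given_occupied, p_detect_given_empty):
    """P(occupied | sensor fired), by Bayes' rule."""
    evidence = (p_detect_given_occupied * prior
                + p_detect_given_empty * (1 - prior))
    return p_detect_given_occupied * prior / evidence

belief = posterior_occupied(prior=0.3,
                            p_detect_given_occupied=0.9,   # true-positive rate
                            p_detect_given_empty=0.1)      # false-positive rate
print(round(belief, 3))  # → 0.794
```

A full Bayesian network chains many such updates over a graph of dependent variables; this single-node case shows the core computation.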
Knowledge representation is one of the most important areas in AI. Expert systems have achieved tremendous success in areas such as medicine, industry, and business. During the ’90s, with the strong development of the Internet and the birth of the Web, humans faced a critical problem. The amount of information became huge, and the mapping between information and knowledge became urgent. The AI community started paying attention to information retrieval, text mining, ontologies, and the Semantic Web. Early experience in intelligent systems development shows us that intelligence isn’t possible without knowledge; this is also true for AmI.
People expect agents to support features such as sensing capabilities, autonomy, reactive and proactive reasoning, social abilities, and learning. Multiagent systems emphasize social abilities, such as communication, cooperation, conflict resolution, negotiation, argumentation, and emotion. Multiagent systems rapidly became one of the main paradigms in AI, and after the Web boom, agents received even more attention.
Multiagent systems are especially good at modeling real-world and social systems, where problems can be solved in a concurrent and cooperative way without needing optimal solutions (for example, in traffic or manufacturing).
In AmI environments, agents are a good way to model, simulate, and represent meaningful entities such as rooms, cars, or even persons.
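Modeling a room as an agent can be sketched with the classic perceive-decide-act loop. The sensors, thresholds, and rules below are invented for illustration; a real system would add communication and learning on top.

```python
# Minimal sketch of an AmI entity (a room) modeled as an agent with a
# perceive-decide-act loop. Sensor names and rules are invented.
class RoomAgent:
    def __init__(self, name):
        self.name = name
        self.lights_on = False
        self.occupied = False
        self.dark = False

    def perceive(self, sensors):
        """Read the (simulated) sensor values into internal state."""
        self.occupied = sensors["motion"]
        self.dark = sensors["lux"] < 50  # arbitrary darkness threshold

    def decide(self):
        """Simple reactive rule: light the room only when occupied and dark."""
        return "lights_on" if self.occupied and self.dark else "lights_off"

    def act(self, action):
        self.lights_on = (action == "lights_on")
        return f"{self.name}: {action}"

room = RoomAgent("living room")
room.perceive({"motion": True, "lux": 20})
print(room.act(room.decide()))  # → living room: lights_on
```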
Planning assists problem solving by producing a plan of action to achieve a particular goal. AI planning deals with all the aspects of general planning. Plans can be established before they execute (offline) or while they execute (online). They can be deliberative (planning and executing what was planned without considering unexpected events), reactive (reacting to stimuli in a much more basic way), or hybrid (combining the best of deliberative and reactive policies).
Planning is particularly linked with intelligence. Convincing someone that a system is intelligent is difficult if that system can’t plan how to solve problems. Consequently, AmI environments must support planning to give intelligent advice to users. A clear example is in intelligent transportation systems—both inside vehicles, where intelligent driving systems will help drivers, and on the road, where route planning will consider constraints related to traffic, time, and cost.
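The route-planning case just mentioned can be sketched with Dijkstra's algorithm over a cost-weighted road graph. The roads and costs below are invented; real systems would fold in live traffic, time, and user constraints as additional cost terms.

```python
import heapq

# Sketch of cost-based route planning for intelligent transportation:
# Dijkstra's algorithm over an invented road graph with travel costs.
def plan_route(graph, start, goal):
    """Return (cost, path) for the cheapest route from start to goal."""
    queue = [(0, start, [start])]  # priority queue ordered by cost so far
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step_cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + step_cost, nxt, path + [nxt]))
    return None  # goal unreachable

roads = {"home": {"A": 5, "B": 2},
         "A": {"work": 4},
         "B": {"A": 1, "work": 9}}
print(plan_route(roads, "home", "work"))  # → (7, ['home', 'B', 'A', 'work'])
```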
Most often, planning is associated with some kind of optimization. Here, combining AI and operations research makes sense. Some computational-intelligence and bio-inspired methods such as genetic algorithms, ant colonies, particle swarm optimization, tabu search, and simulated annealing are useful.
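Simulated annealing, one of the methods just listed, can be sketched in a few lines: a random walk that always accepts improvements and occasionally accepts worse moves, with the acceptance probability shrinking as the "temperature" cools. The cost function and schedule below are invented for illustration.

```python
import math
import random

# Sketch of simulated annealing minimizing a simple one-dimensional
# cost function. The cooling schedule and parameters are invented.
def simulated_annealing(cost, x0, steps=5000, temp0=10.0):
    random.seed(0)  # deterministic run for the example
    x, best = x0, x0
    for step in range(1, steps + 1):
        temp = temp0 / step              # cooling schedule
        candidate = x + random.uniform(-1, 1)
        delta = cost(candidate) - cost(x)
        # accept improvements always; worse moves with probability e^(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if cost(x) < cost(best):
            best = x
    return best

minimum = simulated_annealing(lambda x: (x - 3) ** 2, x0=0.0)
print(round(minimum, 2))  # converges near the true minimum at x = 3
```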
Machine learning has received attention from the AI community from the beginning. Since the ’70s, neural networks have had great success, being applied in many real-world problems such as classification. Techniques that use more high-level descriptions—for example, inductive learning, case-based reasoning, and decision-tree-based methods—have also seen success.
During the ’80s, the term “data mining” started appearing. Many database researchers have used this term to refer to machine learning techniques (together with some statistics methods such as k-means) employed in knowledge discovery. Data mining constitutes one phase of knowledge discovery (selection, cleaning, and preprocessing are phases before data mining, while interpretation and evaluation come after data mining).
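The k-means method named above can be sketched in pure Python: alternately assign each point to its nearest center, then move each center to the mean of its cluster. The one-dimensional sensor readings are invented for illustration.

```python
import random

# Sketch of k-means clustering one-dimensional sensor readings into
# k groups. The readings and k are invented for illustration.
def k_means(points, k, iterations=20):
    random.seed(1)  # deterministic initialization for the example
    centers = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:  # assignment step: each point to its nearest center
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        for i, cluster in enumerate(clusters):  # update step: recompute means
            if cluster:
                centers[i] = sum(cluster) / len(cluster)
    return sorted(centers)

readings = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]
print([round(c, 1) for c in k_means(readings, k=2)])  # → [1.0, 10.1]
```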
Nowadays, machine learning is widely used, so AmI systems will likely also rely on it. One requirement for AmI is to learn by observing users. Several systems understand user commands, but they’re not intelligent enough to avoid doing things that the user doesn’t want. Basic machine learning methods will enable AmI systems to learn by observing users, thus making these systems more acceptable to them.
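The "learn by observing users" idea above can be sketched with simple frequency counting: record which action the user takes in each observed context, then predict the most frequent one. The contexts and actions are invented for illustration; real systems would use richer models.

```python
from collections import Counter, defaultdict

# Sketch of learning user habits by observation: count the action the
# user takes in each context, then predict the most frequent one.
# Contexts and action names are invented for illustration.
class HabitLearner:
    def __init__(self):
        self.counts = defaultdict(Counter)  # context -> action frequencies

    def observe(self, context, action):
        self.counts[context][action] += 1

    def predict(self, context):
        """Most frequently observed action in this context, or None."""
        observed = self.counts.get(context)
        return observed.most_common(1)[0][0] if observed else None

learner = HabitLearner()
for _ in range(5):
    learner.observe(("evening", "home"), "lights_dim")
learner.observe(("evening", "home"), "lights_off")  # one-off exception
print(learner.predict(("evening", "home")))  # → lights_dim
```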
AmI systems should be able to interact intelligently with humans. Such interaction requires context awareness. In AmI systems, context awareness will involve such factors as mixed-initiative interfaces, adapting to users and situations, learning by observing users, consciousness of the current situation, and scalable intelligence. We’ve already discussed interaction through natural language and gestures.
Because AmI systems deal with humans, they will need to consider all pertinent social and emotional factors. For example, a person might not be interested in watching his or her favorite TV program, a soccer game, because friends who don’t like soccer are visiting (a social aspect) or because he or she is in a bad mood (an emotional aspect). Current AI research on affective computing and social computing is important for incorporating such capabilities into AmI systems.
As we mentioned before, automated devices such as robots could perform actions. Cognitive-robotics research can provide benefits for AmI environments such as smart homes. This is especially true when persons live alone, are elderly, or have health problems. The creation of intelligent robots that can perform several tasks or just act as companions is important. However, in the current state of the art, we can create robots that operate well only for specific tasks. Creating robots with the flexibility to do different tasks, as humans can do, is too complex. This limitation is due primarily to physical constraints.
AmI can’t be achieved without AI. So, AmI environments provide the next stimulating challenge for the AI community. Here we’ve mentioned many AI methods and techniques useful for AmI. Our discussion complements previous attempts to highlight AI’s importance to AmI. 7–10 The other articles in this special issue present some of the AI community’s current research in AmI prototypes and systems. This research involves AI methods and techniques such as multiagent systems, case-based planning, fuzzy systems, logic programming, hidden Markov models, and ontologies. The target environments include hospitals, geriatric residences, homes, workplaces, cultural-heritage sites, and tourist attractions.
Carlos Ramos is the director of GECAD (the Knowledge Engineering and Decision Support Research Centre) and coordinator professor at the Polytechnic of Porto’s Institute of Engineering. His main areas of interest are ambient intelligence, knowledge-based systems, decision support systems, multiagent systems, and planning. He received his PhD in electrical and computer engineering from the University of Porto. He’s a member of the IEEE. Contact him at ISEP, Rua Dr. António Bernardino de Almeida, 431, 4200-072 Porto, Portugal; firstname.lastname@example.org.

Juan Carlos Augusto is a lecturer at the University of Ulster’s School of Computing and Mathematics. His research interests are ambient intelligence and smart environments. He’s the editor in chief of the book series Ambient Intelligence and Smart Environments (IOS Press). He received his PhD in computer science from the Universidad Nacional del Sur. He’s a member of the AAAI and ACM. Contact him at the School of Computing and Mathematics, Univ. of Ulster and CSRI, Newtownabbey, BT37 0QB, UK; email@example.com.

Daniel Shapiro is the executive director of the Institute for the Study of Learning and Expertise (ISLE) and the president of Applied Reactivity (ARi). He works on cognitive-agent architectures at ISLE and on applications of reactive-control and discrete-logic-control systems that learn at ARi. He’s an affiliate of the Computational Learning Laboratory and formerly a senior researcher at the Center for the Study of Language and Information, both at Stanford University. He received his PhD in management science and engineering from Stanford University. Contact him at ISLE, 2164 Staunton Ct., Palo Alto, CA 94306; firstname.lastname@example.org.