Introduction to AI

Alexander Neville

2023-06-01

Artificial intelligence is the study and creation of machine algorithms which make observations and inferences in order to make and act upon decisions, as a human or animal might. In wider society, machine learning and AI are two often conflated terms. ML is generally considered to be a sub-discipline of AI, concerned with improving an algorithm's performance using past experience.

Philosophy of AI

There is no universal consensus on the definition of AI, a relatively ambiguous term; the dispute revolves around the meaning of intelligence. It could be argued that the explicit goal of artificial intelligence is to simulate human performance. If a more abstract form of intelligence - artificial or otherwise - is adopted, then AI is not required to simulate human performance at all. Philosophy and logic attempt to codify the laws of thought; rationality and correct reasoning according to these rules could be achieved by a human or artificially by a machine. In some cases, intelligence is said to be an internal property, as in a thought process or the reasoning employed, while in others it is evidenced in outward behaviour. These two distinctions - human versus rational, and thought versus behaviour - form four common interpretations of AI: thinking humanly, acting humanly, thinking rationally and acting rationally.

The Turing test is passed if a human evaluator is unable to tell apart a human and an AI algorithm through interrogation. The effectiveness, or outward intelligence, of the algorithm is measured by its ability to generate human-like responses (its behaviour), independent of the underlying thought process. Many academics additionally agree that the most pragmatic approach to AI is not to simulate the human thought process directly. The often-cited analogy is that artificial flight was not achieved by imitating the flight of birds exactly (Russell & Norvig 2021). Thus, artificial intelligence is widely agreed to be the study and creation of algorithms capable of acting rationally, or working to reach the best possible outcome. Sometimes this is called the standard model of AI.

Agents

An agent is an entity, of any type, capable of observing its environment and acting upon it. The concept of an agent is not strict; it is simply a method of introspection or analysis of a system. An agent's methods of perceiving its environment are called sensors, while its methods of interacting with its environment are called actuators. Just like the agent itself, the environment of an agent could be anything, though it is usually limited to the things the agent perceives or acts upon.

As an example, a person's environment is the Earth; their sensors include eyes and ears and their actuators include limbs and digits. A machine might have cameras, microphones and thermometers as sensors and a collection of motors as actuators, acting within the environment of a warehouse. A computer program has sensors and actuators in the form of common IO interfaces, operating on a computer file system as its environment.
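
As a minimal sketch, this sensor/actuator view can be expressed as an interface; the class and method names below are illustrative, not taken from any particular library.

class Agent:
    """An entity that perceives an environment and acts upon it."""

    def sense(self, environment):
        # Sensors: return the current percept drawn from the environment.
        raise NotImplementedError

    def act(self, percept):
        # Actuators: choose an action to perform on the environment.
        raise NotImplementedError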

The term percept refers to the information an agent's sensors are currently perceiving, while an agent's percept sequence is the history of all the information the agent has perceived. The behaviour of an agent is given by the agent function, a mathematical function which maps every possible percept sequence (an infinite set) to an action. The binary relation which constitutes the agent function is an external model of an agent's behaviour. The action taken by an agent at any point is determined by a concrete agent program, rather than a mathematical model. An agent program typically takes the current percept as an argument and returns an action.
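
A minimal sketch of this distinction, assuming a small, hypothetical set of percepts and actions: the agent function is modelled here as a lookup table over percept sequences, while the agent program is the code that consults it.

def make_table_driven_agent(table):
    percepts = []  # the percept sequence observed so far

    def agent_program(percept):
        percepts.append(percept)
        # The agent function maps the whole percept sequence to an action.
        return table.get(tuple(percepts), "no-op")

    return agent_program

agent = make_table_driven_agent({("dirty",): "clean", ("clean",): "move"})
print(agent("dirty"))  # -> clean

The table makes the infinite agent function tractable only by truncation; real agent programs compute actions rather than storing them all.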

Rationality

In order to exhibit intelligence, an agent must attempt to make the correct decision. The performance of an agent is evaluated by a performance measure. A rational agent selects the action expected to maximise its performance measure over a sequence of actions and states. An omniscient agent knows the exact outcome of its actions; there is no uncertainty in its behaviour. A rational agent is not omniscient and will not perform perfectly, but because it maximises expected, rather than actual, performance, in most cases it should perform well.
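
As an illustrative sketch of this idea, expected-performance action selection can be written directly; the outcome model and its probabilities are assumptions for demonstration, not a prescribed design.

def select_rational_action(actions, outcomes, performance):
    # outcomes(action) yields (probability, resulting_state) pairs.
    def expected_performance(action):
        return sum(p * performance(s) for p, s in outcomes(action))
    # A rational agent picks the action with the best expected score.
    return max(actions, key=expected_performance)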

Agent Structure

Simple reflex agents respond directly to the current percept, ignoring the rest of the percept sequence. These agents struggle in partially observable environments, often becoming stuck in an infinite loop. Sometimes randomisation is used to break these loops, but this type of agent is fundamentally unable to maintain its own understanding of the environment.

def make_simple_reflex_agent(rules):
    def agent_program(percept):
        # No state is kept: only the current percept is interpreted.
        state = interpret(percept)
        # Match a fixed condition-action rule to the state description.
        return lookup_action(state, rules)
    return agent_program
The structure of a reflex agent

Models and Goals

Model-based reflex agents maintain internal state and a model of their interaction with the environment, facilitating rational behaviour in environments that are only partially observable. The agent's representation of the environment is derived from its sensor model - how its percepts reflect the state of the environment - and its transition model - the effect of the agent's actions on the environment.

def make_model_based_reflex_agent(rules, transition_model, sensor_model):
    state, last_action = None, None

    def agent_program(percept):
        nonlocal state, last_action
        # Update the internal state from the previous state, the last
        # action taken, the new percept and the two environment models.
        state = interpret(state, last_action, percept,
                          transition_model, sensor_model)
        last_action = lookup_action(state, rules)
        return last_action

    return agent_program
The structure of a model-based agent

Basic reflex agents implement a form of if-then behaviour, dictated by the rules that relate the agent's current understanding of the environment to actions. Model-based agents extend the capabilities of simple reflex agents by maintaining an internal representation of the wider environment and having some understanding of how their percepts and actions represent or affect it. Goal-based agents further extend the structure of model-based agents by requiring information about desirable goal states. These agents combine the transition model with the current state to select actions which achieve the chosen goal, as sketched below.

The structure of a goal-based agent
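
A hedged sketch of goal-based selection, assuming a transition_model(state, action) that predicts the resulting state and a goal test; all names are illustrative.

def goal_based_action(state, actions, transition_model, is_goal):
    for action in actions:
        # Predict the outcome of each action and test it against the goal.
        if is_goal(transition_model(state, action)):
            return action
    return None  # no single action suffices; search or planning is needed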

An agent's utility function is an internalisation of the performance measure. While many action sequences may satisfy a goal, utility-based agents seek to maximise their utility function and hence the performance measure (optimisation). A utility-based agent has some internal sense of what the performance measure is, though this is not required for an agent to be rational. In very simple scenarios, rational behaviour can be programmed into a reflex agent in the form of condition-action rules. More complex agents are generally more flexible and have the ability to learn and improve their performance. In each state, a utility-based agent is able to assess the desirability of the state resulting from an action, using its utility function.

The structure of a utility-based agent
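
Continuing the sketch above, utility-based selection replaces the goal test with a score; the utility function below is an assumed stand-in for the agent's internalised performance measure.

def utility_based_action(state, actions, transition_model, utility):
    # Score the predicted outcome of each action and pick the best.
    return max(actions, key=lambda a: utility(transition_model(state, a)))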

Learning Agents

In all the cases seen so far, an agent selects actions under certain conditions, but this model does not explain how the agent is constructed. Simple agents may be programmed explicitly to behave rationally. Another strategy for creating agents is learning. Learning agents are divided into a learning element and a performance element; the performance element dictates the actions the agent chooses, as before, while the learning element determines how the state and models of the performance element are modified.
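
An illustrative agent loop showing this division of labour; the environment interface and the feedback signal are assumptions for the sketch, not part of the original description.

def run_learning_agent(environment, performance_element, learning_element,
                       steps):
    percept = environment.reset()
    for _ in range(steps):
        # The performance element selects actions, as before.
        action = performance_element.act(percept)
        percept, feedback = environment.step(action)
        # The learning element modifies the performance element's
        # state and models in light of the feedback received.
        learning_element.update(performance_element, action, feedback)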

State Representation

Depending on the complexity of the problem, the representation of the current state varies. In the simplest case, the state is represented atomically: the state has no internal structure and is not composed of variables; its only property is its relationship with other states. Increasing in complexity, a factored representation divides each state into a set of variables with values. Being in such a state is a consequence of all these values combined; a change to one or more of them results in a different state. Structured representations are more complicated again, incorporating objects and the relationships between them, rather than just a set of individual properties.
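
The three representations can be contrasted with a small, hypothetical example; the variables, objects and relations below are illustrative only.

atomic_state = "S3"        # atomic: an opaque label with no structure

factored_state = {         # factored: named variables with values
    "x": 4,
    "y": 2,
    "battery": 0.8,
}

structured_state = {       # structured: objects plus the relationships
    "objects": ["robot", "box1", "table"],       # between them
    "relations": [("on", "box1", "table"),
                  ("holding", "robot", "box1")],
}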

Task Environment

The environment of an agent is the space in which it perceives and operates. The task environment of an agent is the "problem" which it is designed to solve. The task environment is composed of the performance measure, the environment itself and the agent's sensors and actuators. The task environment, and hence the required agent, can be categorised in a few key ways: fully or partially observable, single-agent or multi-agent, deterministic or nondeterministic, episodic or sequential, static or dynamic, and discrete or continuous (Russell & Norvig 2021).
