
Environment in Artificial Intelligence

An environment in Artificial Intelligence is the surroundings of the agent. The agent takes input from the environment through sensors and delivers its output to the environment through actuators.
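As a rough sketch (the names and interface here are illustrative, not from any particular library), this sense-act cycle can be written as a simple loop in Python:

    # Minimal, hypothetical agent-environment loop (illustrative names).
    def run(agent, env, steps):
        for _ in range(steps):
            percept = env.percept()   # input from the environment via sensors
            action = agent(percept)   # the agent program chooses an action
            env.execute(action)       # output to the environment via actuators

Here env is assumed to expose a percept() method (the sensors) and an execute() method (the actuators).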

Types of environments in Artificial Intelligence

  1. Fully Observable vs. Partially Observable
  2. Deterministic vs. Stochastic
  3. Episodic vs. Sequential
  4. Static vs. Dynamic
  5. Discrete vs. Continuous
  6. Single Agent vs. Multi-Agent

Fully Observable vs. Partially Observable

Fully observable – the agent can access the complete state of the environment at each point in time and can detect all aspects that are relevant to the choice of action. If the agent's sensors give it access to only part of the state at each point in time, the environment is partially observable.

Examples (partially observable):

  • A vacuum agent with only a local dirt sensor doesn't know the situation at the other square.
  • An automated taxi driver can't see what other drivers are thinking.
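A hedged sketch of the vacuum example, with hypothetical names: the true world state covers both squares, but the percept exposes only the agent's own square.

    # Hypothetical two-square vacuum world: the agent senses only its own square.
    world = {"A": "Dirty", "B": "Clean"}   # the full state, hidden from the agent
    agent_location = "A"

    def percept():
        # Partially observable: only the local dirt sensor's reading is
        # returned; the status of the other square stays unknown to the agent.
        return agent_location, world[agent_location]

    print(percept())   # ('A', 'Dirty') -- square B's status is not revealed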

Deterministic vs. Stochastic

If the next state of the environment is completely determined by the current state and the agent's current action, then we say the environment is deterministic; otherwise, it is stochastic.

  • The vacuum world is deterministic, but it becomes stochastic if dirt can appear at random (e.g., because of an unreliable suction mechanism); see the sketch below.
  • The taxi-driving environment is stochastic: one can never predict the behavior of traffic exactly.
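The difference shows up in the transition function. A minimal sketch, assuming a dict-based vacuum state as above; the probabilities are made up for illustration:

    import random

    def step_deterministic(state, square):
        # "Suck": the next state is fully determined by state and action.
        state[square] = "Clean"
        return state

    def step_stochastic(state, square):
        # The same action, but with probabilistic outcomes (illustrative numbers).
        if random.random() < 0.9:        # the suction mechanism may fail
            state[square] = "Clean"
        for sq in state:                 # dirt may reappear at random
            if random.random() < 0.1:
                state[sq] = "Dirty"
        return state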

Episodic vs. Sequential

In an episodic task environment, the agent’s experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes.

For example, an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions; moreover, the current decision doesn’t affect whether the next part is defective.

In sequential environments, on the other hand, the current decision could affect all future decisions.

Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences. Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.
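A small illustrative sketch of the episodic structure (the function names are hypothetical): each part is judged on its own, and nothing carries over between episodes.

    # Episodic: one percept, one action per episode, and no carry-over of state.
    def inspect_parts(parts, classify):
        decisions = []
        for part in parts:                    # each part is one independent episode
            decisions.append(classify(part))  # depends only on the current percept
        return decisions

    # In a sequential task (chess, taxi driving) this loop would instead have
    # to track state, because each action changes what future actions achieve.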

Static vs. Dynamic

If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static.
Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.

  • Taxi driving is dynamic (a small sketch follows below).
  • Crossword puzzles are static.
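One way to picture the distinction, as a purely illustrative sketch: in a dynamic environment the state is a function of time, so it keeps changing while the agent deliberates.

    import time

    START = time.time()

    def world_state():
        # Dynamic (illustrative): the state depends on elapsed time, so it
        # changes even while the agent is still deciding what to do.
        return {"other_car_position_m": 20.0 * (time.time() - START)}

    before = world_state()
    time.sleep(0.5)         # the agent spends half a second deliberating...
    after = world_state()   # ...and the other car has moved roughly 10 m
    # In a static environment (a crossword grid), before and after would match.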

Discrete vs. Continuous

The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.

For example, the chess environment has a finite number of distinct states (excluding the clock). Chess also has a discrete set of percepts and actions.

Taxi driving is a continuous-state and continuous-time problem: the speed and location of the taxi and of the other vehicles sweep through a range of continuous values and do so smoothly over time. Taxi-driving actions are also continuous (steering angles, etc.). Input from digital cameras is discrete, strictly speaking, but is typically treated as representing continuously varying intensities and locations.
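As a hedged illustration of how the two cases might be represented (all values here are invented):

    # Discrete (chess-like): finite, symbolic states, percepts, and actions.
    chess_position = "e4"                   # one of 64 squares
    chess_moves = {"e2e4", "g1f3", "d2d4"}  # a finite action set

    # Continuous (taxi-like): real-valued quantities that vary smoothly in time.
    taxi_state = {
        "speed_mps": 13.7,                  # any value in a continuous range
        "steering_angle_rad": -0.03,        # continuous action parameter
        "position": (48.137, 11.575),       # latitude/longitude
    }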

Single Agent vs. Multi-Agent

The distinction between single-agent and multi-agent environments may seem simple enough. For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.

Two kinds of multi-agent environments:

  • Cooperative – e.g., taxi driving is partially cooperative (avoiding collisions, etc.)
  • Competitive – e.g., chess playing (see the turn-taking sketch below)
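A minimal turn-taking sketch of the competitive case, assuming a hypothetical env with terminal(), percept(), and execute() methods: each agent's best move depends on what the other agent does.

    # Illustrative two-agent (competitive) loop, e.g. for chess.
    def play(env, white, black):
        agents = (white, black)
        turn = 0
        while not env.terminal():
            mover = agents[turn % 2]              # agents alternate moves
            env.execute(mover(env.percept(turn % 2)))
            turn += 1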

Conclusion

In conclusion, Artificial Intelligence is a truly fascinating field. It deals with exciting but hard problems. A goal of AI is to build intelligent agents that act so as to optimize performance.

  • An agent perceives and acts in an environment, has an architecture, and is implemented by an agent program.
  • An ideal agent always chooses the action which maximizes its expected performance, given its percept sequence so far.
  • An autonomous agent uses its own experience rather than knowledge of the environment built in by the designer.
  • An agent program maps from percept to action and updates its internal state.
  • Reflex agents respond immediately to percepts (a minimal sketch follows this list).
  • Goal-based agents act in order to achieve their goal(s).
  • Utility-based agents maximize their own utility function.
  • Representing knowledge is important for successful agent design.
  • The most challenging environments are partially observable, stochastic, sequential, dynamic, and continuous, and contain multiple intelligent agents.
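To make the simplest of these designs concrete, here is an illustrative reflex agent for the two-square vacuum world; it maps the current percept directly to an action, with no internal state:

    # A simple reflex agent (illustrative): percept in, action out, no memory.
    def reflex_vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    print(reflex_vacuum_agent(("A", "Dirty")))   # Suck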
