Artificial Intelligence / Machine Learning

Introduction to Artificial Intelligence

Some common definitions of Artificial Intelligence:

  • “Artificial Intelligence is a technology and a branch of computer science that deals with the study and development of intelligent machines and software.”
  • “A.I. is a branch of computer science that studies the computational requirements for tasks such as perception, reasoning, and learning, and develops systems to perform those tasks.”
  • “Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans.”

What’s involved in Intelligence?

  • Ability to interact with the world (speech, vision, motion, manipulation)
  • Ability to model the world and to reason about it
  • Ability to learn and to adapt
  • Ability to plan and schedule tasks
  • Ability to perform fast, accurate calculation

Goals in AI

  • To build systems that exhibit intelligent behavior
  • To understand intelligence in order to model it
  • To implement human intelligence in machines: creating systems that understand, think, learn, and behave like humans.
  • “Can a machine think and behave like humans do?”

Major Branches Of AI

Perceptive system: A system that approximates the way a human sees, hears, and feels objects.

Vision system: Captures, stores, and manipulates visual images and pictures.

Robotics: Mechanical and computer devices that perform tedious tasks with high precision.

Expert system: Stores knowledge and makes inferences.

History Of Artificial Intelligence

  • Alan Turing was an English mathematician who is often referred to as the father of modern computer science. Born in 1912, he showed great skill with mathematics, and after graduating from college he published the paper “On Computable Numbers, with an Application to the Entscheidungsproblem”, in which he proposed what would later be known as a Turing Machine: a computer capable of computing any computable function.
  • In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think. He noted that “thinking” is difficult to define and devised his famous Turing Test.
  • The term “Artificial Intelligence” was coined in 1956 by John McCarthy at the Dartmouth Conference.
  • In 1958, John McCarthy (MIT) invented the Lisp language.
  • In the 1980s, Lisp machines were developed and marketed.
  • 1990s: Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.
  • In May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov in a chess match.

What is an Agent?

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

An agent program runs in cycles of: (1) Perceive (2) Think (3) Act.
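The perceive–think–act cycle can be sketched as a minimal loop. The `Environment` and `Agent` classes below are hypothetical illustrations, not part of any standard library:

```python
class Environment:
    """A toy environment: a single integer the agent can observe and change."""
    def __init__(self):
        self.state = 0

class Agent:
    def perceive(self, env):
        return env.state                                  # (1) Perceive

    def think(self, percept):
        return "increment" if percept < 3 else "stop"     # (2) Think

    def act(self, env, action):
        if action == "increment":                         # (3) Act
            env.state += 1

def run(agent, env, steps=5):
    """Run the perceive-think-act cycle a fixed number of times."""
    for _ in range(steps):
        action = agent.think(agent.perceive(env))
        agent.act(env, action)
    return env.state

print(run(Agent(), Environment()))  # → 3 (the agent stops incrementing at 3)
```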

Human Agent:
Sensors: eyes, ears, and other organs.
Actuators: hands, legs, mouth, and other body parts.

Robotic Agent:
Sensors: Cameras and infrared range finders.
Actuators: Various motors.

Intelligent Agent

An intelligent agent must sense, must act, and must be autonomous (to some extent). It must also be rational. AI is about building rational agents. A rational agent always does the right thing.
They may be very simple or very complex:

  • a reflex machine such as a thermostat is an intelligent agent,
  • a human being,
  • a community of human beings working together towards a goal.

For example, autonomous programs used for operator assistance or data mining are also called “intelligent agents”.

The Structure of Intelligent Agents

An intelligent agent is a combination of Agent Program and Architecture.

Intelligent Agent = Agent Program + Architecture

Agent Program is a function that implements the agent mapping from percepts to actions. There exists a variety of basic agent program designs, reflecting the kind of information made explicit and used in the decision process.
The designs vary in efficiency, compactness, and flexibility. The appropriate design of the agent program depends on the nature of the environment.
Architecture is a computing device used to run the agent program.

To perform this mapping task, there are four basic types of agent programs:

  1. Simple Reflex Agents
  2. Model-Based Reflex Agents
  3. Goal-Based Agents
  4. Utility-Based Agents

We then explain in general terms how to convert all these into learning agents.

Simple Reflex Agents

These agents select actions on the basis of the current percept, ignoring the rest of the percept history. For example, the vacuum agent is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt.

The agent function is based on the condition-action rule. A condition-action rule is a rule that maps a state (i.e., a condition) to an action. If the condition is true, the action is taken; otherwise it is not.

This agent function only succeeds when the environment is fully observable.

For example, if car-in-front-is-braking then initiate braking.
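A minimal sketch of such condition-action rules, using the two-location vacuum world mentioned above (the location names "A" and "B" are assumed for illustration):

```python
def simple_reflex_vacuum(percept):
    """Condition-action rules for a two-location vacuum world.
    Only the current percept (location, status) is used; history is ignored."""
    location, status = percept
    if status == "Dirty":          # condition true -> take the action
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(simple_reflex_vacuum(("A", "Dirty")))  # → Suck
```

Because the rules look only at the current percept, the agent works only if that percept alone tells it everything it needs, i.e. the environment is fully observable.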

Fig. Schematic Diagram of Simple Reflex Agent

Model-Based or State-Based Reflex Agents

A model-based agent works by finding a rule whose condition matches the current situation. It can handle partially observable environments by maintaining a model of the world. The agent keeps track of an internal state which is adjusted by each percept and therefore depends on the percept history. The current state is stored inside the agent, which maintains some kind of structure describing the part of the world that cannot be seen. Updating this state requires information about:

  • how the world evolves independently of the agent, and
  • how the agent’s actions affect the world.
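These two kinds of update can be sketched in a hypothetical model-based vacuum agent, where the internal `model` dictionary is the structure describing the part of the world the agent cannot currently see:

```python
class ModelBasedVacuumAgent:
    """Hypothetical two-square vacuum agent: it perceives only its current
    location, but keeps an internal model of both squares."""
    def __init__(self):
        self.model = {"A": "Unknown", "B": "Unknown"}  # internal state

    def choose_action(self, percept):
        location, status = percept
        self.model[location] = status                  # the percept updates the model
        if status == "Dirty":
            self.model[location] = "Clean"             # predicted effect of our own action
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"                              # model says everything is done
        return "Right" if location == "A" else "Left"
```

After perceiving ("A", "Clean"), the agent still moves toward B, because its model records B as "Unknown" even though B is currently out of sight.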

Fig. Schematic Diagram of Model-based Reflex Agents

Goal Based Agents

A goal-based agent has some goal which forms the basis of its actions. Such agents work as follows:

  • information comes in from the sensors as percepts,
  • the percepts update the agent’s current state of the world,
  • based on the state of the world, its knowledge (memory), and its goals/intentions, the agent chooses actions and carries them out through the effectors.

Goal formulation based on the current situation is a way of solving many problems and search is a universal problem solving mechanism in AI. The sequence of steps required to solve a problem is not known a priori and must be determined by a systematic exploration of the alternatives.
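As a sketch of goal formulation plus search, the following uses breadth-first search over a small made-up map to systematically explore the alternatives until a step sequence reaching the goal is found:

```python
from collections import deque

def plan_to_goal(start, goal, neighbors):
    """Breadth-first search: systematically explore alternatives until a
    sequence of steps that reaches the goal is found (or none exists)."""
    frontier = deque([[start]])     # paths still to expand
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path             # the first path found is a shortest one
        for nxt in neighbors.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# A made-up map: places and which places are reachable from each.
city_map = {"Home": ["A", "B"], "A": ["Goal"], "B": ["A"]}
print(plan_to_goal("Home", "Goal", city_map))  # → ['Home', 'A', 'Goal']
```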

Fig. Schematic Diagram of Goal Based Agents

Utility Based Agents

Goals alone are not enough to generate high-quality behavior in most environments. For example, many action sequences will get the taxi to its destination (thereby achieving the goal) but some are quicker, safer, more reliable, or cheaper than others.

Utility-based agents choose actions based on a preference (utility) for each state. Agent “happiness” should be taken into consideration: utility describes how “happy” the agent is. Because of the uncertainty in the world, a utility-based agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number which describes the associated degree of happiness.
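A minimal sketch of expected-utility maximization; the route names and their (probability, utility) outcome pairs are made-up illustrative values:

```python
def expected_utility(action_outcomes):
    """Expected utility of an action, given its (probability, utility) outcome pairs."""
    return sum(p * u for p, u in action_outcomes)

def choose_action(actions):
    """Pick the action whose expected utility is maximal."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical taxi routes: each action leads to outcomes with some probability.
routes = {
    "highway":  [(0.8, 10), (0.2, -5)],   # usually fast, small risk of a jam
    "backroad": [(1.0, 6)],               # slower but certain
}
print(choose_action(routes))  # → highway (expected utility 7.0 vs 6.0)
```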

Fig. Schematic Diagram of Utility-based Agents

Learning agents

The ability to learn allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone would allow.
A learning agent can be divided into four conceptual components:

  1. Learning element – This is responsible for making improvements. It uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.
  2. Performance element – This is responsible for selecting external actions; it is what we previously regarded as the entire agent: it takes in percepts and decides on actions.
  3. Critic – This tells the learning element how well the agent is doing with respect to a fixed performance standard.
  4. Problem generator – This is responsible for suggesting actions that will lead to new and informative experiences.
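The interplay of these components can be sketched in a simple trial-and-error setting; the class and method names below are hypothetical, chosen only to mirror the component names above:

```python
class LearningAgent:
    """Sketch of a learning agent: the numeric feedback plays the role of
    the critic's fixed performance standard."""
    def __init__(self, actions):
        self.estimates = {a: 0.0 for a in actions}  # knowledge the performance element uses
        self.counts = {a: 0 for a in actions}

    def performance_element(self):
        # Exploit: select the best-looking external action.
        return max(self.estimates, key=self.estimates.get)

    def problem_generator(self):
        # Explore: suggest the least-tried action for a new, informative experience.
        return min(self.counts, key=self.counts.get)

    def learning_element(self, action, critic_feedback):
        # Improve: fold the critic's feedback into a running average for the action.
        self.counts[action] += 1
        n = self.counts[action]
        self.estimates[action] += (critic_feedback - self.estimates[action]) / n
```

For instance, after `learning_element("a", 1.0)` the performance element prefers "a", while the problem generator still suggests the untried "b".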

Fig. Schematic Diagram of Learning Agents

A knowledge-based Agent

  • Knowledge-based agents are best understood as agents that know their world and reason about their course of action.
  • A knowledge base is a set of representations of facts of the world.
  • Each individual representation is called a sentence.
  • The sentences are expressed in a knowledge representation language.

The agent operates as follows:
1. It TELLs the knowledge base what it perceives.
2. It ASKs the knowledge base what action it should perform.
3. It performs the chosen action.
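The TELL/ASK cycle can be sketched with a toy knowledge base in which sentences are stored facts and ASK is a simple lookup; the percepts and time-stamped tuples are illustrative assumptions:

```python
class KnowledgeBase:
    """A toy KB: sentences are stored facts; ASK is a simple membership test."""
    def __init__(self):
        self.sentences = set()

    def tell(self, sentence):
        self.sentences.add(sentence)

    def ask(self, query):
        return query in self.sentences

def kb_agent_step(kb, percept, t):
    kb.tell(("percept", percept, t))            # 1. TELL the KB what we perceive at time t
    if kb.ask(("percept", "obstacle", t)):      # 2. ASK what action to perform now
        action = "turn"
    else:
        action = "forward"
    kb.tell(("action", action, t))              # record the chosen action
    return action                               # 3. perform it

kb = KnowledgeBase()
print(kb_agent_step(kb, "clear", 0))     # → forward
print(kb_agent_step(kb, "obstacle", 1))  # → turn
```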

Architecture of a knowledge-based agent

Knowledge Level.

The most abstract level: describe agent by saying what it knows.
Example: A taxi agent might know that the Golden Gate Bridge connects San Francisco with Marin County.

Logical Level.

The level at which the knowledge is encoded into sentences.
Example: Links (GoldenGateBridge, SanFrancisco, MarinCounty).

Implementation Level.

The physical representation of the sentences in the logical level.
Example: '(links goldengatebridge sanfrancisco marincounty)

Conclusion

In conclusion, Artificial Intelligence is a truly fascinating field. It deals with exciting but hard problems. A goal of AI is to build intelligent agents that act so as to optimize performance.

  • An agent perceives and acts in an environment, has an architecture, and is implemented by an agent program.
  • An ideal agent always chooses the action which maximizes its expected performance, given its percept sequence so far.
  • An autonomous agent uses its own experience rather than built-in knowledge of the environment by the designer.
  • An agent program maps from percept to action and updates its internal state.
  • Reflex agents respond immediately to percepts.
  • Goal-based agents act in order to achieve their goal(s).
  • Utility-based agents maximize their own utility function.
  • Representing knowledge is important for successful agent design.
  • The most challenging environments are partially observable, stochastic, sequential, dynamic, and continuous, and contain multiple intelligent agents.
