Intelligent Agents
This notebook serves as supporting material for topics covered in Chapter 2 - Intelligent Agents of the book Artificial Intelligence: A Modern Approach. It uses implementations from the agents.py module. Let's start by importing everything from the agents module.
CONTENTS
Overview
Agent
Environment
Simple Agent and Environment
Agents in a 2-D Environment
Wumpus Environment
OVERVIEW
An agent, as defined in Section 2.1, is anything that can perceive its environment through sensors and act upon that environment through actuators, based on its agent program. This can be a dog, a robot, or even you. As long as you can perceive the environment and act on it, you are an agent. This notebook will explain how to implement a simple agent, create an environment, and implement a program that helps the agent act on the environment based on its percepts.
AGENT
Let us now see how we define an agent. Run the next cell to see how Agent is defined in the agents module.

The Agent class has two methods:
- __init__(self, program=None): the constructor defines various attributes of the Agent. These include:
  - alive: keeps track of whether the agent is alive or not
  - bump: tracks whether the agent has collided with an edge of the environment (for example, a wall in a park)
  - holding: a list containing the Things the agent is holding
  - performance: evaluates the performance metrics of the agent
  - program: the agent program, which maps the agent's percepts to actions in the environment. If no implementation is provided, it defaults to asking the user to provide actions for each percept.
- can_grab(self, thing): used when an environment contains things that an agent can grab and carry. By default, an agent can carry nothing.
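To make this concrete, here is a minimal sketch of constructing an Agent with a hand-written program, assuming agents.py has been imported as described above (the do_nothing program and the 'NoOp' action are purely illustrative, not part of the book code):

```python
from agents import Agent

def do_nothing(percept):
    # A program is just a callable that maps a percept to an action.
    # This illustrative one ignores its percept and always returns 'NoOp'.
    return 'NoOp'

agent = Agent(program=do_nothing)
print(agent.alive)            # True - a freshly created agent is alive
print(agent.holding)          # [] - it starts off holding nothing
print(agent.can_grab(None))   # False - by default an agent can carry nothing
```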
ENVIRONMENT
Now, let us see how environments are defined. Running the next cell will display an implementation of the abstract Environment class.

The Environment class has a lot of methods! But most of them are incredibly simple, so let's look at the ones we'll be using in this notebook:
- thing_classes(self): returns a static array of Thing sub-classes that determine what things are and are not allowed in the environment
- add_thing(self, thing, location=None): adds a thing to the environment at the given location
- run(self, steps): runs the environment, with the agent in it, for the given number of steps
- is_done(self): returns True when the objective of the agent and the environment has been completed
The next two methods must be implemented by each subclass of Environment for the agent to receive percepts and execute actions:

- percept(self, agent): given an agent, this method returns a list of percepts that the agent sees at the current time
- execute_action(self, agent, action): the environment reacts to an action performed by a given agent. The changes may result in the agent experiencing new percepts or in other elements reacting to the agent's input.
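As a shape-of-the-code illustration, here is a bare-bones sketch of a subclass that overrides just these two methods (the DummyRoom class and the 'sing' action are hypothetical names used only for this example):

```python
from agents import Environment

class DummyRoom(Environment):
    """A hypothetical environment that only illustrates the two required overrides."""

    def percept(self, agent):
        # The agent perceives whatever things share its current location.
        return self.list_things_at(agent.location)

    def execute_action(self, agent, action):
        # React to the agent's action; this toy room just keeps a score.
        agent.performance += 1 if action == 'sing' else -1
```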
SIMPLE AGENT AND ENVIRONMENT
Let's begin by using the Agent class to create our first agent - a blind dog.
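A minimal sketch of that agent, assuming the Agent class from agents.py (the printed messages are illustrative):

```python
from agents import Agent

class BlindDog(Agent):
    def eat(self, thing):
        print("Dog: Ate food at {}.".format(self.location))

    def drink(self, thing):
        print("Dog: Drank water at {}.".format(self.location))
```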
What we have just done is create a dog who can only feel what's in his location (since he's blind), and can eat or drink. Let's see if he's alive...
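Checking on him could be as simple as:

```python
dog = BlindDog()    # no program given yet, so the constructor falls back to a default program
print(dog.alive)    # expected to print True
```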
This is our dog. How cool is he? Well, he's hungry and needs to go search for food. For him to do this, we need to give him a program. But before that, let's create a park for our dog to play in.
ENVIRONMENT - Park
A park is an example of an environment because our dog can perceive and act upon it. The Environment class is an abstract class, so we will have to create our own subclass from it before we can use it.
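A sketch of such an environment, assuming the Thing and Environment classes from agents.py (the Food and Water things and the 'move down', 'eat', and 'drink' action strings are the choices this example uses; the dog's movedown, eat, and drink methods are added in the next sketch):

```python
from agents import Thing, Environment

class Food(Thing):
    pass

class Water(Thing):
    pass

class Park(Environment):
    def percept(self, agent):
        """Percepts are simply the things at the agent's current location."""
        return self.list_things_at(agent.location)

    def execute_action(self, agent, action):
        """Change the state of the park based on what the dog decided to do."""
        if action == "move down":
            print('{} decided to {} at location: {}'.format(str(agent)[1:-1], action, agent.location))
            agent.movedown()
        elif action == "eat":
            items = self.list_things_at(agent.location, tclass=Food)
            if len(items) != 0 and agent.eat(items[0]):
                print('{} ate {} at location: {}'.format(str(agent)[1:-1], str(items[0])[1:-1], agent.location))
                self.delete_thing(items[0])        # the food is gone once eaten
        elif action == "drink":
            items = self.list_things_at(agent.location, tclass=Water)
            if len(items) != 0 and agent.drink(items[0]):
                print('{} drank {} at location: {}'.format(str(agent)[1:-1], str(items[0])[1:-1], agent.location))
                self.delete_thing(items[0])        # the water is gone once drunk

    def is_done(self):
        """Stop when no food or water is left, or when no agent is alive."""
        no_edibles = not any(isinstance(thing, (Food, Water)) for thing in self.things)
        dead_agents = not any(agent.is_alive() for agent in self.agents)
        return dead_agents or no_edibles
```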
PROGRAM - BlindDog
Now that we have a Park class, we re-implement our BlindDog so that it can move down, and eat food or drink water only if it is present.
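A sketch of the upgraded dog (starting at location 1, matching the run described below):

```python
class BlindDog(Agent):
    location = 1           # the park is 1-dimensional, so a single number is enough

    def movedown(self):
        self.location += 1

    def eat(self, thing):
        """Eat only if the thing really is food; return True on success."""
        return isinstance(thing, Food)

    def drink(self, thing):
        """Drink only if the thing really is water; return True on success."""
        return isinstance(thing, Water)
```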
Now it's time to implement a program module for our dog. The program controls how the dog acts upon its environment. Our program will be very simple, and is shown in the table below.
Percept: | Feel Food | Feel Water | Feel Nothing |
Action: | eat | drink | move down |
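A sketch of that program, written as a plain function that the BlindDog constructor accepts (the percepts argument is the list of things returned by Park.percept above):

```python
def program(percepts):
    """Return an action based on what the dog can feel at its location."""
    for p in percepts:
        if isinstance(p, Food):
            return 'eat'
        elif isinstance(p, Water):
            return 'drink'
    return 'move down'     # nothing interesting here, so keep walking
```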
Let's now run our simulation by creating a park with some food, water, and our dog.
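The setup behind the run described next might look like this (the food at location 5 matches the behaviour reported below; the water's location is an assumption):

```python
park = Park()
dog = BlindDog(program)

park.add_thing(dog, 1)          # the dog starts at location 1
park.add_thing(Food(), 5)       # food at location 5
park.add_thing(Water(), 7)      # water a bit further down (location assumed)

park.run(5)
```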
Notice that the dog moved from location 1 to 4, over 4 steps, and ate food at location 5 in the 5th step.
Let's continue this simulation for 5 more steps.
Perfect! Note how the simulation stopped after the dog drank the water - exhausting all the food and water ends our simulation, as we had defined before. Let's add some more water and see if our dog can reach it.
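Topping up the park and continuing might look like this (the new location is an arbitrary choice):

```python
park.add_thing(Water(), 15)     # hypothetical spot further down the park
park.run(10)                    # give the dog enough steps to reach it
```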
Above, we learnt to implement an agent, its program, and an environment on which it acts. However, this was a very simple case. Let's try to add complexity to it by creating a 2-Dimensional environment!
AGENTS IN A 2D ENVIRONMENT
So that we don't have to read through so many logs of what our dog did, we'll add a bit of graphics while making our Park 2D. To do so, we will need to make it a subclass of GraphicEnvironment instead of Environment. Subclassing GraphicEnvironment adds these extra properties to our park:
- Our park is indexed in the 4th quadrant of the X-Y plane.
- Every time we create a park by subclassing GraphicEnvironment, we need to define the colors of all the things we plan to put into the park. The colors are defined in the typical 8-bit RGB format, common across the web.
- Fences are added automatically to all parks so that our dog does not go outside the park's boundary - it just isn't safe for blind dogs to be outside the park by themselves! GraphicEnvironment provides an is_inbounds function to check whether our dog tries to leave the park.
First, let us try to upgrade our 1-dimensional Park environment by just replacing its superclass with GraphicEnvironment.
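A sketch of that upgrade, assuming the GraphicEnvironment class from agents.py (the name GraphicPark is an assumption; note that on a grid, locations become [x, y] pairs, so the dog's location and movedown are adjusted to match):

```python
from agents import GraphicEnvironment

class GraphicPark(GraphicEnvironment):
    """The same park as before; only the superclass changes."""
    # Re-use the 1-D Park's behaviour instead of repeating the method bodies.
    percept = Park.percept
    execute_action = Park.execute_action
    is_done = Park.is_done


class BlindDog(Agent):
    location = [0, 1]           # grid locations are [x, y] pairs

    def movedown(self):
        self.location[1] += 1   # moving "down" increases the y coordinate

    def eat(self, thing):
        return isinstance(thing, Food)

    def drink(self, thing):
        return isinstance(thing, Water)
```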
Now let's test this new park with our same dog, food and water. We color our dog with a nice red and mark food and water with orange and blue respectively.
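A sketch of that test (the grid size, item locations, and exact RGB triples are assumptions; GraphicEnvironment may need the ipythonblocks package to actually draw the grid):

```python
park = GraphicPark(5, 20, color={
    'BlindDog': (200, 0, 0),    # a nice red for the dog
    'Food': (230, 115, 40),     # orange for food
    'Water': (0, 0, 200),       # blue for water
})

dog = BlindDog(program)
park.add_thing(dog, [0, 1])
park.add_thing(Food(), [0, 5])
park.add_thing(Water(), [0, 7])

park.run(5)
```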
Adding some graphics was a good idea! We immediately see that the code works, but our blind dog doesn't make any use of the 2 dimensional space available to him. Let's make our dog more energetic so that he turns and moves forward, instead of always moving down. In doing so, we'll also need to make some changes to our environment to be able to handle this extra motion.
PROGRAM - EnergeticBlindDog
Let's make our dog turn or move forwards at random - except when he's at the edge of our park - in which case we make him change his direction explicitly by turning to avoid trying to leave the park. However, our dog is blind so he wouldn't know which way to turn - he'd just have to try arbitrarily.
Percept: | Feel Food | Feel Water | Feel Nothing |
Action: | eat | drink | turn left / turn right / move forward (chosen at random) |
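A sketch of this dog and its program (the heading is tracked here with a plain string rather than the agents module's Direction helper; the Bump percept defined below is what the park in the next section hands the dog when he is up against the fence):

```python
import random
from agents import Agent, Thing

class Bump(Thing):
    """A percept signalling that the dog is facing the park boundary (local to this sketch)."""
    pass


class EnergeticBlindDog(Agent):
    location = [0, 1]
    direction = 'down'          # one of 'up', 'down', 'left', 'right'

    RIGHT_TURN = {'up': 'right', 'right': 'down', 'down': 'left', 'left': 'up'}
    LEFT_TURN = {'up': 'left', 'left': 'down', 'down': 'right', 'right': 'up'}

    def turn(self, side):
        """side is 'left' or 'right'; update the heading accordingly."""
        table = self.RIGHT_TURN if side == 'right' else self.LEFT_TURN
        self.direction = table[self.direction]

    def moveforward(self, success=True):
        """Step one cell in the current heading, but only if the park allowed it."""
        if not success:
            return
        if self.direction == 'right':
            self.location[0] += 1
        elif self.direction == 'left':
            self.location[0] -= 1
        elif self.direction == 'down':
            self.location[1] += 1
        elif self.direction == 'up':
            self.location[1] -= 1

    def eat(self, thing):
        return isinstance(thing, Food)

    def drink(self, thing):
        return isinstance(thing, Water)


def energetic_program(percepts):
    """Eat or drink when possible; turn when bumping the fence; otherwise act at random."""
    for p in percepts:
        if isinstance(p, Food):
            return 'eat'
        elif isinstance(p, Water):
            return 'drink'
        elif isinstance(p, Bump):           # at the edge: must turn, direction chosen blindly
            return random.choice(['turn left', 'turn right'])
    return random.choice(['turn left', 'turn right', 'move forward'])
```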
ENVIRONMENT - Park2D
We also need to modify our park accordingly, in order to be able to handle all the new actions our dog wishes to execute. Additionally, we'll need to prevent our dog from moving to locations beyond our park boundary - it just isn't safe for blind dogs to be outside the park by themselves.
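A sketch of such a park, building on the pieces above (the forward_location helper is an addition of this sketch; is_inbounds comes from GraphicEnvironment, as mentioned earlier):

```python
class Park2D(GraphicEnvironment):
    STEP = {'up': (0, -1), 'down': (0, 1), 'left': (-1, 0), 'right': (1, 0)}

    def forward_location(self, agent):
        """The cell the dog would reach by moving forward (a helper added for this sketch)."""
        dx, dy = self.STEP[agent.direction]
        return [agent.location[0] + dx, agent.location[1] + dy]

    def percept(self, agent):
        """Everything at the dog's location, plus a Bump if he is facing the fence."""
        things = self.list_things_at(agent.location)
        if not self.is_inbounds(self.forward_location(agent)):
            things.append(Bump())
        return things

    def execute_action(self, agent, action):
        if action in ('turn left', 'turn right'):
            print('{} decided to {} at location: {}'.format(str(agent)[1:-1], action, agent.location))
            agent.turn(action.split()[1])           # 'left' or 'right'
        elif action == 'move forward':
            print('{} decided to move {} at location: {}'.format(str(agent)[1:-1], agent.direction, agent.location))
            # Only let the dog move if the destination is inside the park.
            agent.moveforward(self.is_inbounds(self.forward_location(agent)))
        elif action == 'eat':
            items = self.list_things_at(agent.location, tclass=Food)
            if len(items) != 0 and agent.eat(items[0]):
                print('{} ate {} at location: {}'.format(str(agent)[1:-1], str(items[0])[1:-1], agent.location))
                self.delete_thing(items[0])
        elif action == 'drink':
            items = self.list_things_at(agent.location, tclass=Water)
            if len(items) != 0 and agent.drink(items[0]):
                print('{} drank {} at location: {}'.format(str(agent)[1:-1], str(items[0])[1:-1], agent.location))
                self.delete_thing(items[0])

    def is_done(self):
        """As before: stop when the dog is dead or nothing is left to eat or drink."""
        no_edibles = not any(isinstance(thing, (Food, Water)) for thing in self.things)
        dead_agents = not any(agent.is_alive() for agent in self.agents)
        return dead_agents or no_edibles
```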
Now that our park is ready for the 2D motion of our energetic dog, let's test it!
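A sketch of that final test (the grid size, colors, and item locations are again assumptions):

```python
park = Park2D(5, 5, color={
    'EnergeticBlindDog': (200, 0, 0),
    'Food': (230, 115, 40),
    'Water': (0, 0, 200),
})

dog = EnergeticBlindDog(energetic_program)
park.add_thing(dog, [0, 0])
park.add_thing(Food(), [1, 2])
park.add_thing(Water(), [0, 1])

park.run(20)    # enough steps for a randomly wandering dog to (usually) find both items
```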