Previous contests have historically had your bots compete against other student bots for extra credit. In contrast, the final contest is more collaborative and gives you the chance to explore AI algorithms in a more open-ended way. The contest is called PacPack! (You’ll get the pun soon enough…) In this assignment, we’ll be moving away from the competitive agents we used for topics like minimax game trees. We want to flip the script a little, and challenge you with a much more realistic, and interesting, task: you will be building bots to cooperate with each other!
This assignment is a more open-ended project, and will draw on some of the techniques you have learned in the first half of the class while simultaneously requiring you to refine those techniques to develop a bot that can work together with other bots to solve our task. As always, our setting is Pacman!
PacPack involves a multi-player variant of Pacman, where each agent controls a Pacman in coordinated team-based strategies. The PacPack code is available as a zip archive. You may choose to work alone or with one partner. There is room to bring your own unique ideas, and there is no single set solution. We are very much looking forward to seeing what you come up with!
Extra credit points are earned on top of your overall score on projects (e.g., if you earn 1 point of EC through the Final Contest, you get an extra 1% × 25 = 0.25 points on your overall grade tally for CS188). Recall that the grading scale is available on the policy page.
Your agent will be tested against staff agents on several “held-out” maps.
Exact thresholds will be determined after some more calibration (but we wanted you to be able to start in the meantime). Students who perform well on the final leaderboard, ranked by the “final score” metric, will receive the following:
The goal of this contest is for your agent to work together with a staff-built PacMan agent to eat all but two of the pellets as quickly as possible while avoiding a single ghost. Unlike previous contests and projects, a staff bot teammate will be able to communicate with your agent by broadcasting a plan of its actions at each turn. Your agent will be able to use this plan to determine its own actions in the subsequent steps.
| File | Description |
| --- | --- |
| `capture.py` | The main file that runs games locally. This file also describes the GameState type and the game rules. |
| `captureAgents.py` | Specification and helper methods for capture agents. |
| `myAgent.py` | This is where you will define your own agent for submission. (This is the only file that you will submit.) |
| `game.py` | The logic behind how the Pacman world works. This file describes several supporting types like AgentState, Agent, Direction, and Grid. This is probably the only supporting file that you might want to read. |
| `util.py` | Useful data structures for implementing search algorithms. |
| `distanceCalculator.py` | Computes shortest paths between all maze positions. |
| `graphicsDisplay.py` | Graphics for Pacman. |
| `graphicsUtils.py` | Support for Pacman graphics. |
| `textDisplay.py` | ASCII graphics for Pacman. |
| `keyboardAgents.py` | Keyboard interfaces to control Pacman. |
| `layout.py` | Code for reading layout files and storing their contents. |
Although the spirit of PacPack is cooperative, we expect you to share code only with your partner and submit your own code to the best of your ability. Please don’t let us down.
Getting help: You are not alone! If you find yourself stuck on something, contact the course staff for help. Office hours, section, and the discussion forum are there for your support; please use them. If you can’t make our office hours, let us know and we will schedule more. We want these contests to be rewarding and instructional, not frustrating and demoralizing. But, we don’t know when or how to help unless you ask.
The Pacman agents’ goal is to eat the food in as few timesteps as possible; a ghost agent will try to stop the Pacman agents from doing so.
There are two numbers you want to pay attention to: the “score” displayed in the game GUI, which is just the number of pellets eaten, and the total number of timesteps taken to eat all but two of the pellets. The latter is what will be used for grading, and it is printed to the console at the end of the game. Any game that does not finish in time (i.e., the Pacman team does not eat the pellets before the step limit) will be assigned a value of 1200 timesteps. This score is like golf: lower is better.
We will run your submissions on an Amazon EC2 Large Instance. Each agent has 1 second to return each action. Each move that does not return within one second will incur a warning. After three warnings, or any single move taking more than 3 seconds, the game is forfeit. There will be an initial start-up allowance of 15 seconds (use the `registerInitialState` function). If your agent times out or otherwise throws an exception, an error message will be present in the terminal output. Each game is limited to a maximum time of 1 minute.
Your agent will receive a broadcast from its teammate every turn, and will send one in return, containing the actions it expects to take in future turns. This enables each agent to update its plan in response to what the other is doing, and to cooperate more effectively in collecting the food and avoiding the ghost.
Since your agent will not only work with cooperative staff bots, but also everyone else’s cooperative agents, we need to establish some conventions for the communication channel.
- In the `chooseAction` method of your agent in `myAgent.py`, you have access to the broadcasted actions of your teammate in the `self.receivedBroadcast` attribute, which is updated by your staff bot teammate at each step.
- A broadcast must contain only legal actions (`"North"`, `"West"`, `"South"`, `"East"`, `"Stop"`) and nothing else.
The above points should inform and guide how you decide to deal with incoming broadcasts from your teammate and choosing your own. In general, you want to try and make design choices that are as robust as possible to the choice of your actual teammate.
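One simple robustness measure is to validate any incoming broadcast before trusting it. Here is a minimal sketch; the helper name `sanitize_broadcast` is our own invention, not part of the starter code:

```python
# Hypothetical helper (not in the starter code): drop anything from a
# received broadcast that is not one of the five legal action strings.
LEGAL_ACTIONS = {"North", "West", "South", "East", "Stop"}

def sanitize_broadcast(plan):
    """Return only the legal action strings from a broadcast plan.

    `plan` is whatever arrived in self.receivedBroadcast; it may be
    None before the teammate has broadcast anything.
    """
    if plan is None:
        return []
    return [action for action in plan if action in LEGAL_ACTIONS]
```

Filtering rather than crashing keeps your agent running even if a teammate's broadcast is momentarily malformed (for example, right after a Pacman dies).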
IMPORTANT: you will be setting your broadcast in `self.toBroadcast` within the method `chooseAction`, while choosing the action for your turn (in addition to computing your current action, you can compute your expected future actions). Your teammate will receive your broadcast on their own turn. Therefore, your broadcast should not include the action you will return from `chooseAction`; it will already have been performed by the time your teammate sees the broadcast.
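To keep this convention straight, it can help to compute a full plan first and then split off the action you are about to return. A sketch, where `split_plan` is a hypothetical helper name of our own:

```python
def split_plan(planned_actions):
    """Split a full planned action sequence into
    (action_to_return, actions_to_broadcast).

    The first action is returned from chooseAction and therefore must
    NOT be included in the broadcast; the rest goes in self.toBroadcast.
    """
    if not planned_actions:
        # No plan yet: stand still and broadcast an empty plan.
        return "Stop", []
    return planned_actions[0], list(planned_actions[1:])

# Inside chooseAction you might then write (sketch):
#     action, self.toBroadcast = split_plan(plan)
#     return action
```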
Unlike in the projects, your agent now has to work with a partner to complete the task. The behavior and predictability of the other agent varies across different phases. Finally, the time limit on computation introduces new challenges.
You should include your agent in the `myAgent.py` file. Your agent must be completely contained in this one file.
`capture.py` should look familiar, and contains methods like `getFood`, which returns a grid of all food on the board. Also, note that you can list a team’s indices with `getPacmanTeamIndices`, or test membership with `isOnPacmanTeam`. This is relevant for determining which agent (your agent, your teammate, or a ghost) is acting on each turn.
To facilitate agent development, we provide code in `distanceCalculator.py` to supply shortest-path maze distances.
To get started designing your own agent, we recommend subclassing the `CaptureAgent` class. We have already done so in the starter code. This provides access to several convenient methods. Some useful methods are:
def chooseAction(self, gameState):
Override this method to make a good agent. It should return a legal action within the time limit (otherwise a random legal action will be chosen for you).
def getFood(self, gameState):
Returns a matrix where m[x][y] is True if there is food you can eat in that square.
def getOpponents(self, gameState):
Returns the agent indices of your opponents (the ghosts), as a list of agent index numbers.
def getTeam(self, gameState):
Returns the agent indices of your team, as a list of agent index numbers.
def getScore(self, gameState):
Returns the score of the agent’s team for a specific state.
def getMazeDistance(self, pos1, pos2):
Returns the distance between two points, calculated using the provided distancer object. If `distancer.getMazeDistances()` has been called, then maze distances are available; otherwise, this just returns the Manhattan distance.
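As a small illustration of how these helpers fit together, here is a greedy target-selection step one might build inside `chooseAction`. The function name is our own; `food_positions` would come from the grid returned by `getFood`, and `distance` would typically be `self.getMazeDistance`:

```python
def closest_food(my_pos, food_positions, distance):
    """Return the food position minimizing distance(my_pos, pos),
    or None if no food remains.

    `distance` is any callable taking two (x, y) positions, e.g. a
    maze-distance function or plain Manhattan distance.
    """
    if not food_positions:
        return None
    return min(food_positions, key=lambda pos: distance(my_pos, pos))
```

A purely greedy chase like this ignores the ghost and your teammate's plan, so treat it as a baseline to improve on, not a final strategy.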
You are free to design any agent you want. However, you will need to respect the provided APIs. Agents which compute during another agent’s turn will be disqualified. In particular, any form of multi-threading is disallowed, because we have found it very hard to ensure that no computation takes place on the opponent’s turn.
You can start a game with:

python capture.py

This will run a match using your agent in `myAgent.py` (which you will replace).
A wealth of options are available to you:
python capture.py --help
The Pacman team is created from `team.py` and the ghost team is created from `oneghostTeam.py`. To control one of the agents with the keyboard, pass the appropriate option:
python capture.py --keys0
The arrow keys control your character. (This might not work on Windows machines; contact us if this is the case.)
There are 2 team files, `team.py` and `oneghostTeam.py`. Each Pacman team has two members, specified as default values in the signature of the function `createTeam`. If you want to create other agent classes for testing purposes, just change the signatures in the team files. Something to note is that your agent will not necessarily have index 0. Try to keep your code flexible.
Note: you might have to first import your agent class at the top of the team file you are using.
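Since your index is not fixed, a small helper like the following can keep index handling flexible. The name is hypothetical (not in the starter code); `team_indices` would come from `getTeam` or `getPacmanTeamIndices`:

```python
def teammate_index(my_index, team_indices):
    """Return the index of the other Pacman on your team, or None if
    there is no other member. Avoids hard-coding index 0 anywhere."""
    others = [i for i in team_indices if i != my_index]
    return others[0] if others else None
```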
The ghost finally found out that the Pacman team has been secretly stealing food 😠 so it decided to hunt them down. The ghost is implemented in `oneghostTeam.py`.
The ghost cannot see very well, and usually gets confused by other sounds, leading him to go in the wrong direction a lot of the time (he has partially random actions). Moreover, he is scared of going too close to the Pacman team’s home base, so he will try to avoid it.
By default, all games are run on the `defaultcapture` layout. To test your agent on other layouts, use the `-l` option. In particular, you can generate random layouts by specifying `RANDOM[seed]`. For example, `-l RANDOM13` will use a map randomly generated with seed 13.
You can record local games using the `--record` option, which will write the game history to a file named by the time the game was played. You can replay these histories using the `--replay` option and specifying the file to replay.
All online matches are automatically recorded, and the most recent ones can be viewed on the PacPack website. You will also be able to download the history associated with each replay.
For local testing, we provide an `autograder.py` script. This script runs your agent with the `simpleStaffBot` for 10 matches to make sure that it doesn’t crash and can deal with the structure of the game. Note: `simpleStaffBot` is bad; it is mainly a sanity check that you are using the API correctly. The script then runs 10 matches with a team made up of two of your bots, to see if your agent can cooperate with itself and perform well! Use this script to get a general idea of how well your bot will perform in the online arena before submitting.

`simpleStaffBot` will print a warning if illegal actions are broadcast to it. This is OK in certain situations (e.g., when a Pacman dies), but if it happens regularly, it is an indication that your broadcast is incorrect.
To enter the online challenge, your agent must be defined as `MyAgent` in `myAgent.py`. Due to the way the matches are run, your code must not rely on any additional files that we have not provided. You may not modify the code we provide, except for testing purposes.
The submission interface is slightly different from previous contests or projects. pacpack.org hosts a leaderboard interface that lets you run matches against staff bots or other teams. To submit, upload your `myAgent.py` file to the server.
Make sure that the last bot you submit is the one you intend to be your final submission for the contest. This is the one we will use to perform the final evaluation on the test maps.
If you choose to work with a partner, whoever submits has to appropriately mark their partner at submission time.
A huge thanks to Austen Zhu, Tony Zhao, Micah Carroll, Roshan Rao, and Mesut Yang for taking charge of the initial development of this project. Thanks to the rest of the Summer 2018 CS188 staff for helping to design, tweak, debug and deploy the first iteration, and to Noah Golmant for bringing it in as a contest in the Fall. Further thanks to Barak Michener and Ed Karuna for providing improved graphics and debugging help.
Have fun! This project is very open-ended, so make sure to just spend time exploring the problem and its possible solutions. If you find any infrastructural bugs, please report them to the staff. This will ensure they are fixed promptly so we can continue to improve this contest in the future.