Dr. Mark Humphrys

School of Computing. Dublin City University.



CA425 Practical

Express the 3 x 3 noughts-and-crosses (or "Tic-tac-toe" or "X's and O's") game as a Reinforcement Learning problem.

The program will play against another copy of itself. That is, at first you will be playing against a random player. As you learn, your opponent gets better.


  1. Define the space of states x.
    e.g. Let x = (x1,..,x9) represent the 9 squares.
    Each xi takes value: 0 (blank), 1 (me) or 2 (opponent).
    Start at x = (0,0,0,0,0,0,0,0,0).
    I make the first move (one empty square becomes "1"). Then the opponent makes a move into an empty square (it becomes "2"). Then I observe the new state, can take a new action, and so on.

    Note that from my viewpoint the world is probabilistic (the same action a in the same state x can lead to different next states y, depending on the opponent's move).

    Observe x
    Take a
    (Opponent makes move)
    Observe y
    

  2. Define the space of actions a.
    e.g. Let a = (a1).
    a1 takes value 1 to 9 (meaning put "1" into one of the 9 squares).

  3. How to deal with illegal actions? (e.g. Put "1" into non-empty square.) In different states x, different actions a are illegal. Rather complex to define all these in advance. Various possibilities:
    1. Agent learns that action leaves you in same state x, ready to take action again.
    2. Loop:
       do
       {
         a = getSuggestedAction(x);    // may suggest an illegal action
       }
       while ( ! validAction(x,a) );   // keep asking until the action is legal
      
    3. Punish the action (e.g. lose game) to make it learn not to take it.

  4. Define when a win happens.
    Rather than define in advance all possible win states, it might be easier to write a function "isWin(y)" to check if any given state y is a win.

  5. Set it up so you can do a run (until the end of the game, a maximum of 9 steps) and are ready to plug in a learning algorithm. (A minimal sketch of such a setup follows this list.)

  6. Play it first with an algorithm that suggests random actions.
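
A minimal sketch of the above setup, in JavaScript (the practical allows any language). The helper names isWin, validAction and getSuggestedAction echo the ones used above; everything else (playOneGame, isDraw, the array layout) is one illustrative choice under these assumptions, not a prescribed design.

  // The 8 lines (rows, columns, diagonals) that make a win:
  const lines = [ [0,1,2], [3,4,5], [6,7,8],     // rows
                  [0,3,6], [1,4,7], [2,5,8],     // columns
                  [0,4,8], [2,4,6] ];            // diagonals

  // Does state y contain 3-in-a-row for player p (1 = me, 2 = opponent)?
  function isWin ( y, p )
  {
    return lines.some ( line => line.every ( i => y[i] == p ) );
  }

  // Is action a (1 to 9) legal in state x? (Only empty squares may be taken.)
  function validAction ( x, a )
  {
    return x[a-1] == 0;
  }

  // Is the board full (a draw if nobody has won)?
  function isDraw ( y )
  {
    return y.every ( v => v != 0 );
  }

  // Suggest a random action; it may be illegal, the caller re-asks until legal.
  function getSuggestedAction ( x )
  {
    return 1 + Math.floor ( Math.random() * 9 );
  }

  // Do one run: play a full game, both sides choosing random legal moves.
  // Returns 1 (I won), 2 (opponent won) or 0 (draw).
  function playOneGame ()
  {
    let x = [0,0,0,0,0,0,0,0,0];        // start state: all blank
    let player = 1;                     // I make the first move

    for ( let step = 0; step < 9; step++ )
    {
      let a;
      do { a = getSuggestedAction(x); } while ( ! validAction(x,a) );
      x[a-1] = player;

      if ( isWin(x,player) ) return player;
      if ( isDraw(x) )       return 0;
      player = ( player == 1 ) ? 2 : 1;    // other copy's turn
    }
    return 0;
  }

The do/while line is the single point where a learning algorithm can later be plugged in: replace the random suggestion with one based on Q-values, while keeping the rest of the run unchanged.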

The above is for 40 percent. For more:

  1. Next, suggest actions based on Q-values.

  2. Define an array to hold Q(x,a) values.
    See Coding the state-space as a lookup-table.
    See Sample code for lookup-table Q-learning.
    3^9 = 19,683 possible states.
    We need 19,683 x 9 boxes for the Q(x,a) table = 177,147 boxes.
    (A sketch of such a table follows this list.)
    Run testEnumeration.

  3. Define reward function.
    Consider win, loss, draw, game not over.
    Interim rewards? (e.g. For 2 in a row.)

  4. Start your Q-learning by modifying the Q-values as follows. When you get a reward or punishment:
    Q(x,a) := r

  5. Learn by playing against itself.
  6. Demo it playing against a random opponent. Show how well it plays.
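
A sketch of the lookup-table part, continuing the JavaScript above (isWin is reused from the earlier sketch). The names stateIndex, Q, reward and updateQ are illustrative, and the reward values shown are one possible choice, not the required ones.

  const NOSTATES  = 19683;              // 3^9 possible boards
  const NOACTIONS = 9;

  // Encode the board x (nine 0/1/2 values) as a single number 0 .. 19,682,
  // treating the squares as the digits of a base-3 number.
  function stateIndex ( x )
  {
    let s = 0;
    for ( let i = 0; i < 9; i++ )  s = ( s * 3 ) + x[i];
    return s;
  }

  // The Q(x,a) table: 19,683 x 9 = 177,147 boxes, all starting at 0.
  let Q = new Array(NOSTATES);
  for ( let s = 0; s < NOSTATES; s++ )  Q[s] = new Array(NOACTIONS).fill(0);

  // One possible reward function: feedback only at the end of the game.
  function reward ( y )
  {
    if ( isWin(y,1) )  return  1;       // win
    if ( isWin(y,2) )  return -1;       // loss
    return 0;                           // draw, or game not over
  }

  // The simplest update: when feedback arrives for the pair (x,a),
  // just overwrite that box with the reward:   Q(x,a) := r
  function updateQ ( x, a, r )
  {
    Q [ stateIndex(x) ] [ a-1 ] = r;
  }

Item 1 can then be done by scanning the row Q[stateIndex(x)] and suggesting the legal action with the largest value (breaking ties at random), in place of the purely random suggestion used so far.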

The above is for 80 percent. For more:

  1. Modify it to look into the future, so we can move towards a goal state from far away:
    Q(x,a) := r + γ c
    where c = best Q-value in next state

  2. Modify it to deal with the fact that sometimes you get a reward / punishment and sometimes you don't (because the opponent is unpredictable). Build up a running average of all feedback (see the sketch after this list):
    Q(x,a) := (1-α) Q(x,a) + α ( r + γ c )
    where α goes 1, 1/2, 1/3, ...
    i.e. α is 1/n, where n is the number of times you have updated this particular pair Q(x,a).
    i.e. You need to keep another array n(x,a) to remember all these.

  3. Save the Q-values to file, and read them from file on startup, so you don't start blank each time the program is run.
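
A sketch of the full update, continuing the JavaScript above (Q, n, NOSTATES, NOACTIONS and stateIndex are reused from the previous sketch). gamma = 0.9 is just an illustrative value, and the file handling assumes Node.js; on another platform use its equivalent.

  const gamma = 0.9;                    // discount factor (illustrative value)

  // n(x,a): how many times each pair has been updated, so that alpha = 1/n.
  let n = new Array(NOSTATES);
  for ( let s = 0; s < NOSTATES; s++ )  n[s] = new Array(NOACTIONS).fill(0);

  // c = best Q-value in state y.
  function bestQ ( y )
  {
    return Math.max ( ...Q[stateIndex(y)] );
  }

  // Q(x,a) := (1-alpha) Q(x,a) + alpha ( r + gamma c ),  with alpha = 1/n(x,a).
  // y is the next state; if the game has ended there is no next state,
  // so pass y = null and c is taken as 0.
  function updateQ ( x, a, r, y )
  {
    const s = stateIndex(x);
    n[s][a-1]++;
    const alpha = 1 / n[s][a-1];
    const c = ( y == null ) ? 0 : bestQ(y);
    Q[s][a-1] = ( (1 - alpha) * Q[s][a-1] ) + ( alpha * ( r + gamma * c ) );
  }

  // Save / load Q-values so each run of the program does not start blank.
  const fs = require("fs");             // Node.js file I/O assumed

  function saveQ ( filename )
  {
    fs.writeFileSync ( filename, JSON.stringify ( { Q: Q, n: n } ) );
  }

  function loadQ ( filename )
  {
    if ( ! fs.existsSync ( filename ) )  return;    // first run: nothing to load
    const data = JSON.parse ( fs.readFileSync ( filename, "utf8" ) );
    Q = data.Q;
    n = data.n;
  }

Call loadQ at startup and saveQ at the end of each training session.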


Not this year: Use WWM World

An alternative would be to avoid writing the World, and write an RL Mind for the "TicTacToe" World by Szymon Zielinski on the WWM server.

You cannot learn online, because learning from multiple runs needs a persistent data structure between runs to hold the Q-values.

See if you can learn your Q-values offline. Then you can upload your final Mind, with its Q-values filled in, to run online.


To hand up:

  1. Write it in any language.
  2. Hand up a commented printout of your program.
    A colour, syntax-highlighted printout in landscape mode (the best way to print code).
  3. Hand up a report showing what your program does, with output and screen shots.
  4. Show Q-values half-way through learning. Show Q-values at end. Explain what it learned.
  5. Show stats of how well your program plays against a random player.
  6. Give your write-up direct to me or leave it for me in the school office.


