Wednesday, August 9, 2023

Artificial Intelligence Book 1 - Crash Course in AI - Q Learning

The Crash Course in AI presents a fun project designed to build familiarity with the principles of Q Learning.  

It presents the scenario of Robots in a Maze (a warehouse), with the idea that the AI will learn the optimal path through the maze.

To do this, the following procedure is followed:

  1. Define a Location-to-State Mapping, where each location (a letter: A, B, C, etc.) is mapped to an integer value (A=0, B=1, C=2, D=3, and so on); a code sketch of this mapping appears after the matrix below.
  2. Define the Actions (each location is a possible action, represented by its integer value), which are represented as an array of integers.
  3. Define the Rewards - here, each square in the maze has certain adjacent squares that constitute a valid "move". 
     The set of reward arrays is an "array of arrays", and we know that an "array of arrays" is 
     essentially a matrix! Hence, we can refer to this large "rule set" as a Rewards Matrix.  
        
     import numpy as np

     # Defining the rewards
     R = np.array([[0,1,0,0,0,0,0,0,0,0,0,0],   # A's only valid move is to B
                   [1,0,1,0,0,1,0,0,0,0,0,0],   # B's valid moves are to A, C, F
                   [0,1,0,0,0,0,1,0,0,0,0,0],   # C's valid moves are to B, G
                   [0,0,0,0,0,0,0,1,0,0,0,0],   # D's only valid move is to H
                   [0,0,0,0,0,0,0,0,1,0,0,0],   # E's only valid move is to I
                   [0,1,0,0,0,0,0,0,0,1,0,0],   # F's valid moves are to B, J
                   [0,0,1,0,0,0,1,1,0,0,0,0],   # G's valid moves are to C, G, H
                   [0,0,0,1,0,0,1,0,0,0,0,1],   # H's valid moves are to D, G, L
                   [0,0,0,0,1,0,0,0,0,1,0,0],   # I's valid moves are to E, J
                   [0,0,0,0,0,1,0,0,1,0,1,0],   # J's valid moves are to F, I, K
                   [0,0,0,0,0,0,0,0,0,1,0,1],   # K's valid moves are to J, L
                   [0,0,0,0,0,0,0,1,0,0,1,0]])  # L's valid moves are to H, K
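
For reference, here is a minimal sketch of what steps 1 and 2 look like in code (the variable names location_to_state and actions are my shorthand and may not match the book exactly):

     # Step 1: map each location letter to an integer state
     location_to_state = {'A': 0, 'B': 1, 'C': 2,  'D': 3,
                          'E': 4, 'F': 5, 'G': 6,  'H': 7,
                          'I': 8, 'J': 9, 'K': 10, 'L': 11}

     # Step 2: the actions are simply the integer states the robot can move to
     actions = list(range(12))   # [0, 1, 2, ..., 11]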

So this matrix of "ones and zeroes" governs the "rules of the road" for the maze. In fact, you could draw the maze out graphically based on these rules alone.

Now - from a simple perspective, armed with this information, you can feed a starting and ending location into the "Engine", and it will compute the optimal path for you. In cases where there are two optimal paths, it may give you one or the other.
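
To make that concrete, here is a rough sketch of how the "Engine" might turn a trained Q-table into a path (this is my own illustration, not the book's code; it assumes the location_to_state mapping above and a Q matrix that has already been trained, as shown further down):

     # Reverse mapping so we can translate states back into location letters
     state_to_location = {state: location for location, state in location_to_state.items()}

     def route(starting_location, ending_location, Q):
         # From the current state, repeatedly hop to the neighbouring state
         # with the highest Q-value until the ending location is reached.
         path = [starting_location]
         current = starting_location
         while current != ending_location:
             next_state = int(np.argmax(Q[location_to_state[current], :]))
             current = state_to_location[next_state]
             path.append(current)
         return path

Calling route('E', 'G', Q) would then return one of the two equally good paths, for example ['E', 'I', 'J', 'F', 'B', 'C', 'G'].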

But how does it do this? How does it "know"?

This gets into two key concepts that feed into an equation known as the Bellman Equation.
  • Temporal Difference - a measure of how well (or how fast) the AI (model) is learning
  • Q-Value - an indicator of which choices led to greater rewards
If we consider that models like this might have thousands or even millions of X/Y coordinate data points (remember, it is a geographic warehouse model), it is not scalable for the AI to store every permutation of paths as it works through the model. What the Bellman Equation does is let each state carry a single, recursively computed value, so that when we land on coordinate X,Y we know how good the steps that led to X,Y were, without storing the full history.

Basically, before we start traversing the maze, all Q-values are initialized to zero. As we traverse the maze, the model calculates the Temporal Difference; if it is high, the model flags it as a reward, while if it is low, it is flagged as a "frustration". High values early on are "pleasant surprises" to the model. So, in summary, as the maze is traversed, the TD is calculated, followed by a Q-value adjustment (the Q-value for that state/action combination, to be precise).
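
Written out in the standard Q-Learning form (alpha is the learning rate and gamma is the discount factor; both show up again below), those two steps are:

     TD(s, a) = R(s, a) + gamma * max over a' of Q(s', a') - Q(s, a)
     Q(s, a)  = Q(s, a) + alpha * TD(s, a)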

Now...before I forget to mention this, the Rewards Matrix needs to be adjusted to reflect the ideal ending location. For example, if the maze run was to begin at point E and end at point G, the cell of the matrix at (row G, column G) would need to hold a huge value that tells the AI to stop there and go no further. You can see this in the coding example of the book:

# Optimize the ending state with the ultimate reward
R_new[ending_state, ending_state] = 1000
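
For that line to work, R_new and ending_state have to exist already. My reconstruction of that setup (an assumption on my part, pieced together from the variables above rather than quoted from the book) looks like:

     ending_location = 'G'                               # wherever the robot should finish
     ending_state = location_to_state[ending_location]   # e.g. G = 6
     R_new = np.copy(R)                                  # working copy of the Rewards Matrix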

I have to admit - I started coding before I fully read and digested what was in the book. I got tripped up by two variables in the code: Alpha and Gamma. Alpha was coded as .9, while Gamma was coded as .75. I was very confused by these constants: what they were and why they were used. 

I had to go back into the book.
  • Alpha - the Learning Rate (how strongly each new Temporal Difference adjusts the Q-value)
  • Gamma - the Discount Factor (how much weight future rewards carry relative to immediate ones)
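
Putting it all together, here is a minimal sketch of the training loop as I understand it (the structure is standard Q-Learning and the constants match the book's values; the variable names, random seed, and the 1000-iteration count are my own assumptions):

     # Learning constants from the book's code
     alpha = 0.9    # learning rate
     gamma = 0.75   # discount factor

     # Q-values all start at zero
     Q = np.zeros((12, 12))

     # Train by repeatedly sampling random state/action pairs
     np.random.seed(42)
     for _ in range(1000):
         current_state = np.random.randint(0, 12)
         # Only consider actions that are legal moves from this state
         playable_actions = [a for a in range(12) if R_new[current_state, a] > 0]
         next_state = np.random.choice(playable_actions)
         # Temporal Difference, then the Q-value adjustment
         TD = R_new[current_state, next_state] + gamma * np.max(Q[next_state, :]) - Q[current_state, next_state]
         Q[current_state, next_state] += alpha * TD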

Hey, this AI stuff - these algorithms, they're all about Mathematics (as well as Statistics and Probability). I am not a Mathematician, and only took intermediate Calculus, so some of us really need to concentrate and put our thinking caps on if we truly want to follow the math behind these models and equations.


