Thursday, October 5, 2023

Artificial Intelligence Book 1 - Crash Course in AI - Deep Q Learning

I read about Q Learning and was feeling somewhat proud of myself for sticking my toe into the water.

Then I read about Deep Q Learning - in this same book - and it was as if someone dumped a bucket of ice water over my head. I went into the tunnel - advancing through Chapters 9, 10 and 11 - only to come out the other end confused ("what did I just read?").

The coding examples were interesting enough that I kept pushing forward, but a lot of what the code is doing is masked by the fact that the math and formulas are hidden away in libraries like Keras. So while I thought the examples were cool, and I had a grasp of the problems they were attempting to solve, I still came out the other end with confusion and question marks in my head.

Q Learning vs Deep Q Learning

In Chapter 9, which covers Deep Q Learning, things start to get very complex very fast. So what is the difference between Q Learning (introduced in Chapters 7 and 8) and Deep Q Learning?

  • Deep Q Learning tackles more complex problems, with many more input variables
  • The approach used to solve those more complex problems

With regards to the approach to solving problems, the book gets into a good discussion - worth mentioning here - about the difference between ArgMax and SoftMax.

ArgMax vs SoftMax

In Q Learning, the 'name of the game' was to find (and use) the highest Q value. This is referred to as "Exploitative" behavior and is known as the ArgMax method of Reinforcement Learning.

In Deep Q Learning, probability distributions across several variables are continually updated during the training of the model. You have a set of (input) variables, each with specific weights, but as you take random samples and compare the predicted value to the actual value, the weights are updated according to the new realities (results). This process is Explorative (in nature) and is named the SoftMax method.
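
To make the distinction concrete, here is a minimal sketch of both selection strategies over a handful of Q values (my own illustration, not the book's code; the temperature parameter tau is an assumption I added to control how explorative SoftMax is):

    import numpy as np

    q_values = np.array([1.0, 2.0, 0.5])  # example Q values for 3 actions

    # ArgMax (Exploitative): always pick the action with the highest Q value
    best_action = np.argmax(q_values)

    # SoftMax (Explorative): turn the Q values into a probability
    # distribution and sample from it, so lower-valued actions
    # still get tried from time to time.
    tau = 1.0  # temperature (my assumption): higher = more exploration
    probs = np.exp(q_values / tau) / np.sum(np.exp(q_values / tau))
    sampled_action = np.random.choice(len(q_values), p=probs)

    print(best_action, probs, sampled_action)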

Chapter 9 starts you off simple(r), with a Real Estate example of predicting home prices. This seems sensible enough, since we can all think of input variables that help drive the price of a home. The focus here is on showing the process, which is broken down into the following steps (a rough code sketch follows the list):

  1. Uploading the Data Set (actual home prices)
  2. Building the Neural Network
  3. Training the Neural Network
  4. Displaying the Results
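
As a rough sketch of steps 1 through 4 (the fake data, feature count and layer sizes here are my assumptions, not the book's actual code, which loads a real home-price data set):

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    # Step 1 stand-in: synthetic data (the book uploads actual home prices)
    X = np.random.rand(1000, 5)  # 5 input features per home (assumption)
    y = X @ np.array([3.0, 1.5, 2.0, 0.5, 4.0]) + np.random.rand(1000) * 0.1

    # Step 2: build the neural network (input -> hidden -> output)
    model = Sequential([
        Dense(64, activation='relu', input_shape=(5,)),
        Dense(32, activation='relu'),
        Dense(1)                  # single output: the predicted price
    ])
    model.compile(optimizer='adam', loss='mse')

    # Step 3: train the neural network
    history = model.fit(X, y, epochs=10, batch_size=32, verbose=0)

    # Step 4: display the results
    print(history.history['loss'][-1])   # final training loss
    print(model.predict(X[:3]))          # predictions for a few homes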

From here, the book advances into Deep Learning theory. The idea borrows from the human brain, which is connected by synapses that send signals. This is the fundamental concept behind the "deep" part of Deep Q Learning: the network is built from a number of "Layers". There is a minimum of three layers, as follows:

  1. Input Layer - consists of the input values, each of which gets a weight that is continually adjusted
  2. Hidden Layer(s) - these "neurons" are also continually adjusted
  3. Output Layer - this layer compares predicted values to actual values and computes the Loss Error

The loss error gets back-propagated through the layers, continually re-adjusting the weights, using a technique called Gradient Descent (which requires at least a basic understanding of Calculus). The book covers three types of Gradient Descent (Batch, Stochastic and Mini-Batch).
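
The core update is easier to see in code than in prose. Here is a tiny hand-rolled illustration (mine, not the book's) of one Gradient Descent step on a single weight, with the derivative of the squared loss worked out by hand:

    # One gradient descent step for a 1-weight model: prediction = w * x
    # Loss = (prediction - actual)^2, so dLoss/dw = 2 * (prediction - actual) * x
    w = 0.5                  # current weight
    x, actual = 3.0, 6.0     # one training sample (true weight would be 2.0)
    learning_rate = 0.01

    prediction = w * x
    gradient = 2 * (prediction - actual) * x   # back-propagated error signal
    w = w - learning_rate * gradient           # step downhill on the loss

    print(w)  # weight nudged toward 2.0

Batch Gradient Descent does this update using all samples at once, Stochastic does it one sample at a time, and Mini-Batch uses small groups of samples.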

The book also mentions Activation Functions, which take weighted input values and return an output value. It names three of these, which sound intimidating, but if you are familiar with Electronics and/or Trigonometry, the names actually make some sense (a small sketch of all three follows the list):

  • Sigmoid Activation Function - a smooth S-shaped (logistic) curve that squashes values so the output goes no lower than 0 and no higher than 1
  • Rectifier Activation Function - outputs 0 for negative inputs and passes positive inputs through linearly, giving an angular "hinge" shape
  • Threshold Activation Function - an abrupt binary transition from 0 to 1, much like flipping a switch into an on/off state
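
Here is a quick numpy sketch (my own, not from the book) that makes the shape differences obvious:

    import numpy as np

    def sigmoid(x):
        # smooth S-curve: output squashed between 0 and 1
        return 1.0 / (1.0 + np.exp(-x))

    def rectifier(x):
        # ReLU: zero for negative inputs, linear pass-through for positive
        return np.maximum(0.0, x)

    def threshold(x):
        # abrupt on/off switch at zero
        return np.where(x >= 0, 1.0, 0.0)

    xs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(sigmoid(xs))
    print(rectifier(xs))
    print(threshold(xs))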

Now from here (Chapter 9), the book goes into Chapter 10 - Self Driving Car - which is an implementation of the Chapter 9 concepts, and quite fun to do. Then it dives into Chapter 11, which takes its example from Google's DeepMind project and optimizes server temperature with a simulation.

Chapter 11 in particular really drives home the process by showing how you can optimize (minimize) energy costs. It breaks the work into five steps (a sketch of the DropOut and Early Stopping pieces follows the list):

  1. Building the Environment
  2. Building the Brain - using DropOut vs NoDropOut techniques
  3. Implementation (of the Deep Learning Algorithm)
  4. Training the AI - using either Early Stopping or No Early Stopping
  5. Testing the AI
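
For steps 2 and 4, the DropOut and Early Stopping pieces map onto standard Keras features. A hedged sketch (the layer sizes, dropout rate and patience value are my assumptions, not necessarily the book's exact choices):

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Dropout
    from tensorflow.keras.callbacks import EarlyStopping

    # Step 2: the "Brain" - Dropout randomly disables a fraction of
    # neurons during training, which helps prevent over-fitting
    brain = Sequential([
        Dense(64, activation='sigmoid', input_shape=(3,)),
        Dropout(0.1),
        Dense(32, activation='sigmoid'),
        Dropout(0.1),
        Dense(5, activation='softmax')   # one output per possible action
    ])
    brain.compile(optimizer='adam', loss='mse')

    # Step 4: Early Stopping halts training when the loss stops
    # improving, instead of always running the full number of epochs
    early_stop = EarlyStopping(monitor='loss', patience=3)
    # brain.fit(X, y, epochs=100, callbacks=[early_stop])  # with real data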

Seeing is believing, and when you watch this code run and the results start to appear, I have to admit it is pretty darn cool.

It also takes a LONG time to run. I had to shorten the training from 100 epochs to 25 to keep the job from getting killed (I am not sure what exactly was killing it). Running for 100 epochs was taking my laptop HOURS to finish (2-3 hours). But at the end of each epoch, the energy savings from the AI were almost always superior to the energy savings without the AI (which in this case is modeled by the server's mainboard temperature controller).

There's so much more to discuss, but I think I have hit the highlights here.
