Showing posts with label Memory. Show all posts

Thursday, November 16, 2023

Artificial Intelligence Book 1 - Crash Course in AI - Chapter 13 - Memory Patch

Okay, last chapter in the book!

In this chapter, you get to "create" (in practice, "create" means download and run) some GitHub-hosted code that lets you train a model to learn how to play the video game "Snake".

Snake is an early video game, probably from the 1970s or 1980s. I don't know the details of it, but I am sure there is plenty of history on it. I think you could run it on those Radio Shack Tandy TRS-80 computers that had a tiny amount of RAM and saved programs to a magnetic cassette tape (I remember you could play Pong on them, and I think Snake was one of the games also).

The idea was that each time the snake ate an apple (a red square), the snake's length would increase by one square. You could move up, down, left, or right, constrained by the board's coordinate boundaries, and if the snake overlapped with itself, it died and the game ended.

Snake Video Game
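The rules described above can be sketched in a few lines of Python. This is a hypothetical minimal version for illustration, not the code from the book's repository: the snake is a deque of (x, y) cells, eating an apple grows it by one cell, and moving off the board or into its own body ends the game.

```python
from collections import deque

def step(snake, direction, apple, width, height):
    """Advance the snake one move; return (alive, ate_apple)."""
    head_x, head_y = snake[0]
    dx, dy = direction
    new_head = (head_x + dx, head_y + dy)

    # Death conditions: leaving the board, or overlapping the body.
    out_of_bounds = not (0 <= new_head[0] < width and 0 <= new_head[1] < height)
    if out_of_bounds or new_head in snake:
        return False, False

    snake.appendleft(new_head)
    if new_head == apple:
        return True, True   # grew by one cell: the tail stays put
    snake.pop()             # no apple: tail advances, length unchanged
    return True, False
```

A reinforcement-learning agent like the one in this chapter just picks the `direction` argument each timestep and is rewarded for apples eaten before dying.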

When I first ran the model training for this, it ran for more than a day - perhaps all weekend - and then died. When I returned to check on progress, the command prompt showed a [ Killed ] message.

I had other models in this book die this way, and decided that I was running out of memory. My workaround for those other models was to edit the source code to decrease the number of epochs and reduce the loop counts. This made the models a LOT less effective and reliable, but I still saw beneficial results from running them with this tactic.

In this case, for some reason, I went to GitHub and looked at the Issues, and I saw someone complaining about a memory leak in the TensorFlow libraries. There was a patch to fix this!

Below is a Unix/Linux "diff" command, which shows this patch:

% diff train.py train.py.memoryleak
5d4
< import tensorflow as tf
12,15d10
< import gc
< import os
< import keras
<
64,67c59
<             #qvalues = model.predict(currentState)[0]
<             qvalues = model.predict(tf.convert_to_tensor(currentState))[0]
<             gc.collect()
<             keras.backend.clear_session()
---
>             qvalues = model.predict(currentState)[0]

So in summary, the patches are:

  • The original statement qvalues = model.predict(currentState)[0] is replaced by: 
    • qvalues = model.predict(tf.convert_to_tensor(currentState))[0]
  • A garbage collection statement, gc.collect(), is added. 
  • A Keras library call, keras.backend.clear_session(), is added.

Of course some imports are necessary to reference and use these new calls. 
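Putting the patch together, the prediction step looks roughly like the sketch below. The model here is a hypothetical stand-in built just so the snippet is self-contained; the real model and its surrounding training loop come from the book's train.py.

```python
import gc

import numpy as np
import tensorflow as tf
import keras

# Hypothetical stand-in model (the real one is built in the book's train.py).
model = keras.Sequential([keras.layers.Input(shape=(8,)), keras.layers.Dense(4)])
model.compile(optimizer="adam", loss="mse")

currentState = np.zeros((1, 8), dtype=np.float32)

# Patched prediction step from the diff: convert the NumPy state to a
# tensor before predict(), then release leaked objects each iteration.
qvalues = model.predict(tf.convert_to_tensor(currentState), verbose=0)[0]
gc.collect()
keras.backend.clear_session()
```

Inside a long training loop, the gc.collect() and clear_session() calls are what keep memory from creeping up over hundreds of thousands of predict() calls.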

This fixes the memory problem. It does not appear that the training will ever end on its own when you run this code. You have to Ctrl-C it to get it to stop, because it just trains and trains, looking for a better score and more apples. I had to learn this the hard way after running train.py for a full weekend.

So this wraps up the book for me. I may do some review on it, and will likely move on to some new code samples and other books.

Friday, October 20, 2023

Deep Q Learning - Neural Networks - Training the Model Takes Resources

I am now starting to see why those companies with deep pockets have an unfair advantage on the not-so-level playing field of adopting AI: resources.

It takes a LOT of energy and computing resources to train these Artificial Intelligence models.

In Chapter 11 of AI Crash Course (by Hadelin de Ponteves), I did the work. I downloaded, inspected, and ran the examples, which are based on Google's DeepMind project. The idea is to use an AI to control server temperature, and compare this with an "internal" (non-AI) temperature manager.

What you do is train the model first, which produces a model.h5 file; that file is then used when you run the actual model through testing.
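The train-then-test workflow can be sketched with standard Keras calls. The tiny model and random data below are hypothetical stand-ins, not the book's code; the point is just the save/reload split between the training run and the testing run.

```python
import numpy as np
import keras

# Training phase: fit a (toy) model, then save it to model.h5.
model = keras.Sequential([keras.layers.Input(shape=(3,)), keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
X = np.random.rand(32, 3).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
model.save("model.h5")

# Testing phase: a separate script reloads the saved model for evaluation.
restored = keras.models.load_model("model.h5")
pred = restored.predict(X[:1], verbose=0)
```

This is why the failed training runs were so costly: until training finishes and model.h5 is written, there is nothing for the testing phase to load.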

The problem, though, is that on my rather powerful MacBook Pro laptop, the training would never finish. I would return HOURS later, only to see [ Killed ] on the command prompt. The OS was apparently killing the process as it ran out of resources (probably memory).

So I started tinkering with the code.

First, I reduced the number of epochs (from 25 to 10). 

#number_epochs = 25  

number_epochs = 10

This looked like it helped, but ultimately it wasn't enough.

Then, I reduced the number of times the training loops would run. When I looked at the original code, the number of iterations was enormous.

# STARTING THE LOOP OVER ALL THE TIMESTEPS (1 Timestep = 1 Minute) IN ONE EPOCH

while ((not game_over) and timestep <= 5 * 30 * 24 * 60):

This is 216,000 iterations of the inner loop, and of course this needs to be considered in the context of the outer loop (25 epochs, or 10 as I adjusted it down). So 216,000 * 25 = 5,400,000 iterations; reducing the number of epochs to 10 still leaves 2,160,000.
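The arithmetic above can be checked directly:

```python
# One epoch = 5 months of one-minute timesteps.
timesteps_per_epoch = 5 * 30 * 24 * 60     # 216,000

original_total = timesteps_per_epoch * 25  # original epoch count
reduced_total = timesteps_per_epoch * 10   # after reducing epochs to 10

print(timesteps_per_epoch, original_total, reduced_total)
```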

I don't know how much memory (Heap) is used over that many iterations but on a consumer machine, you are probably going to tax it pretty hard (remember it has to run the OS and whatever tasks happen to be running on it).

I was FINALLY able to get this to run by reducing the number of Epochs to 10, and reducing the steps to 5 * 30 * 24 (3600). And even with this drastic reduction, you could see the benefits the AI performed over the non-AI temperature control mechanism.
