Monday, November 27, 2023

Quantum Computing - Is it worth an investment of my time to understand?

I recently watched a presentation on Quantum Computing by a guy who looks, to me, like he follows it as a fascination or hobby. After watching the presentation, I did some quick searching on Quantum Computing for a few things I was specifically looking for that I didn't see covered - especially along practical lines.

I found this site, which essentially corroborated his presentation:

https://www.explainthatstuff.com/quantum-computing.html

Absolute KUDOS to the author of this site, because he explains an advanced topic in a simple way, and covers many of the top-of-the-head questions one might have about Quantum Computing. If you can stay patient and get down to the bottom, he shows a patent diagram of a Quantum Computing architecture, which is super interesting.

https://cdn4.explainthatstuff.com/monroe-kim-quantum-computer.png

He also makes this statement, which is very interesting:

"Does that mean quantum computers are better than conventional ones? Not exactly. Apart from Shor's algorithm, and a search method called Grover's algorithm, hardly any other algorithms have been discovered that would be better performed by quantum methods."

I remembered reading a blurb about Einstein some years back, and his comments about "Spooky Action At a Distance", where an electron 'way over here' seems to be inextricably and unexplainably linked to another electron "way over there". Even today we don't seem to have a full or proven explanation of why that behavior happens, yet it is apparently reliable enough to exploit for purposes of Quantum Computing (the key concept here is Entanglement). Through complex state manipulation (see Schrödinger's Cat), Entanglement unlocks massively parallel computing. I won't even attempt to go deeper than this in this post.
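The "spooky" correlation itself can be illustrated with a few lines of plain Python. This is a toy sketch of the standard Bell state (my own variable names), sampling measurement outcomes from the squared amplitudes:

```python
import random

# Amplitudes over the two-qubit basis states |00>, |01>, |10>, |11>.
# The Bell state (|00> + |11>) / sqrt(2) is a maximally entangled pair.
amps = [2 ** -0.5, 0.0, 0.0, 2 ** -0.5]
probs = [a * a for a in amps]  # Born rule: probability = |amplitude|^2

random.seed(42)
basis = ["00", "01", "10", "11"]
samples = random.choices(basis, weights=probs, k=1000)

# Measuring one qubit instantly tells you the other: outcomes always agree,
# no matter how far apart the two particles are.
print(all(s[0] == s[1] for s in samples))  # True
```

Of course, this classical simulation sidesteps the real mystery (why nature behaves this way), but it shows the perfect correlation that Einstein found so spooky.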

Now...why do we want atomic-level computing, with entangled states and the whole bit?

The main Use Case for this level of expense and sophistication is cryptography - the ability to break ciphers. Shor's algorithm threatens public-key ciphers like RSA, and Grover's search effectively halves the strength of symmetric keys - so even 256-bit AES would only offer roughly 128-bit security against a quantum attacker.

But I wanted to see if there were others, and found this site, which kind of shows "who is doing what" with regards to Quantum Computing.

Quantum Computing Applications

I think that between these links, you can be brought up to speed on what Quantum Computing is, why it is a Thing, and some potential uses for it - which is what most of us essentially want at this point.

Thursday, November 16, 2023

Artificial Intelligence Book 1 - Crash Course in AI - Chapter 13 - Memory Patch

Okay, last chapter in the book!

In this chapter, you get to "create" (actually, download and run) some GitHub-hosted code that allows you to train a model to learn how to play the video game "Snake".

Snake is an early video game, probably from the 1970s or 1980s. I don't know the details of it but I am sure there is plenty of history on it. I think you could run it on those Radio Shack Tandy TRS-80 computers that had a few kilobytes of RAM and saved to a magnetic cassette tape (I remember you could play Pong, and I think Snake was one of them also).

The idea was that each time the snake ate an apple (red square) the snake's length would increase (by one square). You could move up, down, left, right constrained by coordinate boundaries, and if the snake overlapped with itself, it died and the game ended.
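The rules above fit in a few lines of Python. This is a minimal sketch of my own (not the book's code): the snake is a list of (x, y) squares with the head first, eating the apple grows it by one square, and moving into a wall or into itself ends the game.

```python
GRID = 10  # assumed 10x10 board for illustration

def step(snake, direction, apple):
    """Advance one move; returns (snake, apple_eaten, alive)."""
    dx, dy = {"up": (0, -1), "down": (0, 1),
              "left": (-1, 0), "right": (1, 0)}[direction]
    head = (snake[0][0] + dx, snake[0][1] + dy)
    out_of_bounds = not (0 <= head[0] < GRID and 0 <= head[1] < GRID)
    if out_of_bounds or head in snake:
        return snake, False, False          # snake died, game over
    ate = head == apple
    body = snake if ate else snake[:-1]     # grow by one square when eating
    return [head] + body, ate, True

snake = [(5, 5), (4, 5)]
snake, ate, alive = step(snake, "right", apple=(6, 5))
print(len(snake), ate, alive)  # 3 True True
```

The reinforcement-learning model in the chapter is essentially learning a policy for which of the four directions to feed into a step function like this one.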

Snake Video Game

When I first ran the model training for this, it ran for more than a day - perhaps all weekend, and then died. The command prompt, when I returned to check on progress, had a [ Killed ] message.

I had other models in this book die this way, and decided that I was running out of memory, and my solution to the other models was to edit the source code, and decrease the number of Epochs, and reduce the loop complexity. This made the models a LOT less efficient and reliable, but I still saw beneficial results from running them with this tactic.

In this case, for some reason, I went to GitHub and looked at the Issues, and I saw a guy complaining about a Memory Leak in the TensorFlow libraries. There was a patch to fix this!

Below is the output of the Unix/Linux "diff" command, which shows this patch (note that the first file, train.py, is the patched version, and train.py.memoryleak is a copy of the original):

% diff train.py train.py.memoryleak
5d4
< import tensorflow as tf
12,15d10
< import gc
< import os
< import keras
<
64,67c59
<             #qvalues = model.predict(currentState)[0]
<             qvalues = model.predict(tf.convert_to_tensor(currentState))[0]
<             gc.collect()
<             keras.backend.clear_session()
---
>             qvalues = model.predict(currentState)[0]

So in summary, the patch consists of:

  • Replacing the original statement qvalues = model.predict(currentState)[0] with qvalues = model.predict(tf.convert_to_tensor(currentState))[0].
  • Adding a garbage collection call, gc.collect(), inside the loop.
  • Adding a Keras call, keras.backend.clear_session(), to release accumulated graph state.

Of course some imports are necessary to reference and use these new calls. 
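Pieced together from the diff, the patched section of the training loop looks roughly like the sketch below. I'm using a stub in place of the real Keras model so the sketch runs standalone; the real code imports tensorflow, gc, and keras as shown in the diff, and the TensorFlow-specific lines are left as comments:

```python
import gc

class StubModel:
    """Stand-in for the Keras model so this sketch runs without TensorFlow."""
    def predict(self, state):
        return [[0.0, 0.5, 0.25, 0.25]]  # one Q-value per snake action

model = StubModel()
currentState = [[0] * 12]  # placeholder for the game-state vector

for episode in range(3):
    # Patched line - the real code wraps the state in a tensor first:
    # qvalues = model.predict(tf.convert_to_tensor(currentState))[0]
    qvalues = model.predict(currentState)[0]
    gc.collect()                      # patch: free leaked Python objects
    # keras.backend.clear_session()   # patch: drop accumulated graph state

print(len(qvalues))  # 4
```

The pattern - convert the input to a tensor once, then explicitly collect garbage and clear the Keras session each iteration - is what keeps memory flat over a long training run.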

This fixes the memory problem. It does not appear that the training will ever end on its own when you run this code. You have to Ctrl-C it to get it to stop, because it just trains and trains, looking for a better score and more apples. I had to learn this the hard way after running train.py for a full weekend.

So this wraps up the book for me. I may do some review on it, and will likely move on to some new code samples and other books.
