Friday, August 16, 2024

Pinephone Pro - Unboxing and Use Part II

I picked up the Pinephone Pro, which I had left attached to a standard USB-C charger. It was indeed sitting at 100%, so it looks like charging works fine.

The OS asked me for a PIN code to unlock the screen. Yikes. I was never prompted to set up a PIN code!

I rebooted the phone to see if I could tell which OS was on it from the boot messages, and determined that it was running the Manjaro Pinephone OS (the Phosh build):

https://github.com/manjaro-pinephone/phosh/releases

Since the Manjaro OS ships with a default PIN code, I tried it and got lucky - it hadn't been changed, and it worked. I (re)connected to WiFi and noticed that the OS prompts for my WiFi password every single time and doesn't remember it from before. Secure? Yes. Annoying? Also yes.

The form-factor issue I ran into with the Firefox browser seems to be more a Firefox problem than an OS problem: the browser window is sized wider than the phone's screen and doesn't auto-size itself to the screen dimensions, so you have to scroll left and right, which is a major hassle.

I played with the Terminal app and noticed that the logged-in user was pico-xxxx (I don't remember what the suffix is). I tried to sudo to root, but didn't know the password for this user.

Lastly, I played a video from YouTube, and the sound was very tinny, so the speaker on this phone is not high-end. I have not yet tried headphones on this device.

Since Linux mobile apps are so limited, many apps you would typically run from a dedicated client on a mobile phone will need to be run from a browser.

I am not sure Manjaro is the "right" OS to use on this phone, or if the version of the OS running is current or stale. I ordered the Docking Hub and a Micro SD Card and when those arrive, maybe I will try flashing a new/different OS on this phone.

Friday, August 9, 2024

Pinephone Pro - Unboxing and First Use

I ordered a Linux Pinephone that just arrived.

In the United States, trying to get off of Google, Apple, and even Samsung is nigh onto impossible. Carriers make a ton of money selling and promoting phones, and they have locked Linux phones out of their stores and off of their networks, because with a Linux phone nobody gets to collude and make money - whether by selling the devices (the carriers), siphoning your data through the operating system, or controlling the default browser, and so on.

There are probably numerous videos that show the unboxing of a Pinephone, so I will skip that and just make some general comments on my first experience.

When I unboxed the phone, there was no charger included. I bought this phone used on eBay, and while it came in the original box, I wasn't sure whether these phones come standard with a charger or not. The phone charges over USB-C, though, and I have plenty of those. The phone had some weight to it. The screen seemed decent quality, but the back cover looked like a cheap piece of plastic, and I could feel something pushing against it (the battery? DIP or kill switches?). As I don't yet have a SIM for it, I have not yet opened the back.

The phone did not boot up at first. I wasn't sure of the button sequences, so I downloaded the Pinephone User Guide to get going. I decided the phone probably needed to be charged and plugged it into my USB-C charger, and immediately I got a Linux boot sequence on the screen. Linux boot sequences are intimidating to just about anyone, and most certainly to a user who is not Linux-savvy.

When the boot sequence finished, the phone shut itself down again - presumably because it didn't have enough juice to boot and stay running. I left the phone on the charger, and returned to it 3-4 hours later.

When I came back, picked the phone up, and powered it on, I got the boot sequence again and it booted into the operating system. The OS was reasonably intuitive. I don't have a SIM in the phone yet, so I configured WiFi as a first step. Then I tried to set the clock; I added my city, but it still defaults to UTC.

Next I went looking to see what apps were installed. It took me a few minutes to realize that the "Discover" app is the app for finding, updating, and installing applications. The first time I tried to run Discover, it crashed. When I re-launched it, it showed me some apps; I tried to update a couple of them and got a repository error. I finally was able to update Firefox, though. Then I launched Firefox.

Right away with Firefox, I had issues with screen real estate and positioning. The browser didn't fit on the screen, and I didn't see a way to shrink it down to fit properly. After closing the second tab I had opened, I was able to use my finger to "grab" the browser and pull it around, but clearly the browser window fit, plus the lack of gyroscope-driven re-orientation when the phone is turned sideways, are going to make this browser a bit of a hassle - unless I can solve this.

I want to test out the sound quality. That's next.


Wednesday, June 26, 2024

Rocky Generic Cloud Image - Image Prep, Cloud-Init and VMware Tools

 

The process I have been using up to now has been to download the generic cloud images from the various Linux distro sites (CentOS, now Rocky). These images are pre-baked for clouds, meaning they're smaller, more efficient, and generally have cloud packages (e.g. cloud-init) already installed.

It is easier (and more efficient), in my thinking, to use one of these images than to try to take an ISO and build an image "from scratch".

The problem, though, is that "cloud images" are generally public cloud images: AWS, Azure, GCP, et al. If you are running your own private cloud on VMware, you will run into problems using these cloud images.

Today, I am having issues with the Rocky 9.5 generic cloud image.

I download the qcow2, use qemu-img convert to convert it from qcow2 to vmdk, then run ovftool against a templatized template.vmx file. That all works, but when I load the image into our CMP (which initializes VMs with cloud-init), the VM boots fine - yet cloud-init never runs, so you cannot log into the VM.
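
For reference, the pipeline looks roughly like this. This is a minimal sketch - the download URL and file names are illustrative, not necessarily the exact ones from my environment:

# Download the generic cloud image (qcow2)
wget https://dl.rockylinux.org/pub/rocky/9/images/x86_64/Rocky-9-GenericCloud-LVM.latest.x86_64.qcow2

# Convert qcow2 -> vmdk (streamOptimized is the subformat ovftool/ESXi expect)
qemu-img convert -f qcow2 -O vmdk -o subformat=streamOptimized \
    Rocky-9-GenericCloud-LVM.latest.x86_64.qcow2 Rocky-9-5-GenericCloud-LVM.vmdk

# Wrap the vmdk (plus the cloud-init seed ISO referenced in the vmx) into an OVF package
ovftool template.vmx Rocky-9-5-GenericCloud-LVM.ovf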

Here is the template.vmx.parameterized file I am using. I use sed to replace the PARM_* parameters and rename the file to template.vmx before running ovftool on it (a sketch of that substitution follows the file).

.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "11"
vmci0.present = "TRUE"
floppy0.present = "FALSE"
svga.vramSize = "16777216"
tools.upgrade.policy = "manual"
sched.cpu.units = "mhz"
sched.cpu.affinity = "all"
scsi0.virtualDev = "lsilogic"
scsi0.present = "TRUE"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "PARM_VMDK"
sched.scsi0:0.shares = "normal"
sched.scsi0:0.throughputCap = "off"
scsi0:0.present = "TRUE"
ide0:0.present = "TRUE"
ide0:0.startConnected = "TRUE"
ide0:0.fileName = "/opt/images/nfvcloud/imagegen/rocky9/cloudinit.iso"
ide0:0.deviceType = "cdrom-image"
displayName = "PARM_DISPLAYNAME"
guestOS = "PARM_GUESTOS"
vcpu.hotadd = "TRUE"
mem.hotadd = "TRUE"
bios.hddOrder = "scsi0:0"
bios.bootOrder = "cdrom,hdd"
sched.cpu.latencySensitivity = "normal"
svga.present = "TRUE"
RemoteDisplay.vnc.enabled = "FALSE"
RemoteDisplay.vnc.keymap = "us"
monitor.phys_bits_used = "42"
softPowerOff = "TRUE"
sched.cpu.min = "0"
sched.cpu.shares = "normal"
sched.mem.shares = "normal"
sched.mem.minsize = "1024"
memsize = "PARM_MEMSIZE"
migrate.encryptionMode = "opportunistic"
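
The substitution step is roughly this (a sketch - the replacement values here are just examples):

# Fill in the PARM_* placeholders, then rename to template.vmx for ovftool
sed -e 's|PARM_VMDK|Rocky-9-5-GenericCloud-LVM.vmdk|' \
    -e 's|PARM_DISPLAYNAME|Rocky-9-5-GenericCloud-LVM|' \
    -e 's|PARM_GUESTOS|rhel9-64|' \
    -e 's|PARM_MEMSIZE|4096|' \
    template.vmx.parameterized > template.vmx

ovftool template.vmx Rocky-9-5-GenericCloud-LVM.ovf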

I have tried using "cdrom,hdd" and just "hdd" for the boot order. Neither makes a difference.

When I run the ovftool program, it generates the following files, which look correct.

Rocky-9-5-GenericCloud-LVM-disk1.vmdk
Rocky-9-5-GenericCloud-LVM-file1.iso
Rocky-9-5-GenericCloud-LVM.mf
Rocky-9-5-GenericCloud-LVM.ovf

I have inspected the OVF file, and it references both the vmdk and the iso, as it should.

I also ran a utility (isoinfo) against the ISO file, and it seems to look okay: the two directories, user_data and meta_data, appear to be on there as they should be.

$ isoinfo  -i Rocky-9-5-GenericCloud-LVM-file1.iso -l

Directory listing of /
d---------   0    0    0            2048 Dec 18 2024 [     28 02]  .
d---------   0    0    0            2048 Dec 18 2024 [     28 02]  ..
d---------   0    0    0            2048 Dec 18 2024 [     30 02]  META_DAT
d---------   0    0    0            2048 Dec 18 2024 [     29 02]  USER_DAT

Directory listing of /META_DAT/
d---------   0    0    0            2048 Dec 18 2024 [     30 02]  .
d---------   0    0    0            2048 Dec 18 2024 [     28 02]  ..

Directory listing of /USER_DAT/
d---------   0    0    0            2048 Dec 18 2024 [     29 02]  .
d---------   0    0    0            2048 Dec 18 2024 [     28 02]  ..
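
For comparison, the cloud-init documentation builds a NoCloud seed ISO from two plain files named user-data and meta-data at the root of the image, with the volume label set to "cidata" (that label is how the NoCloud datasource finds the seed). A minimal sketch, with illustrative file contents:

# user-data and meta-data are files (not directories) at the ISO root
cat > user-data <<'EOF'
#cloud-config
password: changeme
chpasswd: { expire: false }
ssh_pwauth: true
EOF

cat > meta-data <<'EOF'
instance-id: rocky9-test-001
local-hostname: rocky9-test
EOF

# The "cidata" volume label is what cloud-init's NoCloud datasource looks for
genisoimage -output cloudinit.iso -volid cidata -joliet -rock user-data meta-data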

This Rocky generic cloud image does NOT have VMware Tools (the open-vm-tools package) installed on it - I checked into that. But you shouldn't need VMware Tools for cloud-init to initialize properly.

I am perplexed as to why cloud-init won't run properly, and I am about ready to drop-kick this image and consider alternative ways of generating an image for this platform. I don't understand why these images work fine on public clouds but not on VMware.

I may need to abandon this generic cloud image altogether and use another process. I am going to look at the Packer-based process documented here:

https://docs.rockylinux.org/guides/automation/templates-automation-packer-vsphere/

 

Thursday, June 20, 2024

New AI Book Arrived - Machine Learning for Algorithmic Trading

This thing is like 900 pages long.

You want to take a deep breath and make sure you're committed before you even open it.

I did check the Table of Contents and scrolled quickly through it, and I can see it's definitely a hands-on, applied-technology book using the Python programming language.

I will be blogging more about it when I get going.

 




Tuesday, June 4, 2024

What Makes an AI Chip?

I haven't been able to understand why the original chip pioneers, like Intel and AMD, have not been able to pivot in order to compete with NVidia (Stock Symbol: NVDA).

I know a few things, like the fact that when gaming became popular, NVidia made the graphics chips that had graphics acceleration and such. Graphics workloads draw polygons, and drawing polygons is geometric and trigonometric work - which requires floating-point arithmetic (non-integer math). Floating point is hard for a general-purpose CPU, so much so that classical CPUs either offloaded it to a separate math co-processor or employed other tricks to do these kinds of computations.

Now, these graphics chips are all the rage for AI, and NVidia stock has gone through the roof while Intel and AMD have been left behind.

So what does an AI chip have, that is different from an older CPU?

  • Graphics processing units (GPUs) - used mainly for training AI models
  • Field-programmable gate arrays (FPGAs) - used mainly for inference
  • Application-specific integrated circuits (ASICs) - used in various capacities of AI

CPUs make use of these technologies in one form or another, but an AI chip combines them in a highly optimized and accelerated design - things like prediction (branch prediction, for example), massive parallelism, and so on. They're simply better at running these "algorithms".

This link, by the way, from NVidia, discusses the distinction between Training and Inference:
https://blogs.nvidia.com/blog/difference-deep-learning-training-inference-ai/

The CPU makers were so bent on running Microsoft for so long - carrying forward revision after revision of the instruction set to run Windows (286 --> 386 --> 486 --> Pentium, and on and on) - that they never went back and rearchitected or came up with new chip architectures. They sat back and collected money, along with Microsoft, giving you incremental versions of the same thing - for YEARS.

When you are doing training for an AI model, and you are running algorithmic loops millions upon millions of times, the efficiency and time start to add up - and make a huge difference in $$$ (MONEY). 

So the CPU companies, in order to catch up with NVidia, would need to come up with a whole lot of chip design software. Then there are the software kits needed to develop against the chips. You also have the foundry (which uses manufacturing equipment, much of it custom to the design), and so on. Meanwhile, NVidia has its rocket off the ground with decreasing G-forces (so to speak), accelerating toward orbit. It is easy to see why the gap keeps widening.

But when you have everyone (China, Russia, Intel, AMD, ARM, et al.) racing to catch up, they will at some point catch up - I think. When NVidia slows down. We shall see.

Tuesday, April 16, 2024

What is an Application Binary Interface (ABI)?

After someone mentioned Alma Linux to me, it seemed similar to Rocky Linux, and I wondered why there would be two Linux distros doing the same thing (picking up from CentOS and remaining RHEL compatible).

I read that "Rocky Linux is a 1-to-1 binary to RHEL while AlmaLinux is Application Binary Interface-compatible with RHEL".

Wow. Now, not only did I learn about a new Linux distro, but I also have to run down what an Application Binary Interface, or ABI, is.

Referring to this Stack Overflow post: https://stackoverflow.com/questions/2171177/what-is-an-application-binary-interface-abi, I liked this "oversimplified summary":

API: "Here are all the functions you may call."

ABI: "This is how to call a function."
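
As a rough, concrete illustration of the "how to call" part: a compiled binary carries versioned symbol references that the distro's shared libraries must satisfy exactly, and that contract (symbol names, versions, calling conventions, struct layouts) is the ABI. On an RHEL-compatible system you can peek at one slice of it like this (the binary path is just an example):

# List the versioned glibc symbols this binary expects at run time; an
# ABI-compatible distro (AlmaLinux vs. RHEL, say) must provide the same ones.
objdump -T /bin/ls | grep GLIBC_ | head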

Friday, March 1, 2024

I thought MacOS was based on Linux - and apparently I was wrong!

I came across this link, which discusses some things I found interesting to learn:

  • Linux is a monolithic kernel - I had thought that, because you can load and unload kernel modules, the Linux kernel had morphed into more of a microkernel architecture. But apparently not?
  • The macOS kernel is officially known as XNU, which stands for "XNU is Not Unix."

According to Apple's GitHub page:

"XNU is a hybrid kernel combining the Mach kernel developed at Carnegie Mellon University with components from FreeBSD and C++ API for writing drivers".

Very interesting. I stand corrected now on MacOS being based on Linux.

AI / ML - Data Source Comparison with More Data

" To validate generalizability, I ran the same model architecture against two datasets: a limited Yahoo-based dataset and a deeper Stoc...