Friday, August 12, 2022

Python, pip, venv, package managers

 Upgrading an old computer from one Ubuntu LTS version to the next: 20.04 to 22.04,...

I came across an annoying error when running do-release-upgrade:

AttributeError: 'UbuntuDistroInfo' object has no attribute 'get_all' 

This shows up in other forms too; folks online complain about errors of the kind:

AttributeError: 'DistUpgradeController' object has no attribute 'tasks'


Now the usual reaction is to blame Ubuntu's release: Free Software and its lack of reliability, ...

Except that I've upgraded several computers from 20.04 to 22.04, so I know that the process works well.  Quite unlike my usual nature, I decided to investigate.

The fix: pip uninstall distro_info


I'll spare you the full investigation, because life is short.  The end result is that this is caused by Python's incredibly brittle package management.  On an Ubuntu system, you have packages installed both by the system package manager (dpkg/apt) and by pip.  And pip can install software for the whole system when run by root.  In this case, the package 'distro-info' had been installed by root using pip, and that copy shadows the distro_info module that apt installs.  Since the Ubuntu upgrader expects certain behavior from the class UbuntuDistroInfo, and the pip copy doesn't provide it, the upgrade fails early.

The fix, again: pip uninstall distro_info
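
If you want to confirm the diagnosis on your own machine before uninstalling anything, a couple of commands reveal which copy of the module Python is actually picking up (a rough sketch; the exact path on your system may differ):

$ python3 -c "import distro_info; print(distro_info.__file__)"
$ pip3 show distro-info
$ dpkg -S /usr/lib/python3/dist-packages/distro_info.py

If the first command prints a path under /usr/local/lib (pip's territory) rather than /usr/lib/python3/dist-packages (apt's territory), the pip copy is shadowing the system one.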


Having solved it, let's reflect on the incredibly broken nature of Python packaging. Why was a user unable to upgrade their Ubuntu system, and why was pip at the bottom of the mess?

First, the package namespace should be unique.  The package distro_info should either not conflict with Ubuntu's own module, or Ubuntu's package manager should pick a name that cannot conflict; call it 'ubuntu_distro_info'.  If both are maintained by the same team, then new versions should be very careful about removing methods that existed in previous versions.  Software versioning is complicated, but there are many lessons learned over the years.

Second, the parallel universes that exist between pip and dpkg/apt are a mess. root should not be allowed to pip install packages into the system-wide Python.
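
On Debian and Ubuntu the two worlds at least live in different directories, so the damage is easy to see. Roughly (paths assume a stock install):

$ ls /usr/lib/python3/dist-packages           # installed by apt
$ ls /usr/local/lib/python3.*/dist-packages   # installed by pip as root

Anything in the second directory was put there by pip running as root, and is a candidate for the kind of conflict described above.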

Third, venv is a crutch. Package maintenance is difficult, and dependencies are tied to specific versions of packages, so you need virtual environments to create each parallel universe.  This gets the job done, but pushes a lot of the maintenance headache onto the end user of the packages.  Now, in addition to the dpkg/pip mess, you have individual pip messes in subdirectories scattered all over your filesystem.
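
For completeness, here is what one of those parallel universes looks like in practice (a minimal sketch; the directory and package names are just examples):

$ python3 -m venv ~/venvs/myproject
$ source ~/venvs/myproject/bin/activate
(myproject) $ pip install somepackage==1.2.3
(myproject) $ deactivate

Every such directory carries its own copy of pip and its own package tree, which is exactly the maintenance burden that lands on the end user.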

Fourth, python versus python3. On Debian-based systems, python and python3 packages live in different namespaces, and this further confounds the issue. So if you install python-libraryname, you might also have to install python3-libraryname.
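
For example, on older Debian-based releases the same library ships as two separate packages, one per interpreter (the library name here is just an illustration):

$ sudo apt-get install python-requests    # Python 2 module
$ sudo apt-get install python3-requests   # Python 3 module

Installing one does nothing for the other interpreter's import path.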

Finally, Python errors are broken. This is a pet peeve of mine.  Python's errors give you the bottom of the stack and the full stack trace, but these are completely unhelpful in explaining what might actually be wrong. If package versions are in flux all the time, Python software should start by verifying versions and doing sanity checks.  Imagine if this failure had happened half-way through the install process, leaving a broken machine. Fragile systems need defensive programming.
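
Here is the kind of pre-flight check I mean, as a minimal sketch: before doing anything destructive, verify that the modules you depend on actually provide what you need.

$ python3 -c "from distro_info import UbuntuDistroInfo; print(hasattr(UbuntuDistroInfo(), 'get_all'))"

If that prints False, stop with a clear error message instead of blowing up with an AttributeError somewhere deep inside the upgrade.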


All this leads to an incredibly brittle software setup: packages are frozen in time (in venv directories), yet there is no single way of querying what versions of software exist on the system.  Once an environment is set up, there is little confidence that it will keep working.



Sunday, February 27, 2022

Christone "Kingfish" Ingram

I recently heard an artist called Christone Ingram, who goes by the stage name "Kingfish". He plays the blues on guitar. It blew me away: the guitar playing, the singing.  After many years, I have come across a contemporary artist who is awe-inspiring.

It all began when I chanced upon an album called "662" by someone holding a Stratocaster.  I wasn't expecting much.  What could have been just another blues musician turned out to be a genius of our time, for each song was delightful.


If you like blues, or classic rock, give it a listen.
This YouTube video, for example, gives you some idea of his level of skill. You can also preview his music on his website: the album I heard, "662", is his second; his first album is called "Kingfish".


I find 'influencers' hollow.  Many current influencers are popular solely because they're popular. Some of them might be good-looking, but they have no skill beyond that accident of birth. For every influencer that takes our attention, there are real artists: folks with skill, folks working hard. We ought to devote our time to folks who have skill, who advance art.



Sunday, February 20, 2022

Book Review: Systems Performance 2nd ed, by Brendan Gregg

Summary: "Systems Performance", by Brenden Gregg covers end-to-end performance for Linux-based systems. If you run Linux software, you will learn a lot from this book.


From its rough and loose beginnings, Linux has become a force in the commercial world. Linux is the most pervasive, most readily available system that you can experiment with.  From the $10 Raspberry Pi to multi-million-dollar Top500 supercomputers, Linux runs on everything: laptops, desktops, phones, cloud instances.

Despite widespread adoption, there is little documentation that gives a thorough understanding of system performance. I routinely see veteran engineers struggle with performance bottlenecks. Folks fall back to running 'top' and trying to infer everything from its limited output.  The easy answer is to over-provision hardware or cloud instances to cover up sloppy performance. A better answer is to get a solid understanding of end-to-end performance, and to find and eliminate bottlenecks.


"Systems Performance", by Brenden Gregg covers the entire area of end-to-end performance of all components: CPU, RAM, network, block devices.  The second edition of this book is focussed on Linux, and covers many tools and utilities that are critical to understanding every level of the stack. If you have written any software on Linux, or intend to write any software on Linux, you need a copy.

First, the good:
  1. There is an overview at the beginning, and then a deep-dive on specific system resources (CPU, RAM, block devices, network). You read the overview to understand the system at the top-level, and based on your system and bottlenecks, you can read the in-depth sections.
  2. There's coverage of pre-BPF tools (perf, sar, ftrace) in addition to the newer BPF-era tools like bcc and bpftrace. 'perf' probes are easier to use, and available on more architectures, for instance. BPF-based tools can be a slog to install, or might not have good support on fringe architectures and older kernels. No single tool can cover every need, and good engineers need to understand the full tool landscape. This book provides a wide overview of most tools.
  3. The book provides a methodical look at the full system, with tools targeting individual levels of the system components (example diagram). This process helps isolate the problem to the correct component.
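
To make that concrete, a first pass over the main resources might look something like this (standard Linux tools that the book covers in depth; the particular sequence is my sketch, not a prescription from the book):

$ uptime          # load averages: is the CPU saturated?
$ vmstat 1        # memory, swap, and run queue at a glance
$ iostat -xz 1    # block device utilization and latency
$ sar -n DEV 1    # network interface throughput

Each tool targets one level of the stack, and together they narrow the problem down to the component worth a deep-dive.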

The not-so-good:
  1. The book is repetitive. Since it expects some readers will start reading a deep-dive, it repeats the USE methodology at the start of most chapters. Folks reading it cover-to-cover will find themselves wondering if they have seen the material already.
  2. Print quality is worse than the previous edition. The fonts are thin and dim, the pages bleed through, and the graphs need more contrast. The first edition was a high quality printed book, and the second edition is worse in this department. Since this is a reference book, a physical copy is better than an ebook. You will mark pages, put sticky notes, and highlight tools that are more pertinent to your work. Luckily, the binding holds up to heavy use.
    I really hope the third edition gets better print quality and comes hard-bound.

Every software engineer should be familiar with end-to-end performance: how to think about it, how to locate trouble spots, and how to improve the system.  This book will give you a firm foundation of performance that should help on most desktop, server, and cloud systems. 

You will probably not get this understanding from a scattershot reading of online documentation and Stack Overflow articles. Online articles are limited in scope and accuracy, and don't provide a comprehensive view of how to think about performance. This topic deserves a book-length treatment.


Image courtesy: Brendan Gregg


Monday, January 03, 2022

Tensorflow 2.8 and Jax 0.1.76 for NO AVX cpus

In what has become a tradition, I compiled Tensorflow for my no-AVX CPU.  This time, the installation was more complicated because of a dependency on jaxlib. I had installed jax either through pip3 or through Debian's repositories (the apt-get tool); either way, the jaxlib that came with it was compiled with AVX support and would not work on my computer.

So I spent some time getting Jax sources and compiling those without AVX support.

Here are the two files for older Intel CPUs:

jaxlib-0.1.76-cp38-none-manylinux2010_x86_64.whl

tensorflow-2.8.0rc0-cp38-cp38-linux_x86_64.whl


Unless you have compiled your own jaxlib, you will need to download both. The jaxlib wheel should work alongside the regular 'jax' install from pip3, since, as I understand it, the jax package contains only Python code and no native code.

You could also use just the jaxlib wheel, without the Tensorflow one, if you only want to play with Jax.

To install them, download the whl files to disk, and run 

pip3 install filenameHere.whl
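
Once installed, a quick import check confirms that Python picks up the new wheels (a sketch; the version strings will match whatever you downloaded):

$ python3 -c "import jaxlib, tensorflow; print(jaxlib.__version__, tensorflow.__version__)"

If this prints the versions instead of dying with 'Illegal instruction', the no-AVX build is working.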


These were compiled on a cpu with the following flags in the output of /proc/cpuinfo:

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault cat_l2 ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms mpx rdt_a rdseed smap clflushopt intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts md_clear arch_capabilities
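
To check whether your own CPU is in the same boat, count the lines mentioning AVX in that file; a count of 0 means no AVX support, and these wheels are for you:

$ grep -c avx /proc/cpuinfo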


Both these wheels work great on my Core 2 Duo and another Pentium CPU without AVX support.  Compiled with Python 3.8, they should work on most Linux distributions, assuming the dependencies (numpy, absl-py, scipy, flatbuffers, tensorboard, ...) are installed. pip3 should fetch the dependencies you don't have. None of the dependencies contain native code that requires AVX instructions.



Wednesday, June 02, 2021

Tensorflow 2.5 without AVX

 I was playing with Tensorflow, needed a new version, and realized that a Tensorflow release without AVX still doesn't exist.  My prior post on Tensorflow without AVX had been beneficial to people, so here's Tensorflow 2.5 for Linux.

Tensorflow 2.5 for Linux without AVX support (155 Megabytes)

and the prior link

Tensorflow 2.3 for Linux without AVX support (126 Megabytes)


Download the file, and then run

$ pip3 install -U filename.whl

Friday, March 12, 2021

Cloud of DOS machines

tl;dr:  ssh guestdos@dos.eggwall.com


Retro computing is in. New computers and new systems are always fun. But to really appreciate the arc of history, use something old. Something ancient. Something you never really learned.

To help you get your retro computing fix, here's a bank of DOS machines. Rather than MS-DOS (copyrighted, etc.), I've got FreeDOS, an open-source implementation of DOS. There are the editors 'edit' and 'edlin'. 'foxcalc' is a sweet calculator. 'help' will tell you what to do. For the Assembly geeks, there is always 'debug'.

This is running on my custom cloud of DOS machines. Connect to your instance today. You can use either ssh or telnet:

$ ssh guestdos@dos.eggwall.com


Telnet works too:

$ telnet dos.eggwall.com

Username: 'guestdos', no password.


When you are done, type 'halt' and travel back to the present time.


DOS


Look around the file system. There are compilers for C and Pascal, an IDE, emacs, vi, BASIC, and some games. Using a system this old gives you an appreciation for systems today: you miss the many conveniences you take for granted.  You are also closer to the machine; you can modify arbitrary memory addresses. Nothing gets in your way.

Powered by the FreeDOS project, and magic.