Sunday, February 27, 2022

Christone "Kingfish" Ingram

I recently heard an artist called Christone Ingram, who goes by the stage name "Kingfish". He plays the blues on guitar. It blew me away: the guitar playing, the singing. After many years, I have come across a contemporary artist who is awe-inspiring.

It all began when I chanced upon an album called "662" by someone holding a Stratocaster.  I wasn't expecting much.  What could have been just another blues musician turned out to be a genius of our time, for each song was delightful.

If you like blues or classic rock, give it a listen. This YouTube video, for example, gives you some idea of his level of skill. You can also preview his music on his website: the album I heard, "662", is his second; his first is called "Kingfish".

I find 'influencers' hollow.  Many current influencers are popular solely because they're popular. Some of them might be good looking, but they have no skill beyond that accident of birth. For every influencer who grabs our attention, there are real artists: folks with skill, folks working hard. We ought to devote our time to folks who have skill, who advance the art.

Sunday, February 20, 2022

Book Review: Systems Performance 2nd ed, by Brendan Gregg

Summary: "Systems Performance", by Brendan Gregg, covers end-to-end performance for Linux-based systems. If you run Linux software, you will learn a lot from this book.

From its rough and loose beginnings, Linux has become a force in the commercial world. It is the most pervasive, most readily available system you can experiment with. From the $10 Raspberry Pi to multi-million-dollar Top 500 supercomputers, Linux runs on everything: laptops, desktops, phones, cloud instances.

Despite widespread adoption, there is little documentation for getting a thorough understanding of system performance. I routinely see veteran engineers struggle with performance bottlenecks. Folks revert to running 'top' and trying to infer everything from its limited output. The easy answer is to over-provision hardware or cloud instances to cover up sloppy performance. A better answer is to get a solid understanding of end-to-end performance, and to find and eliminate bottlenecks.

"Systems Performance", by Brendan Gregg, covers the entire area of end-to-end performance across all components: CPU, RAM, network, block devices. The second edition of this book is focused on Linux, and covers many tools and utilities that are critical to understanding every level of the stack. If you have written any software on Linux, or intend to write any software on Linux, you need a copy.

First, the good:
  1. There is an overview at the beginning, and then a deep-dive on specific system resources (CPU, RAM, block devices, network). You read the overview to understand the system at the top-level, and based on your system and bottlenecks, you can read the in-depth sections.
  2. There's coverage of pre-BPF tools (perf, sar, ftrace) in addition to the newer BPF-era tools like bcc and bpftrace. For instance, 'perf' probes are easier to use and available on more architectures, while BPF-based tools can be a slog to install, or might not have good support on fringe architectures and older kernels. No single tool can cover every need, and good engineers need to understand the full tool landscape. This book provides a wide overview of most tools.
  3. The book provides a methodical look at the full system, with tools targeting individual levels of the system components (example diagram). This process helps isolate the problem to the correct component.

The not-so-good:
  1. The book is repetitive. Since it expects some readers will start reading a deep-dive, it repeats the USE methodology at the start of most chapters. Folks reading it cover-to-cover will find themselves wondering if they have seen the material already.
  2. Print quality is worse than the previous edition. The fonts are thin and dim, the pages bleed through, and the graphs need more contrast. The first edition was a high-quality printed book, and the second edition is worse in this department. Since this is a reference book, a physical copy is better than an ebook. You will mark pages, put sticky notes, and highlight tools that are more pertinent to your work. Luckily, the binding holds up to heavy use.
    I really hope the third edition comes with better print quality, and is hard-bound.

Every software engineer should be familiar with end-to-end performance: how to think about it, how to locate trouble spots, and how to improve the system.  This book will give you a firm foundation of performance that should help on most desktop, server, and cloud systems. 

You will probably not get this understanding from a scattershot reading of online documentation and Stack Overflow articles. Online articles are limited in scope and accuracy, and don't provide a comprehensive view of how to think about performance. This topic deserves a book-length treatment.

Image Courtesy: Brendan Gregg

Monday, January 03, 2022

TensorFlow 2.8 and Jax 0.1.76 for no-AVX CPUs

In what has become a tradition, I compiled TensorFlow for my no-AVX CPU.  This time, the installation was more complicated because of a dependency on jaxlib. I had installed jax either through pip3 or through Debian's repositories (via apt-get). That jaxlib was compiled with AVX support and would not work on my computer.

So I spent some time getting the Jax sources and compiling them without AVX support.

Here are the two files for older Intel CPUs:



Unless you have compiled your own jaxlib, you will need to download both. The jaxlib wheel should work with the stock 'jax' install from pip3 since, as I understand it, the jax library contains only Python code and no native code.

You could also use the jaxlib in isolation for playing with Jax.

To install them, download the whl files to disk, and run 

pip3 install filenameHere.whl
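Once installed, you can confirm that pip actually picked up the packages. This is just a quick sketch using the standard library's importlib.metadata (Python 3.8+); it assumes the usual distribution names 'jaxlib', 'jax', and 'tensorflow':

```python
# Check which of the relevant packages pip sees after installing the wheels.
# importlib.metadata ships with the standard library from Python 3.8 on.
from importlib import metadata

def installed_version(package):
    """Return the installed version string, or None if the package is absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

for pkg in ("jaxlib", "jax", "tensorflow"):
    print(pkg, installed_version(pkg) or "not installed")
```

If jaxlib or tensorflow shows "not installed", the wheel install did not take.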

These were compiled on a CPU with the following flags in the output of /proc/cpuinfo:

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault cat_l2 ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms mpx rdt_a rdseed smap clflushopt intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts md_clear arch_capabilities
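Note that 'avx' is absent from that list. If you want to check whether your own machine needs these builds, you can look for the 'avx' token in /proc/cpuinfo; a small sketch (Linux only):

```python
# Does this CPU advertise AVX? 'avx' must appear as a standalone token on a
# 'flags' line; 'avx2' and 'avx512f' are separate tokens, so a plain
# substring search would give false positives.
def cpu_has_avx(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and "avx" in line.split():
            return True
    return False

# On Linux, feed it the real file:
# with open("/proc/cpuinfo") as f:
#     print("AVX present" if cpu_has_avx(f.read()) else "no AVX: use these wheels")
```

If this reports no AVX, the stock PyPI wheels will crash with "Illegal instruction" and you need no-AVX builds like these.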

Both these wheels work great on my Core 2 Duo and on another Pentium CPU without AVX support.  Compiled with Python 3.8, they should work on most Linux distributions, assuming the dependencies (numpy, absl-py, scipy, flatbuffers, tensorboard, ...) are installed. pip3 should fetch the dependencies you don't have. None of the dependencies contain native code that requires AVX instructions.