Friday, August 12, 2022

Python, pip, venv, package managers

Upgrading an old computer from one Ubuntu LTS release to the next: 20.04 to 22.04.

I came across an annoying error when running do-release-upgrade:

AttributeError: 'UbuntuDistroInfo' object has no attribute 'get_all' 

The problem shows up in other forms too; folks online complain about errors of the kind:

AttributeError: 'DistUpgradeController' object has no attribute 'tasks'

Now the usual reaction is to blame Ubuntu's release: Free Software and its supposed lack of reliability, and so on.

Except that I've upgraded several computers from 20.04 to 22.04, so I know that the process works well.  Quite unlike my usual nature, I decided to investigate.

The fix: pip uninstall distro_info

I'll spare you the full investigation, because life is short.  The end result is that this is caused by Python's incredibly brittle package management.  On an Ubuntu system, you have packages installed both by the system package manager (dpkg/apt) and by pip.  And pip, when run by root, can install software for the whole system.  In this case, the package 'distro-info' had been installed by root using pip, and that copy shadows the distro_info module that Ubuntu ships.  Since the release upgrader expects certain behavior from the class UbuntuDistroInfo, the upgrade fails early.
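If you want to check for this kind of shadowing yourself, here is a minimal sketch. The `which_module` helper is something I'm introducing for illustration; it just asks Python which file an import would resolve to, which is decided by sys.path order. On Ubuntu, the directory root's pip installs into (/usr/local/lib/python3.X/dist-packages) is searched before apt's /usr/lib/python3/dist-packages, so pip's copy wins.

```python
# Minimal sketch: ask Python which file an import would actually load.
import importlib.util

def which_module(name):
    """Return the file that 'import name' would load, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# On the broken machine, this pointed into /usr/local/... (pip's copy)
# instead of /usr/lib/python3/dist-packages (apt's copy):
print(which_module("distro_info"))
```

A path under /usr/local is the tell-tale sign that pip, not apt, owns the module you're importing.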

The fix, again: pip uninstall distro_info

Having solved it, let's reflect on the incredibly broken nature of Python packaging. Why was a user unable to upgrade their Ubuntu system, and why was pip at the bottom of the mess?

First, the package namespace should be unique.  Either the pip and Ubuntu versions of distro_info should not conflict, or Ubuntu's package manager should ship it under a name that cannot conflict, say 'ubuntu_distro_info'.  If both are maintained by the same team, then new versions should be very careful about removing methods that previous versions provided.  Software versioning is complicated, but there are many lessons learned over the years.

Second, the parallel universes that exist between pip and dpkg/apt are a mess. root should not be allowed to pip install packages into the system Python.

Third, venv is a crutch. Package maintenance is difficult, and dependencies are tied to specific versions of packages, and so you need virtual environments to create each parallel universe.  This gets the job done, but pushes a lot of maintenance headache to the end-user of the packages.  Now, in addition to the dpkg/pip mess, you have individual pip messes in subdirectories scattered all over your filesystem.

Fourth, python versus python3. On Debian-based systems, python and python3 packages live in different namespaces, which further confounds the issue. So if you install python-libraryname, you might also have to install python3-libraryname.

Finally, Python errors are unhelpful. This is a pet peeve of mine.  A Python failure gives you the exception at the bottom of the stack, plus the full stack trace, but neither explains what might actually be involved. If package versions are in flux all the time, Python software should start by verifying versions and sanity-checking its environment. Imagine if this failure had happened halfway through the install process, leaving a broken machine. Fragile systems need defensive programming.

All this leads to an incredibly brittle software setup: packages are frozen in time (in venv directories), but there is no single way of querying what versions of software exist on the system.  Once an environment is set up, there is little confidence that it will continue working.