Building an E-Ink Calendar Display

A few weeks ago I saw a link to this project on Daring Fireball and it really caught my eye, so I decided to have a go at doing something similar based around the same Waveshare 7.5 inch screen. Over the course of the last few evenings and weekends, this is what I’ve produced so far:

A future in industrial design is not beckoning….

The Enclosure
Before I get into the development, I’ll cover the blindingly obvious: the enclosure is a bit of a mess. Those are soldering iron marks across the bottom of the frame for the screen. Having never tried it before, I thought it would be an interesting sub-project to use some CAD software to design an enclosure and order up a 3D print. It turns out that ‘having a go’ at CAD is about as good an idea as ‘having a go’ at parking an aircraft carrier. Anyway, in the process of doing battle with the software on my iPad I managed to get an internal ‘y’ dimension wrong by a whopping 5mm, and ended up having to carve an extra cavity to slot the screen into. So, to cover that mini crime scene, what was intended as the top of the enclosure is now on the underside and covered with gaffer tape.

Let’s pretend none of this happened and move on…

The Screen
As covered in the original post, there are significant limitations with this E-Ink screen. I don’t mind about the refresh rate: as I’ll explain in more detail, I currently only update it once every 15 minutes. What the original author doesn’t mention, and what I found much more restrictive, is the 1-bit colour depth. I now know more about typefaces than I ever thought I’d need to in my life. On a ‘normal’ screen, and assuming the text is black, most fonts – as a sized, weighted and rendered instance of a typeface – are displayed with anti-aliasing: the shades of grey that smooth out the jagged edges of any line that isn’t horizontal or vertical. With no access to those shades, jagged edges are the order of the day. I toyed with using dithering instead, but the results were worse. While you are obviously free to use any typeface you like, the results are going to be variations on the theme of ‘awful’. I tried a few – several resources suggested Verdana, but it looked a bit meh – and ended up using a TrueType font that comes with the sample code for the screen.
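If you want to see the trade-off for yourself, it comes down to a couple of lines of Pillow. A minimal sketch of the comparison I was making (the file names are placeholders):

from PIL import Image

# a greyscale render of some text (placeholder file name)
img = Image.open('rendered_text.png').convert('L')

# hard threshold to 1 bit: solid strokes, jagged edges
mono = img.convert('1', dither=Image.NONE)

# Floyd-Steinberg dithering: approximates the greys with speckled pixels,
# which looked worse than the jagged edges for small text
# (newer Pillow spells these Image.Dither.NONE / Image.Dither.FLOYDSTEINBERG)
dithered = img.convert('1', dither=Image.FLOYDSTEINBERG)

mono.save('mono.bmp')
dithered.save('dithered.bmp')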

The TL;DR Version

I have two Python scripts, running on an old Raspberry Pi 3 (which I use for other Philips Hue control related stuff such as this). The first is a lightly modified Google example script which pulls down the next 10 events from a shared calendar. It writes the data out both as a pipe-separated file for some post-processing, and as an HTML table. [Update 17/04/22: all other things being equal, the example script stops working after a week. See this for more info.]
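For illustration, the output step of that first script boils down to something like this – the file names and the shape of the events list here are my own stand-ins rather than the actual code:

# events as they look after the date 'flattening' described further down
events = [
    {'start': '2022-04-18T19:30:00+01:00', 'summary': 'Parents evening'},
    {'start': '2022-04-20T12:00:00+01:00', 'summary': 'Birthday (flattened to midday)'},
]

with open('events.psv', 'w') as psv, open('events.html', 'w') as html:
    html.write('<table>\n')
    for event in events:
        # pipe-separated line for the later post-processing...
        psv.write(f"{event['start']}|{event['summary']}\n")
        # ...and a row in the HTML table that gets rendered for the screen
        html.write(f"<tr><td>{event['start']}</td><td>{event['summary']}</td></tr>\n")
    html.write('</table>\n')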

That post-processing is performed by a second Python script, which also pulls a BBC RSS feed and eventually renders the various content as a 1-bit bitmap for the screen to display. Both scripts are called from cron: the calendar processing twice a day, and the rendering every 15 minutes. The two main tables alternate in prominence.

Warts And All…

I’ve included some detail on the parts of the implementation that I tried and which didn’t work. Part of what made the project interesting was figuring out what was viable in terms of the screen’s display capability and, generally, revising downwards.

Starting Point for the Calendar Content

The initial motivation for this was a tongue in cheek attempt to convert my wife from using a wall calendar. What I thought might be an interesting approach would be to use Selenium and chromedriver to authenticate to Google Calendar, screenshot the month, crop it down to size and convert it.

Authentication

It appears that Google may block this approach anyway. Regardless, I dismissed it out of hand: I hadn’t thought through the implications of 2FA, so that was a showstopper (just… no, don’t :). What I settled on was the Google Calendar API, using this example code from Google. It requires you to configure a Cloud project and then set up credentials for OAuth2.

So: you run the script on the command line, which initiates the Authorisation Code grant flow and prints a URL to standard out. You paste this into your browser of choice – I copied it from a PuTTY window into Chrome on my PC – and authenticate with Google. It then redirects you to a URL on localhost (not running, in my case). I copied that URL from the browser, put it in quotes to stop the shell interpreting the ampersands in the path, prefixed it with the wget command and pasted it back into a separate ssh window. This populates a JSON file with an access and refresh token, which are used without any browser interaction going forward.

It sounds convoluted but it’s actually straightforward in practice, and it’s a common enough approach for demonstrating grant flows without full application integration. (I’ve actually written something in the same ballpark myself, although my implementation was a lot simpler.) It’s also a lot more ‘joined up’ if you are running the browser on your Pi, as the localhost redirect will resolve to the little server the code spools up for the duration.
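For reference, the token handling in the example code follows Google’s usual quickstart pattern; trimmed down, it looks roughly like this (credentials.json and token.json are the quickstart’s default file names):

import os.path

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ['https://www.googleapis.com/auth/calendar.readonly']

creds = None
if os.path.exists('token.json'):
    # reuse the cached access/refresh token from the first run
    creds = Credentials.from_authorized_user_file('token.json', SCOPES)
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    else:
        # first run: prints the auth URL and waits on the localhost redirect
        flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
        creds = flow.run_local_server(port=0)
    with open('token.json', 'w') as token:
        token.write(creds.to_json())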

Update 17/04/22: unless you ‘publish‘ the client, the example code will only be allowed to haul down tokens which expire after a week. I’ve gone with a simpler alternative, discussed here. Back to the original post…

I’ve made a couple of minor tweaks to the code. One is to change from the default ‘primary’ calendar to a shared one. You can find the calendarId string using the ‘try this method’ feature built into the API documentation.
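With the credentials from above, the tweaked call ends up looking something like this – the calendar ID is obviously a placeholder:

import datetime

from googleapiclient.discovery import build

service = build('calendar', 'v3', credentials=creds)
now = datetime.datetime.utcnow().isoformat() + 'Z'  # 'Z' indicates UTC

events_result = service.events().list(
    calendarId='your-shared-calendar-id@group.calendar.google.com',
    timeMin=now,
    maxResults=10,
    singleEvents=True,
    orderBy='startTime'
).execute()
events = events_result.get('items', [])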

The second change is writing the event data out to a couple of files for onward processing. In order to make that as simple as possible, I ‘flatten out’ the dates that the API returns. There is a gotcha here: some events – recurring all-day ones like birthdays, in my case – come back with a date but no start time, so:

import datetime

# 'start' is the raw start string from the Calendar API, e.g.
# event['start'].get('dateTime', event['start'].get('date'))
try:
    # timed events come back as a full timestamp with an offset
    formattedDate = datetime.datetime.strptime(start, '%Y-%m-%dT%H:%M:%S%z')
except ValueError:
    # recurring all-day events like birthdays just have a date,
    # so default them to midday
    formattedDate = datetime.datetime.strptime(start, '%Y-%m-%d')
    formattedDate = formattedDate.replace(hour=12)

I hadn’t expected this at all, particularly as recurring events are shown at a time on every Google Calendar UI I’ve seen. The change in date format only emerged when I started adding birthdays to the shared calendar, which broke the original parsing (which had no ‘try’).

Content Display (and wasting more time with Selenium)
While I’d abandoned using Selenium and chromedriver for the Google Calendar screen grab, I hadn’t completely given up on it for other content rendering. It’s actually quite convoluted to get working. Using pyvirtualdisplay to convince the operating system that you’ve got a display to run X on is straightforward enough, but the versions of chromedriver and Chrome itself have to match up exactly. At the time of writing the former lags the latest Chrome release, and getting a slightly older version to work required a workaround.

The thing that finally made me abandon it was that something I had working one day (browser.find_element_by_xpath to click on ‘Most Read’ on the BBC website) stopped working a day or two later. I never got to the bottom of why chromedriver said it couldn’t find the element, and eventually realised that RSS would be an awful lot simpler.
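And it is simpler: with the feedparser module, pulling the headlines amounts to a couple of lines. The feed URL below is the standard BBC front-page one, and the number of headlines is just what fits my layout:

import feedparser

feed = feedparser.parse('http://feeds.bbci.co.uk/news/rss.xml')

# take the first few headlines for the display
headlines = [entry.title for entry in feed.entries[:5]]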

Up until that point I’d also been using chromedriver to render and screenshot the HTML table for the calendar events. I decided to use imgkit for this instead. It has its foibles, specifically with UTF-8 characters, so I may swap it out at some point.
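For what it’s worth, the imgkit call itself is pretty much a one-liner; it needs the wkhtmltoimage binary installed separately, and the width here is just whatever suits my layout:

import imgkit

# render the HTML table written by the calendar script to a PNG
imgkit.from_file('events.html', 'events.png',
                 options={'format': 'png', 'width': 400})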

As I alluded to earlier, with the benefit of hindsight the whole Selenium approach reads like a bad idea (or, to put it another way, utter madness) from the get-go. It reflects the fact that, over the couple of weeks I was working on the project, there was a lot of trial and error in figuring out what the screen was capable of displaying. As the 1-bit colour depth suggests, the answer to that is ‘not much’.

The last significant decision was how to present the two main pieces of content: the BBC headlines and the calendar events. Because of the lack of anti-aliasing, bigger text is better. Rather than just display one at a time, I decided to overlap them and flip them every 15 minutes; either way round, you get to see the most news- or time-significant information. I’ve got some clunky code that reads and writes a single word to a file – either ‘beeb’ or ‘cal’ – indicating which was last on top. Depending on whose turn it is, I set the frame coordinates and, most importantly, the order in which I call PIL’s Image paste method.
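Stripped of the real layout maths, that flip logic looks something like this. The file names, coordinates and the 800x480 resolution (the V2 7.5 inch panel; the V1 is 640x384) are from my setup rather than anything canonical:

from PIL import Image

# which table was on top last time round? the file just holds 'beeb' or 'cal'
with open('last_top.txt') as f:
    last_top = f.read().strip()

beeb = Image.open('headlines.png').convert('1')
cal = Image.open('events.png').convert('1')
canvas = Image.new('1', (800, 480), 255)

if last_top == 'beeb':
    # the calendar's turn on top: paste it last so it overlaps the headlines
    canvas.paste(beeb, (0, 200))
    canvas.paste(cal, (0, 0))
    next_top = 'cal'
else:
    canvas.paste(cal, (0, 200))
    canvas.paste(beeb, (0, 0))
    next_top = 'beeb'

with open('last_top.txt', 'w') as f:
    f.write(next_top)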

The last part was putting something useful in the opposite corners that the main tiles of content leave free. The date was a no-brainer. The ‘up next’ item took a little more work and, at the moment, is the only consumer of the serialised event data.
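The ‘up next’ lookup is just a scan over that serialised data. Assuming the flattened start dates are written out in ISO format with an offset (which is how my stand-in examples above have them), it’s along these lines:

import datetime

now = datetime.datetime.now().astimezone()
up_next = None

with open('events.psv') as f:
    for line in f:
        start, summary = line.rstrip('\n').split('|', 1)
        when = datetime.datetime.strptime(start, '%Y-%m-%dT%H:%M:%S%z')
        # keep the earliest event that hasn't started yet
        if when > now and (up_next is None or when < up_next[0]):
            up_next = (when, summary)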

Tidying up

As well as the appearance of the table, there are a few stragglers that I may get round to tidying up. I check the length of the headlines before I draw them, truncating them and adding an ellipsis if they hit a character limit. I have no idea if there is any kerning magically happening with the TrueType font. I don’t check any of the other lengths – and PIL’s ImageDraw will render exactly what it’s given, which could get interesting with longer event titles.
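The headline truncation itself is nothing more than a character-count check, along these lines (the limit is whatever happens to fit the tile):

MAX_CHARS = 60  # arbitrary: whatever fits the headline tile

def truncate(text, limit=MAX_CHARS):
    # chop over-long strings and add an ellipsis so ImageDraw.text()
    # never runs off the edge of the bitmap
    if len(text) <= limit:
        return text
    return text[:limit - 3].rstrip() + '...'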

Another possible enhancement is very specific to my implementation, but brings up a feature of the API call: it decides whether or not to return event data based on the end time, and there doesn’t seem to be a way of changing this to the start time. That would have been handy for me, as I import calendar data from an external source which creates all-day events that aren’t actually useful after the start date has passed. As the start time is returned by the API (recurring events notwithstanding), it would be trivial to post-process this, as sketched below.
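Something along these lines would do it, assuming the start has already been parsed into a datetime and you’ve noted which events came back date-only:

import datetime

def still_relevant(start, is_all_day):
    # hypothetical post-processing filter: the API only drops events once
    # their *end* time has passed, so discard all-day events whose start
    # date is already behind us
    if is_all_day:
        return start.date() >= datetime.date.today()
    return True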

The Implementation

There’s nothing terribly exciting about the implementation, but the two Python scripts are here. As ever, the repo is principally intended for my own backup purposes. The various dependencies (sourced from git, pip3 and apt-get), while not documented explicitly, are all fairly self-evident.

Jenkins Container on Kubernetes

For ease of deployment, but without wanting to dive straight into Helm (just yet), I decided to try to stand up a very simple / crude Kubernetes deployment based on Jenkins. Needless to say, it wasn’t as simple as I’d hoped: I ran into a number of permission-related problems.

On first pass, I got a log message saying that Jenkins didn’t have permission to write to /var/jenkins_home/copy_reference_file.log. This post suggested a quick and easy fix, which was to run the container as root – that translates to runAsUser 0 in a securityContext definition – and, per the recommendation (which includes caveats), the permission problem went away. While the pod started correctly, I then had persistent ‘connection refused’ problems, even when I tried on localhost from within the container. I suspected (possibly incorrectly) that this was related to the root perms so, based on this issue, I removed the block, deleted everything already installed under the persistent volume, and chown’ed it to 1000 (the ubuntu user). This fixed the perms problem, but I was still getting connection refused.

What I suspect the actual problem was: I was trying to map a LoadBalancer service definition onto port 80, which the non-privileged user 1000 didn’t have perms for. Changing this to the default of 8080 worked. This is the working spec. I’m slightly suspicious about the use of the environment variable for the volume but, as I say, it works.

Running Kubernetes on a Raspberry Pi Cluster

TL;DR:

  • MicroK8s: eminently stable and usable.
  • Q: “Will I end up with something usable when I deploy a workload?” A: It depends on both your budget and design choices.
  • Persistent Volumes: the trickiest part, if you need them.

I have spent some spare time over the last few weeks building up a cluster of 3 Raspberry Pis, and then installing Kubernetes on it using MicroK8s.

One point worth getting out of the way up front: if you have stumbled here via Google and are interested in finding out whether or not you will have something practical and usable at the end of the exercise, unfortunately there’s no clear-cut answer. On first pass, I didn’t, and that was for a simple workload definition. On a system which may be resource-bound to start with, the decisions you make on the software stack and on hardware and budget (storage arguably above all else) are going to have more of an impact than in other environments.

Another reason you might land here is that there isn’t a huge amount of documentation once you get off the beaten track. What I’m principally going to write about is a couple of problems that I found tough to solve. I’m by no means an expert in Kubernetes – the whole point of this exercise is to learn more about it – and a better-informed person might well have trivially avoided the territory I got myself into.

And to conclude the scene setting, unless you have exceptionally broad experience (I don’t, having lingered at layer 7 for all of my career), it’s a fair bet that you will bump up against technical puzzles that are either outside your comfort zone, or at least the reason that might be driving your interest in a project like this in the first place.

I have linked to working examples on my GitHub repo throughout. I’ve not generalised them (e.g. references to NFS paths, etc.) because, to all intents and purposes, they are my own backups – but there might be some useful pointers in them and in what I’ve documented below.

Here is the list of hardware that I’m currently running:

  • Cluster master (also file and database server): Pi 4B with 8GB RAM. Storage is a Samsung T5 250GB SSD.
  • 2x worker nodes: Pi 4B with 4GB RAM. Storage for both is a SanDisk Extreme Pro 32GB micro SD.
  • Power: Anker 63W 5-Port USB charger.
  • Network: TP-Link 5-Port Gigabit Ethernet managed switch.
  • Case: Acrylic rack with cooling fans.

Power
Early in the build process, I started to run into problems with reboots on the master node when it was under load, which were almost certainly due to power. I saw a recommendation for the Anker charger in a blog post on a cluster build; after I bought the adapter, I also saw a comment on Amazon specifically saying to avoid it for the same use. Having tried a number of different options including swapping to a dedicated charger, what ended up stabilising the issue was changing the USB C cable that I was using. My research was inconclusive on whether or not the cables make a difference. There are potentially a few different moving parts here and, to cut to the chase, changing it worked for me (for now!).

Storage Attempt #1: Samba
Without a shadow of a doubt, if your K8s use case requires persistent volumes, figuring out an approach and then getting it working is the most complicated part of the cluster setup. Part of my reason for deciding to use the SSD on the higher-spec master was to run some sort of file server on the same hardware. My test case – WordPress: not ground-breaking, but enough to exercise some general principles – needs 3 separate volumes: one for the webroot, one for changing the default file upload size in /usr/local/php/conf.d/, and finally one for an Apache configuration directive, which I’ll come back to shortly (the short version: it’s CIFS-related).

I made an arbitrary decision to use Samba for the file server, because of a half-formed idea that it might be handy for working on files from my PC as well as for mounting the same directories into pods as volumes. I also use it quite a lot elsewhere, e.g. for transferring data between VMs and my PC. I tried a couple of different K8s storage drivers which I couldn’t get working before finding the CIFS FlexVolume plugin, which was easy to set up and configure.

Database
At the time of writing, there appears to be no support for ARMv8 in the official MySQL container build. I spent about a week’s worth of spare time trying to make my own, complicated by the fact that I decided it would be a fabulous idea to mount the persistent store for the database over CIFS. It isn’t, so I’m pretending it didn’t happen, and am just using the MariaDB container off the shelf. I run this on the same Pi as the master, but as a standalone container outside MicroK8s, accessed via a service.

MicroK8s
It was the toss of a coin whether to go with MicroK8s or K3s. Having gone with the former, it seems pretty solid. I had to do a couple of reinstalls after doing some daft newbie stuff like changing the names and IPs of the nodes after they had joined the cluster.

Load Balancer using MetalLB
This introduced a problem that was difficult to diagnose: it turned out that the ARP mapping for the dynamically assigned IP wasn’t propagating. I eventually stumbled on the answer by expanding all of the comments below the original question here, which in turn points at this GitHub issue: putting all the WiFi cards into promiscuous mode worked for me.

CIFS and Apache
I went round the houses on this one and, to be clear, it has nothing to do with K8s. I’d gotten to a point where everything seemed to be working but, for some reason, all of the images that I was trying to load from WordPress were broken when the browser rendered them. Trying to refer to them directly by full URL caused the browser to download them (again, rather than trying to display them).

I had an idea early on to drop one of the downloaded files into an editor, just to see if there was anything obviously wrong. While I love Sublime, it tripped me up at this point: it tries to display images rather than showing the binary content, and because the image was corrupted it just displayed a ‘loading image…’ message indefinitely.

I then tried Postman, which showed something I’d never seen before: ‘Parse Error: Expected HTTP/’. Even Wireshark wasn’t much help, just displaying a warning that there were ‘illegal characters found in header name’. That threw me completely: I thought there might be a bug in the WiFi driver (based on the earlier issue with ARP propagation). At this point I bought the switch and abandoned WiFi in favour of wired ethernet, which had the sole effect of delivering the corrupt binary files faster.

Having inadvisedly tried to run a database over CIFS right at the start, I then tried googling to see if the file server might be a contributory factor, and fairly soon after found this. Sure enough, opening one of the corrupt image files in Notepad showed the headers incorrectly prepended to the binary content:

19:25:54 GMT
ETag: "400d-5bf13149eaf08"
Accept-Ranges: bytes
Content-Length: 16397
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: image/png

So, the third of the 3 volumes that I mount into my WordPress container spec is an httpd.conf file containing the directive EnableMMAP off.

I’ve been using Apache off and on since its first release in the mid-90s, and have never come across anything like this before. Then again, you are – hopefully! – much less likely to be doing this sort of weird plumbing in a work environment, which is what throws up problems like this.

Working
This is my working spec for Samba, which:

  • creates a service of type LoadBalancer;
  • refers to the service definition for the database, which is running as an external container;
  • refers to a secret, which is for MySQL;
  • has the three Samba directory mounts defined (each of which itself needs to refer to a secret).

(Note the reference to 5.6 for the WordPress container version, as the latest version seems to have problems constructing the database config on first run. Also note, per the comments immediately below, that this is not a particularly useful example!)

Storage Attempt #2: NFS
Having gotten this far, I decided I should have a look at NFS, which I rejected early on because it was clearly an even worse idea(!?!) than Samba for mounting into MySQL.

I did a little bit of research into performance differences. While there are plenty of side-by-side comparisons like this, there wasn’t anything that sounded terribly compelling. That said, the load performance I was seeing over Samba was poor to the point of unusable: say 7+ seconds to render the WordPress homepage, which has very little content straight out of the box.

I was initially confused by the difference between using persistent volume claims and mounting volumes directly into the pod spec. This was the best explanation of the configuration that I found, and my understanding is that a claim carves out some space in the volume, which starts blank. If you want to start with, say, a bunch of configuration files (or, as I was, you are picking up a webroot already installed over Samba), you are better off mounting directly into the pod. While the blank starting state might not be strictly true, I couldn’t see any other way round it.

NFS, it turns out, isn’t without its foibles. On Ubuntu, just doing a test mount of an exported directory straight out of the install guide doesn’t work: you have to specify v3 as a command-line option. The example command on this page that works for the client is:

sudo mount -o v3 a-nfs-server:/path/to/export /path/to/mount

A bit of digging made it clear that the NFS client MicroK8s uses is v4. This is no doubt for good reason, as the host-based model that v3 uses to authenticate client requests isn’t very robust. Unfortunately, I wasn’t able to find a way of passing a version option into the volume definitions. Back on the Stack Exchange link, there is an answer for getting v4 working, which looks like this in /etc/exports:

/Path/to/export 192.168.1.0/24(rw,sync,fsid=0,no_root_squash,crossmnt,no_subtree_check,no_acl)

…with the appropriate emphasis being on the no_acl config option. This works, but is obviously wildly unsuitable for industrial use cases.

Claims

So this config works for me with the pod-spec-based volumes, which is the only difference from the Samba variant above. FWIW, it is much faster than using CIFS, and eminently usable – possibly because it is unencumbered by any security(!).

For completeness, this is what I set up for the claims-based approach, starting with the persistent volumes. These are vaguely notable for requiring labels, which are used as selectors by the claims and are necessary when you have more than a single volume; the claims are in turn referenced by the pod spec.

And there you have it. I’ll add a separate post about some Linkerd stuff, such as how to bind the dashboard to a load balancer, as it took me a while to figure out.