Permanent programmatic access to Google Calendar

Following on from this post about setting up an e-Ink screen with calendar based content, what the Google quickstart example doesn’t call out is the significance of this setting in the OAuth Consent screen:

OAuth Consent

If you leave the ‘Publishing status’ set to ‘Testing’, the access and refresh tokens expire after 7 days. Also, the way the out-of-the-box script works, you need to delete the token file before re-running the script, which requires you to re-authenticate.

The process of publishing means that the script is open to the public, and requires a review by Google – clearly not appropriate for this project.

I tried a couple of different approaches to work around this, starting with service account impersonation – i.e., having a service account that impersonates the end user account (mine). While it seems to be possible to do this, you need to use Google Workspace features which I don’t have a subscription for.

The alternative approach that I’ve taken is simply to grant the service account read-only access in the Google Calendar interface:

Add the service account…

I’ve exported the credentials for the service account to a JSON file, and altered the original quickstart example to use it. It’s actually a lot simpler: it should (!) run without any manual intervention at all.

The details of the authentication that the service account is using are hidden by the Google Python library, but my working assumption is that it’s using the private key as a secret and presenting a token via the Client Credentials grant. I’ve added the new version of the quickstart file which uses service account authentication to my GitHub repo.
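As a sketch of where the altered quickstart ends up – hedged, since the key file name is a placeholder and this assumes the google-auth and google-api-python-client packages:

```python
SCOPES = ["https://www.googleapis.com/auth/calendar.readonly"]

def build_calendar_service(key_file="service-account.json"):
    """Build a Calendar API client from an exported service account key.

    Imports are deferred so the sketch can be read without the
    google-auth / google-api-python-client packages installed.
    """
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_file(
        key_file, scopes=SCOPES)
    # No token file and no browser round trip: the library performs the
    # token exchange itself using the service account's private key.
    return build("calendar", "v3", credentials=creds)
```

The crucial difference from the original quickstart is that there is nothing to expire or re-authorise manually, which is what makes it suitable for an unattended device.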

Building an E-Ink Calendar Display

A few weeks ago I saw a link to this project on Daring Fireball and it really caught my eye, so I decided to have a go at doing something similar based around the same WaveShare 7.5 inch screen. So over the course of the last few evenings and weekends, what I’ve produced so far is this:

A future in industrial design is not beckoning….

The Enclosure
Before I get into the development, I’ll cover the blindingly obvious: the enclosure is a bit of a mess. Those are soldering iron marks across the bottom of the frame for the screen. Having never tried it before, I thought it would be an interesting sub-project to use some CAD software to design an enclosure and order up a 3D print. It turns out that ‘having a go’ at CAD is about as good an idea as ‘having a go’ at parking an aircraft carrier. Anyway, in the process of doing battle with the software on my iPad I managed to get an internal ‘y’ dimension wrong by a whopping 5mm, and ended up having to carve an extra cavity to slot the screen into. So, to cover that mini crime scene, what was intended as the top of the enclosure is now on the underside and covered with gaffer tape.

Let’s pretend none of this happened and move on…

The Screen
As covered on the original posting, there are significant limitations with this E-Ink screen. I don’t mind about the refresh: as I will go on to explain in more detail, I currently only update it once every 15 minutes. What the original author doesn’t mention, and which I found much more restrictive, is the 1-bit colour depth. I now know more about typefaces than I ever thought I’d need to in my life. On a ‘normal’ screen, and assuming the text is black, most fonts – as a sized, weighted and rendered instance of a typeface – are going to be displayed with anti-aliasing: the shades of grey that smooth out the jagged edges of any non-horizontal or non-vertical lines. No access to those shades means jagged edges are the order of the day. I toyed with using dithering instead, but the results were worse. While you are obviously free to use any typeface you like, the results are going to be variations on the theme of ‘awful’. I tried a few – I found some resources suggesting Verdana, but it looked a bit meh – and ended up using a TrueType font that comes with the sample code for the screen.
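To illustrate the constraint, a minimal sketch of rendering text straight into a 1-bit image with Pillow (the font path and panel dimensions are placeholders, not my actual values):

```python
def render_line(text, font_path="Font.ttc", size=48):
    """Draw a line of text onto a 1-bit image, as the panel requires.

    The import is deferred so the sketch can be read without Pillow
    installed.
    """
    from PIL import Image, ImageDraw, ImageFont

    # Mode "1" is one bit per pixel; fill value 1 gives a white canvas.
    img = Image.new("1", (800, 480), 1)
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, size)
    # With no grey levels available, every edge is a hard on/off
    # transition - i.e. no anti-aliasing.
    draw.text((10, 10), text, font=font, fill=0)
    return img
```

Because the target image mode is “1” from the outset, PIL never has intermediate shades to work with, which is exactly why the choice of typeface matters so much here.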

The TL;DR Version

I have two Python scripts, running on an old Raspberry Pi 3 (which I use for other Philips Hue control related stuff such as this). The first is a lightly modified Google example script which pulls down the next 10 events from a shared calendar. It writes the data out both as a pipe-separated file for some post processing, and also as an HTML table. [Update 17/04/22: all other things being equal, the example script stops working after a week. See this for more info.]
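A sketch of the two output writes – hedged, as the file names and the shape of the rows are my assumptions, not the script’s actual interface:

```python
def write_outputs(rows, psv_path="events.psv", html_path="events.html"):
    """Write events as a pipe-separated file and as an HTML table.

    rows: list of (formatted-date, summary) tuples.
    """
    # Pipe-separated output for the post-processing script:
    with open(psv_path, "w") as f:
        for when, what in rows:
            f.write(f"{when}|{what}\n")
    # The same rows as a simple HTML table for rendering:
    cells = "".join(
        f"<tr><td>{when}</td><td>{what}</td></tr>" for when, what in rows)
    with open(html_path, "w") as f:
        f.write(f"<table>{cells}</table>")
```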

Those post processing steps are performed by a second Python script which also calls a BBC RSS feed, and then eventually renders the various content as a 1 bit bitmap for the screen to display. Both scripts are called from cron: the calendar processing twice a day, and the rendering every 15 minutes. The two main tables alternate in prominence.

Warts And All…

I’ve included some detail on elements of the implementation which I tried and didn’t work. Part of what made the project interesting was figuring out what was viable in terms of the screen’s display capability and, generally, revising downwards.

Starting Point for the Calendar Content

The initial motivation for this was a tongue in cheek attempt to convert my wife from using a wall calendar. What I thought might be an interesting approach would be to use Selenium and chromedriver to authenticate to Google Calendar, screenshot the month, crop it down to size and convert it.


It appears that Google may block this. Regardless, I dismissed this out of hand: I’d not thought about the implications of 2FA, so that was a showstopper (just… no, don’t :). What I settled on was the Google calendar API, using this example code from Google. It requires you to configure a Cloud project and then permission an API key for OAuth2.

So: you run the script on the command line, which initiates the Authorisation Code grant flow, printing a url to standard out. You paste this into your browser of choice – I copied this from a Putty window into Chrome on my PC – and authenticate with Google. It then redirects you to a url on localhost (not running, in my case). I copied this url from the browser, put it in quotes to avoid the shell interpreting the ampersands in the path, prefixed it with the wget command and then pasted it back into a separate ssh window. This then populates a json file with an access and refresh token which will be used without the browser interaction going forward.

It sounds convoluted but it’s actually straightforward in practice, and it’s a common enough approach for demonstrating grant flows without full application integration. (I’ve actually written something in the same ballpark myself, although my implementation was a lot simpler.) It’s also a lot more ‘joined up’ if you are running the browser on your Pi, as the localhost redirect will resolve to the little server the code spools up for the duration.

Update: 17/04/22: unless you ‘publish‘ the client, the example code will only be allowed to haul down tokens which expire after a week. I’ve gone with a simpler alternative, discussed here. Back to the original post…

I’ve made a couple of minor tweaks to the code. One is to change from the default ‘primary’ calendar to a shared one. You just find the calendarId string by using the ‘try this method’ built into the documentation.

The second change is writing the event data out to a couple of files for onward processing. In order to make that as simple as possible, I ‘flatten out’ the dates that the API returns. There is a gotcha here, because recurring events are returned without a start time so:

try:
    formattedDate = datetime.datetime.strptime(start, '%Y-%m-%dT%H:%M:%S%z')
except ValueError:
    # recurring events like birthdays just have a date, no time:
    formattedDate = datetime.datetime.strptime(start, '%Y-%m-%d')
    formattedDate = formattedDate.replace(hour=12)

I hadn’t expected this at all, particularly as recurring events are rendered at a time on every Google Calendar UI I’ve seen, and the change in date format only emerged when I started adding birthdays to the shared calendar, which broke the original parsing – without the ‘try’.

Content Display (and wasting more time with selenium)
While I’d abandoned using selenium and chromedriver for the Google calendar screen grab, I hadn’t completely given up on it for other content rendering. It’s actually quite convoluted to get working. Using pyvirtualdisplay as a way of convincing the operating system that you’ve got a display to run X on is straightforward enough, but the versions of chromedriver and chrome itself have to match up exactly. At the time of writing the former is lagging the latest chrome release, and getting a slightly older version to work required a workaround.

The thing that made me finally abandon it was that something I had working one day (browser.find_element_by_xpath to click on ‘Most Read’ on the BBC website) stopped working a day or two later. I never got to the bottom of why chromedriver was saying it couldn’t find the element, and finally realised that RSS would be an awful lot simpler.

Up until that point I’d also been using chromedriver as a way of rendering and screenshotting the HTML table for the calendar events. I decided to use imgkit to do this instead. It has its foibles, specifically with non-ASCII characters, so I may swap it out at some point.

As I alluded to earlier, while with the benefit of hindsight the whole approach around selenium reads like a bad idea (or, to put it another way, utter madness) from the get-go, it reflects the fact that over the period of the couple of weeks that I was working on the project, there was a lot of trial and error with figuring out what the screen was capable of displaying. Although as 1 bit colour depth clearly suggests, the answer to that is ‘not much’.

The last significant decision was how to present the two main pieces of content: the BBC headlines and the calendar events. Because of the lack of anti-aliasing, bigger text is better. Rather than just display one at a time, what I decided on is overlapping them, and then flipping them every 15 minutes. This way, you get to see the most news- or time-significant information either way round. I’ve got some clunky code to read/write a single word to a file – either ‘beeb’ or ‘cal’ – indicating which was the last ‘top’ item. Depending on whose turn it is, I set the frame coordinates and, most importantly, the order in which I call PIL’s Image paste method.
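The flip-state logic might look something like this – a sketch, with the state file name being a placeholder for whatever the real script uses:

```python
import os

def toggle_top(state_file="top_item.txt"):
    """Read which table was 'on top' last render, flip it, and persist.

    The file holds a single word: 'beeb' or 'cal'. If the file is
    missing or empty, default to 'cal' having been on top.
    """
    last = "cal"
    if os.path.exists(state_file):
        with open(state_file) as f:
            last = f.read().strip() or "cal"
    current = "beeb" if last == "cal" else "cal"
    with open(state_file, "w") as f:
        f.write(current)
    return current
```

The return value then drives both the frame coordinates and the paste order for the render.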

The last part was having something useful in the opposite corners the main tiles of content leaves. The date was a no brainer. The ‘up next’ thing took a little more work and at the moment is the only consumer of the serialised event data.

Tidying up

As well as the appearance of the table, there are a few stragglers that I may get round to tidying up. I check the length of the headlines before I print them out, truncating them if they hit a character limit and adding an ellipsis. I have no idea if there is any kerning magically happening with the TrueType font. I don’t check any of the other lengths – and PIL’s ImageDraw will display exactly what it’s given, for instance, which could get interesting with longer event titles.
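The headline truncation is along these lines – a hypothetical helper, with the character limit a placeholder:

```python
def clip_headline(text, limit=40):
    """Truncate to at most `limit` characters, ending with an ellipsis.

    Leaves short headlines untouched; trailing whitespace is stripped
    before the ellipsis so the clipped text doesn't end "word ...".
    """
    if len(text) <= limit:
        return text
    return text[:limit - 1].rstrip() + "…"
```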

Another possible enhancement is very specific to my implementation but brings up a feature of the API call: it decides whether or not to return event data based on the end time. There doesn’t seem to be a way of changing this to start time. This would have been handy for me, as I import calendar data from an external source which creates all-day events that aren’t actually useful after the start date has passed. As the start time is returned by the API (recurring events notwithstanding), it would be trivial to post-process this.
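That post-processing would amount to something like this – a sketch against the event structure the API returns, reusing the same two date formats handled earlier:

```python
import datetime

def drop_stale(events, now=None):
    """Post-filter events on *start* time, which the API call itself
    doesn't support (it filters on end time)."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    keep = []
    for event in events:
        start = event["start"].get("dateTime", event["start"].get("date"))
        try:
            when = datetime.datetime.strptime(start, "%Y-%m-%dT%H:%M:%S%z")
        except ValueError:
            # all-day / recurring events carry a bare date:
            when = datetime.datetime.strptime(start, "%Y-%m-%d").replace(
                hour=12, tzinfo=datetime.timezone.utc)
        if when >= now:
            keep.append(event)
    return keep
```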

The Implementation

There’s nothing terribly exciting about the implementation, but the two Python scripts are here. As ever, the repo is principally intended for my own backup purposes. The various dependencies (sourced from git, pip3 and apt-get), while not documented explicitly, are all fairly self-evident.

Getting started with Jenkins Pipelines for Docker and Kubernetes

This is a summary of a project I’ve been working on in my spare time over the last few weeks to get more familiar with Kubernetes in general, and Jenkins based docker build processes in particular. It’s not intended as a standalone guide, which would be prohibitively long! It builds on this blog post which I found very helpful in explaining the plumbing required to stand up a Docker based build agent. What I’ve documented here has ended up being significantly different from that original post, however, as what’s described also covers an agent which builds and deploys containers into Kubernetes.

I’m using a contrived example application in order to drive a build process triggered by polling a GitHub repo. The polling is to avoid having to make a NAT based inbound connection to my home network from GitHub, which would be necessary for a webhook. Other than the potential for build specifics, what the test app does isn’t very relevant in the context of this post but, for completeness, it’s node based and makes a call to a WordPress containerised application which has been altered to expose the WordPress API. It pulls a random post, which it then displays. (The contrivance is to introduce an API call within the cluster for service mesh injection – unrelated to this post.)

The end to end process here is a plausible example of a containerised app being deployed into a non-production environment (minus a slew of unit tests):

  • Trigger a build to pull the source from GitHub.
  • Do the application build (inside a container).
  • Push that back to a registry – DockerHub in my case.
  • Deploy it to the Kubernetes cluster.
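Stitched together, those four steps might look something like the declarative pipeline below. This is a hypothetical sketch rather than my actual Jenkinsfile: the agent label, repo URL, image name and credential ID are all placeholders.

```groovy
pipeline {
    // Label matching the Docker agent template configured in Jenkins:
    agent { label 'docker-agent' }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/example/testapp.git', branch: 'main'
            }
        }
        stage('Build image') {
            steps {
                sh 'docker build -t example/testapp:latest .'
            }
        }
        stage('Push') {
            steps {
                sh 'docker push example/testapp:latest'
            }
        }
        stage('Deploy') {
            steps {
                // Provided by the Kubernetes CLI plugin, discussed below:
                withKubeConfig([credentialsId: 'kube-config']) {
                    sh 'kubectl apply -f deployment.yaml'
                }
            }
        }
    }
}
```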

There are quite a few different approaches possible, not to mention variants within them. I looked at three, starting with the use of a Docker build agent.

So how does it hang together? This approach uses a standalone Docker server running outside of Kubernetes; everything else is inside the cluster. It doesn’t have to be: it just happens to be the starting point for me. You need to set Jenkins up to be able to talk to the docker host as the build agent. You then need to change the default configuration of the Docker API to be able to accept connections from the Docker CLI running via a Jenkins plugin. This configuration is security-critical: the API is running as root. This setup then allows the image for the build process to be pulled by Jenkins, which is a distinct step from the application image build itself. This is all described well in the blog post I referenced at the start.

While still very helpful, the Dockerfile for the build agent provided by the author of the original blog post didn’t quite work for me: the steps to add the user were returning an error. Combining this with the differences for the Docker build and the additional functionality for the deployment phase, it makes for quite a different set of functionality:

FROM ubuntu:latest

# get rid of some tz issues since bumping the ubuntu version:
ENV TZ=Europe/London
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update
RUN apt-get -qy upgrade
RUN apt-get install -qy git
RUN apt-get -qy install curl

# note: arm64 reference; substitute accordingly.
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl"
RUN mv ./kubectl /usr/local/bin
# may be unnecessary because of the mv command:
RUN chmod 755 /usr/local/bin/kubectl

# Install a basic SSH server (from original Dockerfile)
RUN apt-get install -qy openssh-server
RUN sed -i 's|session required|session optional|g' /etc/pam.d/sshd
RUN mkdir -p /var/run/sshd
RUN apt-get install -qy openjdk-8-jdk

# pull just a single binary from an image:
COPY --from=docker /usr/local/bin/docker /usr/local/bin

# Add user jenkins to the image
RUN useradd -ms /bin/bash jenkins
# Set password for the jenkins user (you may want to alter this).
RUN echo 'jenkins:jenkins' | chpasswd
RUN mkdir /home/jenkins/.m2
RUN mkdir /home/jenkins/.ssh
RUN chown -R jenkins:jenkins /home/jenkins/.m2/
RUN chown -R jenkins:jenkins /home/jenkins/.ssh/
RUN mkdir /home/jenkins/.kube
RUN chown -R jenkins:jenkins /home/jenkins/.kube

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

Per some of the inline comments, this is ‘rough but working’ :). There are a few pieces of functionality which are worth pointing out, in the order they appear:

  • git — it’s the freshly minted build agent container that does the git checkout, as specified in the pipeline definition. I’d initially assumed that Jenkins itself would do this (by virtue of the config in the pipeline); it doesn’t.
  • The install of kubectl. I failed to get the Kubernetes CD plugin to work. I couldn’t get it to play ball with any of the variants of the secret it needs, so what I ended up doing was using a reference to kubectl in the Jenkinsfile, which therefore requires it to be installed in the build container.
  • The rest of the components – the sshd, jenkins user, and Java install – are, I think, just standard parts of the build agent. I’ve not come across any documentation that breaks down the functionality yet. SSHD and JNLP are an either/or choice for the build agent’s connection configuration. My working assumption is that this is how configuration files – like, for instance, the secret for kubectl (discussed later when I cover plugins) – end up on the agent. I’ve not tried building and running the agent without Java, so it may just be a hangover from the original Dockerfile.

On the subject of the agent configuration, the build agent provided by JenkinsCI saves you having to build your own image from scratch, but there doesn’t seem to be an ARM version of it available at the time of writing. As my cluster is ARM based, that was a showstopper. It also serves as a base, so you’re probably going to have to extend it for things like docker build support anyway.

I mentioned some of the configuration steps earlier, some of which infer plugin requirements for Jenkins and, again, which are covered in the original post. In more detail:

  • Installation of the Docker plugin for Jenkins.
  • In the cloud configuration, a reference to your docker build agent – i.e., the host that exposes the Docker API.
  • In the Docker agent template, a label that is referred to in the build-pipeline definition, and the registry which hosts the docker image pulled down by Jenkins and deployed to the agent.

The combination of these three allows Jenkins to find the Docker image for the build agent, and to stand it up on the nominated host running the Docker API.

For my variant, which includes the deploy-to-Kubernetes stage, I also need the Kubernetes CLI plugin, which allows you to reference the secret in the Jenkinsfile with the withKubeConfig directive.

The final variation from the post is that I use a pipeline for the build and deployment, rather than a freestyle project. You then need to refer to the Jenkinsfile as part of the repo configuration:

There are undoubtedly better ways to separate out these files in the build process for industrial usage but this at least outlines a working approach.

So while this works reliably, it’s pretty complicated, and ties the container build to a specific host. I also spent quite a long time crafting the Dockerfile for the build agent. While you could scale the approach via the cloud configuration, it would be better to have everything running inside Kubernetes.

That leads me to the next approach I tried, again based on a different blog posting from the same site as above. This requires the use of the Jenkins Inbound Agent which again, at the time of writing, isn’t available for ARM, so a non-starter for my cluster.

So, what I looked at next is Kaniko. While it’s a doddle to get started with, I hit a showstopper: the requirement for two ‘destinations’.

This repo is a copy of the same application that I’ve used with the Docker build above, but which includes an updated Jenkinsfile and the reference to the Kaniko pod spec. The pod uses a single destination parameter, which is to publish to Docker Hub. This is transparent to the configuration, and is actually driven by the use of a Kubernetes secret of type ‘docker-registry‘.
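For reference, the shape of that pod spec is roughly as follows – a hedged sketch based on the documented Kaniko pattern, with the pod name, repo and secret name all placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - "--context=git://github.com/example/testapp.git"
    - "--dockerfile=Dockerfile"
    # The single destination: push the built image to Docker Hub.
    - "--destination=example/testapp:latest"
    volumeMounts:
    # Kaniko expects registry credentials at /kaniko/.docker/config.json
    - name: kaniko-secret
      mountPath: /kaniko/.docker
  restartPolicy: Never
  volumes:
  - name: kaniko-secret
    secret:
      secretName: regcred    # a secret of type kubernetes.io/dockerconfigjson
      items:
      - key: .dockerconfigjson
        path: config.json
```

The secret’s type is what makes the Docker Hub push ‘transparent’: Kaniko just picks up the mounted config.json.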

The Jenkinsfile has a problem, which isn’t immediately apparent:

The first kubectl call applies the pod spec, which includes the reference to pull the code from the GitHub repo.

The second applies the deployment spec for the newly minted container, pulling it from Docker Hub. There’s a ‘but’, and it’s a showstopper: it assumes the first stage has completed. In the same way that kubectl apply returns immediately on the command line, the ‘build and push’ stage finishes immediately as well, long before the Kaniko build is done.

I haven’t figured out a way round this yet. The first approach I’ve looked at is to do the equivalent of the second kubectl apply from within Kaniko itself. While it supports multiple destinations, the out-of-the-box functionality appears to bind the required secrets to the supported destination types, which are in turn linked by either the secret type (for Docker Hub) or pre-canned environment variables – for GCP, AWS etc. I would need two volume mounts, one for Docker Hub, and one for the Kubernetes token for my cluster.

The second approach would be to have some sort of polling mechanism in the ‘Deploy’ stage to identify when the push to Docker Hub has completed. While it’s possible to poll an SCM service from within a declarative pipeline, I don’t see any way of adapting this for Docker Hub, particularly as the API appears to be restricted to Pro users.

A variation on this theme would be to use a separate pipeline completely for the deployment, triggered by a webhook called by Docker Hub. This looks easier but it’s not something I am going to do for the same reason I mentioned about not wanting to use GitHub webhooks: it means punching holes in my domestic firewall to some crash and burn kit.

As this is proving more difficult than it should be, I suspect I may not quite be using Kaniko as intended. I’ve also skirted over one issue, which is surfacing an error in the container build process when it’s wrapped up in the application of the pod spec. This is talking about something in the same sort of hinterland, but once the shell command executes, it has no visibility of an error which, if it occurs, is actually generated inside the Kaniko pod.

I will update this post if and when I figure it out.