Hue, Alexa and “A few things share that name”

Edit 09/10: what I described below didn’t work: Alexa adds all of the copies of the scenes back in cyclically, presumably every day. What I ended up having to do was:
* Fire up the app, go into the room settings and delete all of the scenes except ‘bright’. This may be a non-starter for some people straight away.
* Change room names: this is fairly tedious, but it was the only way I could think of to get it to work. I had “X’s bedroom” and “X’s bedroom light”, and that overlap seems to be enough to trip up the commands. As you need to retain the room definition, I simply made it “less intersecting” with the light name by renaming it to “X’s room”.
* Final gotcha: do a ‘discover’ in the Alexa app and all of the scene definitions come back marked as offline. I had to delete them again, then do a rescan [enough to reinsert them before deleting the light-scene association in the app], and then they were gone.

This is so torturous I’m fairly sure I’ve missed something obvious. If I figure it out, I’ll update this posting. If it breaks overnight because Alexa runs some weird batch job *again*, I’m going to delete the article and pretend I never started with this :).

==== Original post ====

This has been really bugging me over the last few months. Voice commands to control our Hue lights stopped working with the ‘a few things share that name’ response.

I am pretty sure it was something that Amazon introduced: what breaks the voice commands is that Alexa treats both rooms and scenes as separate targets for control. I deleted the room references in the devices list along with the scenes – 12 per room, which is quite tedious – and the ‘off’ and ‘on’ controls have started working again.

There are a couple of gotchas: the first is that the Alexa web interface – which I used for all of the changes – offers a ‘forget all’ option at the bottom of all of the Hue-related pages. This deletes everything, not just the scene or room that you happen to be editing. The second is that a rescan adds all of the scenes back in, which is quite annoying.

So what Alexa seems to be doing is getting a list of resources from the bridge, and then taking copies of them locally to act as command targets. Some of the scene names are pretty convoluted, and because the command you want to use – ‘turn off x light’ – shares a substring with some of those constructed scene names, the language processing won’t let you use it.

It’s not the smartest move to do the Alexa integration this way: just blindly adding the default scenes is almost guaranteed to break the functionality you want to use.

Anyway, deleting the stuff you don’t want from the Alexa UI has no impact on the resources on the bridge.
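If you want to see exactly what Alexa is copying – and to confirm that deletions in the Alexa UI leave the bridge alone – you can ask the bridge for its scene list directly over its local REST API. Here’s a minimal sketch; BRIDGE_IP and API_USERNAME are placeholders for your bridge’s address and a whitelisted API username:

```objc
// List the scenes the Hue bridge exposes, i.e. the resources Alexa copies.
// BRIDGE_IP and API_USERNAME are placeholders.
#import <Foundation/Foundation.h>

int main(int argc, char *argv[]) {
    @autoreleasepool {
        NSURL *url = [NSURL URLWithString:@"http://BRIDGE_IP/api/API_USERNAME/scenes"];

        dispatch_semaphore_t done = dispatch_semaphore_create(0);
        [[[NSURLSession sharedSession] dataTaskWithURL:url
                completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            if (data) {
                // The response is a dictionary keyed by scene ID.
                NSDictionary *scenes = [NSJSONSerialization JSONObjectWithData:data
                                                                       options:0
                                                                         error:nil];
                for (NSString *sceneID in scenes) {
                    NSLog(@"%@ -> %@", sceneID, scenes[sceneID][@"name"]);
                }
            }
            dispatch_semaphore_signal(done);
        }] resume];
        dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    }
    return 0;
}
```

Run that before and after editing things in the Alexa UI and you should get the same scene list back from the bridge.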

Fitting a Mastery Bridge to an American Pro Jaguar

During the week I bought a Mastery M1 to replace the stock Fender bridge on my 2017 Jaguar. I asked at a store to double-check that it was compatible – I wasn’t sure whether I needed the kit, including the thimbles – and was told it was.

I then found that the posts on the Mastery were too wide, by about a millimetre or so.

I searched everywhere online to try to figure out what I needed to do to fit the thing. I was hoping that it would be something that I could do myself, but I really didn’t want to tackle anything that was irreversible.

I couldn’t find anything specific to the American Pro, which is why I’m writing this up, on the off chance that Google lands someone else here who is thinking of doing the same thing.

Long and short of it: it’s a doddle. The plastic mountings [thimbles?] that the stock bridge fits into are less than a centimetre deep. I very carefully levered them out of the metal fittings they are inserted into, using one of the Allen keys that came with the Mastery. As I couldn’t tell their dimensions before I started, I initially thought I’d have to take them out and replace them with the Mastery thimbles.

You don’t need the Mastery thimbles. The M1 fits perfectly on its own.

Here’s a quick snap showing the little mounting that I removed:

American Pro Jag with Mastery M1 Bridge


First Apple Watch App Using WKInterfaceMap

I was lucky enough to get my Apple Watch last week through the ‘expedited delivery’ option that was offered to developers on a random basis, and I’m really pleased with it.

I’ve written a very simple app to understand the communication between the watch and the phone, and it has unearthed a couple of interesting points. Before that, setup: you need to add the watch’s unique ID to the rest of your devices on the Apple dev portal and create a new provisioning profile. You’ll also find that you want to mess around with Xcode to change the logging: it can’t show logs from the phone and the watch at the same time, as they are separate processes, so you will need to switch between them.

So the simple app: I thought it would be fun to display the current location of the International Space Station on a map. Note that there is no ‘glance’ for this app. Having first added the WatchKit app template to a single-view project, I added the following elements to the Watch storyboard:

Screen Shot 2015-05-04 at 10.24.10

So simply a button, the WKInterfaceMap and two separate labels. I’ve then created outlets and actions as you’d expect in InterfaceController.h:

Screen Shot 2015-05-04 at 10.27.20
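In case the screenshot is hard to read, the header amounts to something like this – the outlet and action names here are my own rather than exactly what’s in the image:

```objc
// InterfaceController.h – roughly what the screenshot shows; the outlet and
// action names are illustrative.
#import <WatchKit/WatchKit.h>
#import <Foundation/Foundation.h>

@interface InterfaceController : WKInterfaceController

@property (weak, nonatomic) IBOutlet WKInterfaceMap *issMap;
@property (weak, nonatomic) IBOutlet WKInterfaceLabel *latitudeLabel;
@property (weak, nonatomic) IBOutlet WKInterfaceLabel *longitudeLabel;

- (IBAction)refreshPressed;

@end
```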

In the awakeWithContext method, I call a method to do the communication with the phone and render the results:

Screen Shot 2015-05-04 at 10.30.39

I also call this method from the action for the ‘refresh’ button, after deleting the current location pin.

So the main communication happens in openParentApplication, where you both send and receive data in an NSDictionary. It’s all nice and clean. A quick explanation of the way I’ve marshalled the data: I’ve sent the latitude and longitude values over in the dictionary as strings. Not one for the purists, but you have just as much work to do with NSNumbers, and the two values are ultimately going to be set as string values for the labels anyway.
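Pulled together, the extension side looks roughly like the following. Treat it as a sketch rather than a verbatim copy of the screenshots: the method name (updateISSLocation), the dictionary keys and the region size are my own reconstruction:

```objc
// InterfaceController.m (WatchKit extension) – a sketch of the flow described above.
#import "InterfaceController.h"
#import <MapKit/MapKit.h>

@implementation InterfaceController

- (void)awakeWithContext:(id)context {
    [super awakeWithContext:context];
    [self updateISSLocation];
}

- (IBAction)refreshPressed {
    [self.issMap removeAllAnnotations];   // drop the current pin before re-fetching
    [self updateISSLocation];
}

- (void)updateISSLocation {
    // Ask the parent iPhone app for the current ISS position. Both the request
    // and the reply are plain NSDictionaries; the coordinates come back as strings.
    [WKInterfaceController openParentApplication:@{@"request": @"issLocation"}
                                           reply:^(NSDictionary *replyInfo, NSError *error) {
        if (error != nil || replyInfo[@"latitude"] == nil) {
            return;
        }
        NSString *latString = replyInfo[@"latitude"];
        NSString *lonString = replyInfo[@"longitude"];

        // The labels just want strings, so no conversion is needed for them.
        [self.latitudeLabel setText:latString];
        [self.longitudeLabel setText:lonString];

        // The map needs real numbers for the coordinate and region.
        CLLocationCoordinate2D coord =
            CLLocationCoordinate2DMake([latString doubleValue], [lonString doubleValue]);
        [self.issMap addAnnotation:coord withPinColor:WKInterfaceMapPinColorRed];
        [self.issMap setRegion:MKCoordinateRegionMakeWithDistance(coord, 1000000, 1000000)];
    }];
}

@end
```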

One interesting point to call out here is the part of the code I’ve commented out. Rather than a pin, I thought it would be nice to display a little image of the ISS as an alternative. I spent quite a lot of time on this, and the conclusion I’ve come to, so far at least, is that the Watch doesn’t support images with transparency [an alpha channel]. I posted to the Apple Developer Forum [the thread is here; authentication required], and that seems to be the consensus. I also tested this with a WKInterfaceImage and had the same result. While I’ve seen quite a few references to transparency, especially in articles about animation, I’ve failed to get it to work – and the same goes for the other people on the developer forum thread. Either there are other options in the SDK, or there may be something baked into the image metadata that WatchKit doesn’t like. I’ve tested with transparent images I’ve used in traditional iOS apps, which I created myself using Gimp.
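Roughly what the commented-out attempt looked like – ‘iss-icon’ is a placeholder name for a transparent PNG in the WatchKit app bundle:

```objc
// The image-based annotation I was aiming for instead of the pin.
// 'iss-icon' is a placeholder asset name.
// [self.issMap addAnnotation:coord
//              withImageNamed:@"iss-icon"
//                centerOffset:CGPointZero];
```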

Anyway, on the phone side of the communication, you use handleWatchKitExtensionRequest:

Screen Shot 2015-05-04 at 10.54.34

So the first thing to notice here is that it’s completely synchronous: that reply(responseDict) is getting populated in this method call. It took me a while to figure out the implications of this. Initially I was going to use an async variant of NSURLConnection, until I realised that the connectionDidFinishLoading delegate method wasn’t going to be much help here: there is no way of joining the dots between identifying the end of the response from the web server in that delegate method and then populating and calling the reply back up in handleWatchKitExtensionRequest.
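To make that shape concrete, here’s a rough sketch of the app-delegate side. The open-notify URL and its JSON keys are just an example of an ISS position feed, not necessarily what you’d use, and the semaphore is one way of keeping the fetch inside the single method call:

```objc
// AppDelegate.m (iPhone side) – a sketch of the synchronous reply pattern.
- (void)application:(UIApplication *)application
    handleWatchKitExtensionRequest:(NSDictionary *)userInfo
                             reply:(void (^)(NSDictionary *replyInfo))reply {

    NSURL *url = [NSURL URLWithString:@"http://api.open-notify.org/iss-now.json"];   // example feed
    __block NSDictionary *responseDict = @{};

    // reply() is populated and called inside this same method, so the network
    // fetch is held synchronous with a semaphore rather than delegate callbacks.
    dispatch_semaphore_t done = dispatch_semaphore_create(0);
    [[[NSURLSession sharedSession] dataTaskWithURL:url
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (data) {
            NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
            NSDictionary *position = json[@"iss_position"];
            if (position[@"latitude"] != nil && position[@"longitude"] != nil) {
                // Marshal the coordinates as strings, as described above.
                responseDict = @{@"latitude":  [NSString stringWithFormat:@"%@", position[@"latitude"]],
                                 @"longitude": [NSString stringWithFormat:@"%@", position[@"longitude"]]};
            }
        }
        dispatch_semaphore_signal(done);
    }] resume];
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);

    reply(responseDict);
}
```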

There are so many methods that return results asynchronously, not just at the network level, that dealing with them in iOS is a constant refrain. The way I normally do this is to farm the entire functionality out to a class, and then set a key-value observer on the setting of a property in the class instance. I’ve used this recently for getting results back from the keychain – which I’ll come back to in a second.
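For what it’s worth, this is the general shape of that pattern – the class, property and URL here are invented for the example, and it’s what I reach for elsewhere rather than something that helps inside handleWatchKitExtensionRequest:

```objc
// Sketch of the "farm it out to a class and observe a property" pattern.
#import <Foundation/Foundation.h>

@interface ISSFetcher : NSObject
@property (strong, nonatomic) NSDictionary *result;   // KVO fires via the synthesised setter
- (void)start;
@end

@implementation ISSFetcher
- (void)start {
    NSURL *url = [NSURL URLWithString:@"http://example.com/feed.json"];   // placeholder
    [[[NSURLSession sharedSession] dataTaskWithURL:url
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (data) {
            // Going through the property setter is what triggers the KVO notification.
            self.result = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
        }
    }] resume];
}
@end

@interface Caller : NSObject
@property (strong, nonatomic) ISSFetcher *fetcher;
@end

@implementation Caller
- (void)kickOff {
    self.fetcher = [ISSFetcher new];
    [self.fetcher addObserver:self
                   forKeyPath:@"result"
                      options:NSKeyValueObservingOptionNew
                      context:NULL];
    [self.fetcher start];
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context {
    if ([keyPath isEqualToString:@"result"]) {
        NSLog(@"Async result arrived: %@", change[NSKeyValueChangeNewKey]);
        [object removeObserver:self forKeyPath:@"result"];
    }
}
@end
```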

I’m not sure what the implications of this are: there may be a way round it that is beyond my knowledge of either WatchKit or Objective-C. While one option would be to prepare data and save it locally – this is all on the phone – for the response through background processing, there may be reasons why you don’t want to do this. Data that depends on the keychain is an obvious example.