Hue, Alexa and “A few things share that name”

Edit 09/10: what I described below didn’t work – Alexa cyclically re-adds all of the copies of the scenes, presumably every day. What I ended up having to do was:
* Fire up the app, go into the room settings and delete all of the scenes except ‘bright’. This may be a non-starter for some people straight away.
* Change room names: this is fairly tedious, but it was the only way I could think of to get it to work. I had “X’s bedroom” and “X’s bedroom light”, and that seems to be enough to trip up the commands. As you need to retain the room definition, I simply made it “less intersecting” with the light name by renaming it to “X’s room”.
* Final gotcha: do a ‘discover’ in the Alexa app and all of the scene definitions come back marked offline. I had to delete them again, then do a rescan [enough to reinsert them before deleting the light-scene association in the app], and they were gone.

This is so torturous I’m fairly sure I’ve missed something obvious. If I figure it out, I’ll update this posting. If it breaks overnight because Alexa runs some weird batch job *again* I’m going to delete the article and pretend I never started with this :).

==== Original post ====

This has been really bugging me over the last few months. Voice commands to control our Hue lights stopped working with the ‘a few things share that name’ response.

I am pretty sure it was something that Amazon introduced: what breaks the voice commands is that Alexa treats both rooms and scenes as separate targets for control. I deleted the room references in the devices list, as well as the scenes – 12 per room, which is quite tedious – and the ‘off’ and ‘on’ controls have started working again.

There are a couple of gotchas: the first is that the Alexa web interface – which I used for all of the changes – offers a ‘forget all’ option at the bottom of all of the Hue-related pages. This deletes everything, not just the scene or room that you happen to be editing. The second is that a rescan adds all of the scenes back in, which is quite annoying.

So what Alexa seems to be doing is getting a list of resources from the bridge, and then taking local copies of them to act as command targets. Some of the scene names are pretty convoluted, and because the commands you want to use – ‘turn off x light’ – overlap with some of the constructed scene names, the language processing won’t let you use them.

It’s not the smartest move to do the Alexa integration this way: just blindly adding the default scenes is almost guaranteed to break the functionality you want to use.

Anyway, deleting the stuff you don’t want from the Alexa UI has no impact on the resources on the bridge.

Fitting a Mastery Bridge to an American Pro Jaguar

I bought a Mastery M1 to replace the stock Fender bridge on my 2017 Jaguar during the week. I asked at a store to double check that it was compatible – I wasn’t sure if I needed the kit, including the thimbles – and was told it was.

I then found that the posts on the Mastery were too wide, by about a millimetre or so.

I searched everywhere online to try to figure out what I needed to do to fit the thing. I was hoping that it would be something that I could do myself, but I really didn’t want to tackle anything that was irreversible.

I couldn’t find anything specific to the American Pro, which is why I’m writing this up, on the off chance that Google lands someone else here who is thinking of doing the same thing.

Long and short of it, it’s a doddle. The plastic mountings [thimbles?] that the stock bridge fits into are less than a centimetre deep. I very carefully levered them out of the metal fittings they are inserted into, using one of the Allen keys that came with the Mastery. As I couldn’t tell their dimensions before I started, I initially thought I’d have to take them out and replace them with the Mastery thimbles.

You don’t need the Mastery thimbles. The M1 fits perfectly on its own.

Here’s a quick snap showing the little mounting that I removed:

American Pro Jag with Mastery M1 Bridge

First Apple Watch App Using WKInterfaceMap

I was lucky enough to get my Apple Watch last week through the ‘expedited delivery’ option that was offered to developers on a random basis, and I’m really pleased with it.

I’ve written a very simple app to understand the communication between the watch and the phone, and it’s unearthed a couple of interesting points. Before that, the setup: you need to add the watch’s unique ID to the rest of your devices on the Apple dev portal and create a new provisioning profile. You’ll also find that you need to mess around with Xcode to change the logging: it can’t show logs from the phone and the watch at the same time, as they are separate processes, so you have to switch between them.

So the simple app: I thought it would be fun to display the current location of the International Space Station on a map. Note that there is no ‘glance’ for this app. Having first added the WatchKit app template to a single view project, I added the following elements to the Watch storyboard:

[Screenshot: the Watch storyboard with a button, a WKInterfaceMap and two labels]

So simply a button, the WKInterfaceMap and two separate labels. I’ve then created outlets and actions, as you’d expect, in InterfaceController.h:

[Screenshot: the outlets and actions declared in InterfaceController.h]
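
In outline, the header ends up looking something like this – the outlet and action names here are illustrative rather than necessarily the exact ones in the screenshot:

// InterfaceController.h
#import <WatchKit/WatchKit.h>
#import <Foundation/Foundation.h>

@interface InterfaceController : WKInterfaceController

@property (weak, nonatomic) IBOutlet WKInterfaceMap *issMap;
@property (weak, nonatomic) IBOutlet WKInterfaceLabel *latitudeLabel;
@property (weak, nonatomic) IBOutlet WKInterfaceLabel *longitudeLabel;

- (IBAction)refreshPressed;

@end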

In the awakeWithContext method, I call a method to do the communication with the phone and render the results:

[Screenshot: awakeWithContext calling the update method]

I also call this method from the action for the ‘refresh’ button, after deleting the current location pin.

So the main communication is via openParentApplication, where you both send and receive data in an NSDictionary. It’s all nice and clean. A quick explanation of the way I’ve marshalled the data: I’ve sent the latitude and longitude values over in the dictionary as strings. Not one for the purists, but it’s just as much work with NSNumbers, and the two values are ultimately going to be set as string values for the labels anyway.
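
Stripped down, the watch-side method – the one called from awakeWithContext and from the refresh action – looks something like this. The dictionary keys and the region size are illustrative, not necessarily what’s in the screenshots:

// InterfaceController.m – needs MapKit for the coordinate/region helpers
#import <MapKit/MapKit.h>

- (void)updateISSLocation {
    [WKInterfaceController openParentApplication:@{@"request": @"issPosition"}
                                           reply:^(NSDictionary *replyInfo, NSError *error) {
        if (error != nil || replyInfo.count == 0) {
            return;
        }

        // The values arrive as strings, so they can go straight onto the labels...
        NSString *lat = replyInfo[@"latitude"];
        NSString *lon = replyInfo[@"longitude"];
        [self.latitudeLabel setText:lat];
        [self.longitudeLabel setText:lon];

        // ...and only get converted where an actual coordinate is needed.
        CLLocationCoordinate2D coord = CLLocationCoordinate2DMake([lat doubleValue], [lon doubleValue]);
        [self.issMap addAnnotation:coord withPinColor:WKInterfaceMapPinColorRed];
        [self.issMap setRegion:MKCoordinateRegionMakeWithDistance(coord, 500000, 500000)];
    }];
}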

One interesting point to call out here is the part of the code I’ve commented out. Rather than a pin, I thought it would be nice to display a little image of the ISS as an alternative. I spent quite a lot of time on this, and the conclusion I’ve come to, so far at least, is that the Watch doesn’t support images with transparency [an alpha channel]. I posted to the Apple Developer Forum [the thread is here; authentication required], and that seems to be the consensus. I also tested this with a WKInterfaceImage and had the same result. While I’ve seen quite a few references to transparency, especially in articles about animation, I’ve failed to get it to work – and the same goes for the other people on the developer forum thread. Either there are other options in the SDK, or there may be something baked into the image metadata that WatchKit doesn’t like. I’ve tested with transparent images I’ve used in traditional iOS apps, which I created myself using Gimp.

Anyway, on the phone side of the communication, you use handleWatchKitExtensionRequest:

[Screenshot: handleWatchKitExtensionRequest in the app delegate]

So the first thing to notice here is that it’s completely synchronous: that reply(responseDict) is getting populated in this method call. It took me a while to figure out the implications of this. Initially I was going to use an async variant of NSURLConnection, until I realised that the connectionDidFinishLoading delegate method wasn’t going to be much help here: there is no way of joining the dots between identifying the end of the response from the web server in that delegate method and then populating and calling the reply back up in handleWatchKitExtensionRequest.
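
In outline, the phone side ends up as something like this. I’m using Open Notify’s iss-now.json here as a stand-in for whichever ISS position service you prefer – the endpoint and the JSON keys are assumptions, not necessarily what’s in the screenshot:

// AppDelegate.m
- (void)application:(UIApplication *)application
    handleWatchKitExtensionRequest:(NSDictionary *)userInfo
                             reply:(void (^)(NSDictionary *replyInfo))reply
{
    // Fetch synchronously: the reply block has to be populated before this method returns.
    NSURL *url = [NSURL URLWithString:@"http://api.open-notify.org/iss-now.json"];
    NSData *data = [NSData dataWithContentsOfURL:url];
    if (data == nil) {
        reply(@{});
        return;
    }

    NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
    NSDictionary *position = json[@"iss_position"];

    // Pass the coordinates back to the watch as strings.
    reply(@{@"latitude"  : [NSString stringWithFormat:@"%@", position[@"latitude"]],
            @"longitude" : [NSString stringWithFormat:@"%@", position[@"longitude"]]});
}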

There are so many methods that return results asynchronously, not just at the network level, that dealing with them in iOS is a constant refrain. The way I normally do this is to farm the entire functionality out to a class, and then set a key-value observer on a property of the class instance. I’ve used this recently for getting results back from the keychain – which I’ll come back to in a second.
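
As a rough sketch of that pattern – the class and property names here are invented for the example rather than lifted from the keychain code:

#import <Foundation/Foundation.h>

// The work is farmed out to a class that exposes an observable property.
@interface AsyncWorker : NSObject
@property (strong, nonatomic) NSString *result;
- (void)start;
@end

@implementation AsyncWorker
- (void)start {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSString *value = @"whatever came back from the keychain / network";
        dispatch_async(dispatch_get_main_queue(), ^{
            self.result = value; // the setter fires the KVO notification
        });
    });
}
@end

// The caller observes the property and reacts when the value lands.
@interface Consumer : NSObject
@property (strong, nonatomic) AsyncWorker *worker;
- (void)begin;
@end

@implementation Consumer
- (void)begin {
    self.worker = [[AsyncWorker alloc] init];
    [self.worker addObserver:self forKeyPath:@"result"
                     options:NSKeyValueObservingOptionNew context:NULL];
    [self.worker start];
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context {
    if ([keyPath isEqualToString:@"result"]) {
        NSLog(@"async result arrived: %@", change[NSKeyValueChangeNewKey]);
        [object removeObserver:self forKeyPath:@"result"];
    }
}
@end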

I’m not sure what the implications of this are: there may be a way round it that is beyond my knowledge of either WatchKit or Objective-C. One option would be to prepare the data in the background and save it locally – this is all on the phone – ready for the response, but there may be reasons why you don’t want to do this. Data that depends on the keychain is an obvious example.

New Version of Pin Your Pics…

I released a new version of Pin Your Pics back at the start of February. I’ve added a new privacy feature: the export of images without location data. I use a very nice contextual menu library that I found on CocoaControls, called VLDContextSheet. So if you now do a ‘long press’ gesture on the icon of one of your pictures, a menu pops up allowing you to export the file to an email, with three size options [small, medium and large].

I’m just finishing off another little feature, which was pretty straightforward to add: the use of Google Directions from the current location to the location in the photo.
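
For illustration, one way of doing that kind of hand-off is via the Google Maps URL scheme – this is a sketch rather than necessarily the code in the app, and the coordinates are made up:

// Inside a view controller; UIKit and CoreLocation imports assumed.
CLLocationCoordinate2D photoCoord = CLLocationCoordinate2DMake(51.5074, -0.1278);
NSString *urlString = [NSString stringWithFormat:
    @"comgooglemaps://?daddr=%f,%f&directionsmode=driving",
    photoCoord.latitude, photoCoord.longitude]; // omitting saddr means "from the current location"
NSURL *url = [NSURL URLWithString:urlString];
if ([[UIApplication sharedApplication] canOpenURL:url]) {
    [[UIApplication sharedApplication] openURL:url];
}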

What was less easy was a one-off popup view controller to show a ‘What’s New’ message. If people have automatic app updates, it’s quite likely that they would miss the new features and, just as importantly, the instructions on how to turn them off if they don’t want to use them.

Arduino Based PIR Motion Sensor with iOS Push Notification

We’ve been having some problems with our cat staying out late over the last few weeks, and expecting to be able to summon us at the back door. As I’m not keen on standing guard waiting for him, I’ve decided to revert to type and build a solution around one of my Arduino boards.

I’ve used various bits of example code, and it’s all working. First the hardware, which is a real thing of beauty(!)

PIR Sensor

So we have a PIR sensor in a yoghurt pot. I’ve enclosed the sensor in a tube to try to make it more directional. The sketch I’ve used is from the Arduino Playground. The only change I’ve made is to comment out every Serial.print[ln] command, except for the one that identifies the end of the motion detection period.
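
Stripped right down, the sketch amounts to something like this – the pin number, baud rate and timeout are illustrative rather than the Playground example verbatim:

// PIR motion sensor: print a single line when a period of motion ends.
int pirPin = 2;                 // PIR sensor output pin
unsigned long pause = 5000;     // how long the output must stay LOW before the motion is "over"
unsigned long lowSince = 0;
bool motionActive = false;

void setup() {
  pinMode(pirPin, INPUT);
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(pirPin) == HIGH) {
    motionActive = true;
    lowSince = 0;
  } else if (motionActive) {
    if (lowSince == 0) {
      lowSince = millis();
    } else if (millis() - lowSince > pause) {
      // The one Serial print left in: the Pi-side script keys off this line.
      Serial.println("motion ended");
      motionActive = false;
    }
  }
  delay(50);
}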

Next, on the Raspberry Pi, I installed this variant of the standard serial port library for Perl.  I’ve used the example script to make a call to curl, using the system(“curl….”); command.

Next comes the slightly trickier part. I did a bit of a trawl around yesterday to see if there were any nice push notification services. This is a pretty complicated sport, but the long and short of it is that I went for a free option from an organisation called PushApps, who have detailed instructions on how to set yourself up with the various certificate options on the Apple Developer Portal, and then how to integrate the actual code into your app. I just went for a very rough and ready single view app for now. It’s not doing anything other than displaying the remote notification.

The actual process of integrating the notification functionality into an app is very straightforward.
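
For reference, the iOS-side registration boilerplate (iOS 8-era APIs) looks roughly like this – PushApps’ own SDK calls, which do the heavy lifting, are left out here because their instructions cover them:

// AppDelegate.m
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    UIUserNotificationSettings *settings =
        [UIUserNotificationSettings settingsForTypes:(UIUserNotificationTypeAlert |
                                                      UIUserNotificationTypeBadge |
                                                      UIUserNotificationTypeSound)
                                          categories:nil];
    [application registerUserNotificationSettings:settings];
    [application registerForRemoteNotifications];
    return YES;
}

// The rough and ready app does nothing more than show the payload.
- (void)application:(UIApplication *)application
    didReceiveRemoteNotification:(NSDictionary *)userInfo
{
    NSLog(@"Remote notification: %@", userInfo);
}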

That said, there are two moderately tricky parts to this. The first is making sure that you configure Xcode to pick up the right ‘provisioning profile’ – the one you create by working through the instructions.

The second is – well, actually pretty straightforward with the benefit of hindsight – translating the PHP example provided in the documentation into something I could use via curl. It’s a call to PushApps’ JSON interface, which brings us back to the Perl script. When I read a string from the serial port equating to a motion detection event, I call:

curl -H "Content-Type: application/json" -d '{"SecretToken":"your-secret-token-here", "Message":"short message to be displayed"}' https://ws.pushapps.mobi/RemoteAPI/CreateNotification

which, lo and behold, will send a push notification. The example code takes care of registering the phone [well, the app installed on the phone] to pick up the notification.

I’ve left the curl command unescaped for readability purposes. When you put it inside the system() command, you have to escape all of the double quotes with a backslash.
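
Putting that together, the relevant part of the Perl script ends up looking something like this – the serial module, device path and trigger string are illustrative, so adjust for whatever you actually installed:

#!/usr/bin/perl
use strict;
use warnings;
use Device::SerialPort;

my $port = Device::SerialPort->new("/dev/ttyACM0") or die "can't open serial port: $!";
$port->baudrate(9600);
$port->databits(8);
$port->parity("none");
$port->stopbits(1);

while (1) {
    my ($count, $line) = $port->read(255);
    if ($count && $line =~ /motion ended/) {
        # Every double quote inside the JSON has to be escaped with a backslash.
        system("curl -H \"Content-Type: application/json\" "
             . "-d '{\"SecretToken\":\"your-secret-token-here\", \"Message\":\"short message to be displayed\"}' "
             . "https://ws.pushapps.mobi/RemoteAPI/CreateNotification");
    }
    sleep(1);
}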

If this doesn’t work, there’s nothing for it: the cat will have to go 🙂

I initially tried making the call using the web client on the Arduino WiFi shield. You can’t: there’s no support for TLS.

Jedi Cat

These aren't the whiskers you are looking for

I have absolutely no justification for this picture of Ping – and the internet can survive without yet another cat picture – other than it makes me giggle.

Our New Light Fitting

Ping, our Siamese, is obsessed with climbing on top of our kitchen cupboards, which are no doubt covered in dust. It’s not clear from the perspective, but the top of his ear is millimetres off the ceiling in this picture. I bounced the flash to try to minimise the shadow. It took quite a few attempts to get the manual exposure right. This is f/13 at 1/80th of a second, with the flash turned down by 1/3 of a stop.

Ping