
Wilful Sven

So a minor act of stupidity on my part today got me thinking about the cleverness of user interfaces, and when it can sometimes be a bad thing. Specifically: I managed to leave the (side)lights on my car switched on overnight, and sure enough in the morning the battery was pancake-flat. One rapid bike ride/train journey later and I was only an hour late for work, but embarrassingly this isn’t the first time I’ve forgotten them – though I have been able to get the car started (and indeed open the doors) on previous occasions…

So what’s my excuse? Well, the reason I turn my sidelights on in the first place is sort of complex.

It all comes down to some extra automation that Volvo have decided is a good idea in the control of the lights. When the internal clock indicates it’s the evening, the headlights will automatically come on when the engine is started, and there’s no direct way to turn them off. I’m not keen on wearing out the bulbs and seeing the reflection of my own headlights in the backs of other cars when it’s blinding sunlight – so I cleverly trick the system by switching the light control to sidelights only, dousing the main beams.
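
To make the behaviour concrete, here’s a rough sketch of the logic as I experience it from the driver’s seat. This is pseudocode of my own invention, not Volvo’s actual firmware – the 6pm cutoff and the switch-position names are guesses:

```python
def lights_on_engine_start(clock_hour: int, switch: str) -> str:
    """Sketch of the car's apparent light logic; switch is "off",
    "sidelights" or "headlights". Entirely hypothetical, not Volvo's code."""
    if clock_hour >= 18:          # the internal clock says "evening"
        if switch == "sidelights":
            return "sidelights"   # my workaround: douses the main beams
        return "headlights"       # forced on, whatever I actually want
    return switch                 # in the daytime the switch is respected
```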

This is fine, but the problem comes when I get out of the car – because it’s still broad daylight outside, there’s no obvious cue to remind me to check the lights are turned off. Compounded with the lack of a warning sound, and the lack of any automated feature to turn the lights off once the engine has been switched off, this makes it easy to forget.

I think this is interesting because, while there are obviously good intentions in ensuring drivers have their headlights on at appropriate times, there’s a major mistake in this sort of implementation of automation. The mistake, I think, is to endow the system with a “will” – even if it’s a reflection of the will of the designer.

Before this starts to sound a bit Schopenhauer, I don’t think this is psychologically unfamiliar – we naturally start to think of systems that enforce decisions based on encoded tables of values as being “wilful”. Even if we agree with the ends, a very natural (and often the first) response is to resist the interference, especially when it comes out of a “dumb machine”. The outcome (sidelights on) became a compromise in a (ludicrous) wilful struggle between me and the car. Because this outcome was now disconnected from the actual ends of both intentions (I want the lights off because it’s broad daylight; my car wants the headlights on because it’s past 6pm), the usual feedback cues didn’t match up and I screwed something more major up, with the consequence of no working car until I get some jump leads.

So we’re clearly in the business of introducing automation – to workflows, user interfaces, and development itself. Obviously it’s generally great. But, especially in user interfaces, this is a pitfall worth avoiding – sometimes the human on the other end is the best arbiter, and (after making suggestions) we can defer to their decisions, rather than introducing a “will” which they might feel inclined to fight.
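
In code, the difference between the two designs is small but telling. A minimal sketch, assuming some hypothetical prompt_driver yes/no callback supplied by whatever UI we’re building – none of these names come from a real system:

```python
def wilful_lights(is_evening: bool, switch: str) -> str:
    # The "wilful" design: the encoded table of values wins, the human loses.
    return "headlights" if is_evening else switch

def deferential_lights(is_evening: bool, switch: str, prompt_driver) -> str:
    # The deferential design: suggest, then accept the human's decision.
    # prompt_driver is a hypothetical yes/no callback supplied by the UI.
    if is_evening and switch != "headlights":
        if prompt_driver("It looks dark out - headlights on?"):
            return "headlights"
    return switch
```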

I’m sure we’ve all seen examples of “wilful” software. Apart from anything else, users are just too inventive and stubborn to give up, and will frequently invent workarounds which may be very bad solutions to a problem – like sidelights. If you are going to automate beyond any intervention, you need to cover all the edge cases as well – really, my car should have turned the lights off itself after an hour or so with the engine off.
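
That last fix is easy enough to sketch – a simple watchdog, where the one-hour figure is just my suggestion and, again, nothing here reflects how a real car is wired:

```python
import time

AUTO_OFF_SECONDS = 60 * 60  # my suggested grace period, not a real Volvo value

def should_kill_lights(lights_on: bool, engine_off_since: float | None) -> bool:
    """Return True once the lights have burned for an hour with the engine off."""
    if not lights_on or engine_off_since is None:
        return False
    return time.time() - engine_off_since >= AUTO_OFF_SECONDS
```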