Categories
Augmentation, Complex Systems, Wearables

Right Effort, Right Interface

So here’s the irony. If I stare straight ahead right now, all I can see are screens. My new desk, with my new monitor setup, takes up almost all of my field of view, and yet, as luxurious as this is, it’s led me to hours at a time away from screens altogether.

Here’s the thing: while for some types of work, two (or, swoon, three) big monitors and an associated ergonomically enhancing desk setup would seem to be the thing (see Multiple Screens and Devices in an Information-rich Environment, for example), I’m profoundly loath to sit at a desk all day. I live in Southern California, and there’s sunshine I don’t want to miss out on.

Moreover, the years of following a GTD practice have led me to realize the power of breaking my work up into chunks of, so to speak, right thing, right place, right time. Being able to escape one form of computing, and with it, the form of thinking that is implied and enforced by those tools, is a powerful technique.

At the same time, as a student of Zen Buddhism, I’m trying to pay attention, in a very active sense, to what I do as a daily practice in all parts of my life. Right now it’s the Spring Practice Period at the San Francisco Zen Center (I’m attending online from down here in LA), and the theme this season is Wise Effort. The associated meditations have me thinking on this anew, and from there, in part, on how much time I spend on my phone.

Every morning, for example, I take my daughter to school, and we stop off for coffee/steamed milk on the way. Too often have I caught myself engaging with Twitter rather than with her during those brief minutes, and that doesn’t require exquisite Zen training to uncover as not-ideal for either of us.

So for the past few weeks, I’ve been wrapping my phone up in a cloth, furoshiki-style, and putting it into my bag. The added friction of getting it out, unwrapping it, turning it back on, et-connecting-cetera, takes away the quick hit of checking notifications, and the inevitable in-suck from there. It works well.

So far, so digital-detox middle-aged-angst. But it has led to something new.


I need to work on my 風呂敷 technique.

Leaving my phone wrapped up is anxiety-inducing, because while I know my own work schedule well enough, there’s always an ego-driven part of the brain that thinks I’m an air-traffic controller with a part-time heart surgery practice. Those neurons want me to check my inbox on a regular basis, just to make sure I’m not being called into action for something that has to happen right now now now. By, you know, something on Twitter. ¯\_(ツ)_/¯

This is bollocks, obviously, but it’s nonetheless an itch that needs to be scratched. Combine it with my podcast-listening habit, and you have two major drives pulling on me to unwrap the thing and pull it out every few minutes.

But I don’t. I’ve defaulted to my watch as my primary mobile platform. Podcasts I can get, after installing Outcast for Apple Watch, and the rest of my super urgent messages and pushes can be relied on to come through just fine. (Which is to say, never, as I’m not a coastguard).

Thus becalmed, my brain can get on with other stuff. And here’s the new thing. As I mentioned on Twitter yesterday, as part of a thread started by the splendid @hondanhon: by defaulting to a new, tiny platform, I’m forced to use its features more deeply. Turns out, they’re really good.

Many of my work tasks, it seems, are much better served by combining the pens and paper of my choice with Siri-based interaction with my watch via AirPods. While my desktop machine has a whole monitor dedicated to my inboxes, OmniFocus, and calendar, and my weekly review is for sure a multiscreen activity, if I’m planning a talk, for example, I just need to send quick messages and drop reminders to myself. I can do that, and even some pretty nice long-form dictation into Evernote, via Siri. I’m both connected to my IT ecosystem and pleasingly untethered from it.

Is it the perfect platform for the entire day? No. But it does speak to an interesting trend. As my phone gets ever more powerful, there’s sometimes a feeling that it’s too powerful to be brought out without proper psychological preparation. It’s too moreish a device to just have in my pocket, and most of the time I have no operational need for it to be quick-draw holstered there either.

And also, voice interfaces are pretty good – the household menagerie of smart speakers speaks to that already – and they offer a new way of interacting that doesn’t require eyes or fingers. In the kitchen, or the bedroom, a quiet word to an Alexa is almost always cognitively appropriate to the computational task at hand.

That’s what I suppose I’m searching for here: matching an appropriate interface to the task, rather than matching my tasks to the interface I have in front of me; deciding what to do based on priorities other than what technology I am sat with. This could involve breaking habits I didn’t know I had. Or, at least, having to become aware of how much my available toolset shapes my thinking. Let’s see. Onwards.

Categories
Complex Systems, Futures, Wearables

Spectator Cockroaches, Sand, and the Social Facilitation of Skeuomorphs.

It was an experiment on cyclists in 1898 that first showed us how we might live with robots. It’s a really interesting observation. Let me tell you about it.

So. I love bots. Give me a pseudo-human interface, a smattering of natural language, and a computery voice, and I’m all yours. This year I’m working on a project to discover just how useful they can be. With wearables and systems like the Amazon Echo, we’re about to need to deal with a lot of these things, and it seems to me it’s not so much the technology as the user psychology that we need to pay the most attention to, and so we need to ask what we know about these things already.

Discussing this with Dr Krotoski, my very local social psychologist, I was pointed to the seminal paper, The Dynamogenic Factors in Pacemaking and Competition, by Norman Triplett, The American Journal of Psychology, Vol. 9, No. 4 (July 1898), pp. 507-533.

This is basically the ur-text of Social Psychology. You can read the original paper for details of the experiment, but the simplified conclusion was this: if a person is being watched, they find easy things easier, and harder things harder. It turns out, from other experiments, that this is true for many species. For example, cockroaches will run a simple course much more quickly if they have spectators too – and a complex maze more slowly (Zajonc, R. B. (1965). Social facilitation. Science, 149, 269-274).

 


A maze for cockroaches, with spectator seating.

Further research, specifically Social facilitation effects of virtual humans (Park, Human Factors, 2007 Dec; 49(6): 1054-60), got to the nub of it: “Virtual Humans” produce the same social facilitation effect. In other words, the presence of a bot will make simple things simpler and hard things harder, simply by being there, “watching”.

This, it seems to me, is quite a big deal. If we’re designing systems with even a hint of skeuomorphic similarity to a conscious thing – even if it just has a smiley face and a pretty voice – it might make sense for it to ostentatiously absent itself when it detects the user doing something difficult. This might be the post-singularity reading of the Footprints In The Sand story, but nerd-rapture aside, it’s an interesting question: when is it best for context-aware technology to decide to disappear? When the going is easy, or when the going gets tough?
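As a back-of-the-envelope illustration – and this is only a toy sketch, not a real system; the difficulty signal and the threshold below are invented for the sake of argument – the design rule itself is almost embarrassingly small to express. Something like this, in Python:

# Toy sketch: an assistant that "ostentatiously absents itself" when the
# task looks hard. The difficulty estimate and the 0.6 threshold are
# entirely hypothetical, made up for illustration.

from dataclasses import dataclass

@dataclass
class PresencePolicy:
    hide_above: float = 0.6  # hypothetical difficulty threshold

    def should_appear(self, estimated_difficulty: float) -> bool:
        # Social facilitation: company helps with easy, well-practised
        # tasks and hinders hard ones - so only show up when it's easy.
        return estimated_difficulty < self.hide_above

policy = PresencePolicy()
print(policy.should_appear(0.2))  # routine task -> True: stick around
print(policy.should_appear(0.9))  # hard task -> False: quietly disappear

The hard part, of course, is that estimated_difficulty input: detecting that the going has got tough is itself a tough problem.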

Furthermore, I’m not sure if we know yet the Uncanny Valley-like threshold of “humanness” that triggers the social facilitation effect: do cameras have the same effect? Or even just the knowledge that someone is surveilling you? But this has serious implications beyond AI design.

For example, the trend for the quantified workplace, where managers can gather statistical data on their employees’ activities, might be counterproductive: not simply because of the sheer awfulness of metrics, but because the knowledge that they are being watched might make the more complex tasks an employee needs to do inherently more difficult, and hence less likely to be attempted in the first place.

For the most challenging tasks we face, the problems requiring the most cognitive effort and the most imaginative approaches, we may find that many of our current social addictions – surveillance, testing, and so on – might be deeply harmful. “It looks like you’re writing a letter,” as Clippy would say. “Would you like me to make that subconsciously more difficult for you?”

Categories
Complex Systems, Futures

Future-Dense Sentences

There’s a technique for pondering emerging technologies that originated, I think, with Jamais Cascio. Imagine you’d been instantly transported back in time x years in a particular place. How many years would you have to have travelled before you noticed you had slipped back in time? What would give it away? People’s clothing? The music on the radio? Headlines on newspapers in the first papershop you come across? The cars, the phones people are carrying, the TVs you can see through the windows you pass? Sat where you are now, could you tell if you’d suddenly dropped back to 2006? 2001? 1989?

Ok, you’re on the internet, so that breaks that, but it’s a fun game to play if you travel a lot, and can be also quite revealing within institutional buildings. Applied to business processes or cultural values, it can uncover a good deal too. 

My variation on this is to look for the places, or the ideas, or the writing, that are the most future-dense. What sentences can we find that contain the most stuff that, were we to fall back in time only a few years, would make no sense whatsoever? Which contain the most embedded understanding of wholly modern concepts? Here’s a good one, from this morning:

See what I mean? Go back ten years, and that would be crazy. Go back thirty years, and you’d have to start from such first principles that you’d be considered mad.

Here’s another from earlier this year that, at first glance, reads, technologically at least, as entirely, boringly banal:

If you fell back thirty years to 1985, think of all the things about this screenshot you’d have to explain, and all the layers you’d have to fill in before you could. “Ok, so…[deep breath] the President of the United States is a black man named Barack Obama. Yes, really. This is a message he has left on a microblogging service on the web…ermmm, it’s a service based around a new hypertext protocol on the internet. Yes, that thing the scientists use. Kinda like a bbs, yes. But with a few billion users. Yes. Billion. With a B. Anyway, he’s saying he’s going to binge-watch a show on Netflix. Netflix? It’s a streaming video site…oh…well, it’s a place…errrrr…Retweets? Spoilers?…I…you know…I think we should drop it.”

Anyway, looking for these brings me to a couple of things. Firstly, it’s a useful koan-like personal thought experiment to find new insights around a place or an organisation or a cultural moment. At the very least, it’s entertaining.

But secondly, I think it raises, once again, the realisation that our future world is heavily, fundamentally layered: that problems have no simple solution that a single technology plonked on top will fix. Instead, it is the interplay of the complex systems – complex, not necessarily complicated – of culture, technology, politics and so on that will come together to make tomorrow’s banal commonplace thing. That complexity, I think, is both deeply exciting and – hopefully – humbling. The future is not about the tech. It’s perhaps the other way around.