Andy Thornton
1st June 2015

It’s been just over a week since UX London came to a close for another year. Beyond the much-anticipated debut of our hexadecimal blonde ale #DAA520 (AKA Golden Rod), there’s been much to muse over.

Here’s a summary of the talks from each day.

Day 1: Products

First up Julie Zhuo talked openly about some of the work going on at Facebook and how their concept of ‘Ship Love’ (Valuable + Ease of use + Well crafted) underpins everything they do.

Des Traynor from Intercom had some helpful observations on how to design, maintain and evolve a good product, framing his argument around three simple areas: defining the product, its functions and a roadmap for implementation. Using a matrix of usage vs frequency (i.e. how many users use feature x, and how often?) he suggests you can prioritise the meaningful features and avoid feature bloat.

John Willshire pivoted slightly from his initial talk synopsis to propose a more meaningful way of mapping out processes that can work for the holy trinity of Clients, Users and Ourselves, without truly failing any of them. Can using a Flow Engine to identify the next most valuable step to take in your project – in his example by either improving its fidelity or sharing it with a wider audience – free us from the constraints inherent in the ever-present framing of design processes by time? John says yes.

In a theory-rich talk, Anthony Mann introduced us to the learning theories of psychologist Jerome Bruner, and encouraged us to get closer to an ‘enactive’ mode of representation when designing interactions for UI. In short, this involves making things as physical and manipulable as possible through prototypes of increasing fidelity.

Jon Kolko recapped his IA Summit talk by running us through a best practice approach to empathy-centered design, and referencing a case study of it in action for myEdu. Somewhere in between he also gave us a heads-up on the defining characteristics of a good Product Manager with a summary of the important attributes required of the role: Storytelling, Synthesis, Listening, Curiosity & Affability. Sound familiar?

Day 2: People

Stephen Anderson reminded us to sweat the UX details by using evidence from cognitive science (e.g. change blindness and gaze cueing) to convince skeptical colleagues and design-illiterate stakeholders of the impact and necessity of exceptional interaction design. Stephen argued passionately that these characteristics are intrinsic to delivering successful digital experiences in our increasingly competitive landscape.

Author of the UI sci-fi bible Make It So, Chris Noessel, introduced us to the idea of ‘Meaning Machines’: various tools taking a myriad of forms throughout the ages (from animal livers to exquisite corpses to riffs on Mad Libs) that help us derive new meaning from randomness, highlighting their potential as creative sources of inspiration. Despite the subtext, I don’t think he was advocating live extraction of calves’ livers, mind you.

Adam Connor gave us some gentle reminders about our bad habits when reviewing each other’s design work, and some tips to avoid losing sight of the author’s intentions behind their decisions. Great stuff to help us all suppress our inner Kanye in the critique.

Cecilia Weckstrom provided some insight on the ‘kids first’ decision-making behind the ongoing (re)design of LEGO.com, with useful guidance on engaging children using a visual, audible and written language that makes sense to them. It even came with a motto for all creatives to live by, from Sam Levenson: “One of the virtues of being very young is that you don’t let the facts get in the way of your imagination.”

Kim Goodwin warned us that organisational change is a stubborn beast. From consensus-building Clans to process-allergic Adhocracies, different organisational cultures present unique challenges, and the values expressed on paper aren’t always the values acted on. Kim advocates Design Principles as a way to leverage change by reframing the conversation and influencing decision-making one step at a time in these complex human systems. I’m pretty sure a sneaky iceberg metaphor floated into the slide deck somewhere (#drink).

Day 3: Platforms

Karen McGrane’s enticingly titled talk on Content in a Zombie Apocalypse was sadly bereft of entrails of the walking dead, but both-barrels-loaded full of handy hints at approaching content workflow in the diverse world of devices and screens we now live in. Karen advocated a paradigm shift away from our print-centric, container-first thinking, to brace ourselves and barricade our content against whatever the next ‘big thing’ around the corner is. That thing sadly being unlikely to be toast, but here’s hoping.

Up next, Brad Frost brought us all up to speed on his Atomic Design framework for creating and maintaining robust design systems. But also, more importantly, the key to getting painless client sign off in your design mock-ups: photos of Beyoncé.

In the final part of this trilogy of ‘stuff about screens’, Patrick Haney & Jenna Marino co-presented on the impact of the new multiscreen reality: users actively engaging with different devices as part of the same ‘session’. Patrick and Jenna brought attention to three approaches to consider when solving this challenge: Consistency across devices, Continuous progression regardless of device, and Complementary simultaneous experiences from one device to another.

Rachel Hinman reminded us that in a male-dominated industry, it’s easy to lose perspective on the demand for wearables that exists outside of the plethora of fitness trackers that capture data and monitor our performance. Through a case study demonstrating a 3D-printed dress, and reference to other projects from the world of fashion, she encouraged us to consider the expressive potential of wearables for the wellbeing of the wearer, and to avoid putting the technology first.

Tom Coates of upcoming thingyverse(?) Thington took us on a dizzying journey of discovery through our emerging world of connected gubbins. He theorised that a world of internet-enabled, uniquely ‘enchanted’ objects may not actually be that scalable for the future success of IoT. For example, imagine trying to memorise all the potentially unique interaction gestures and commands your connected home could require. From your weather-forecasting umbrella, to your body-heat-sensing thermostat and voice-enabled light switch, how do we convey these new digitally-covert affordances and make them easily configurable? We never did see Jean-Luc Picard setting his preferred lumens of brightness on the USS Enterprise’s “lights!”, did we?