Clearleft | Blog
https://clearleft.com/posts/
The latest news from Clearleft

How to spring-clean your content
https://clearleft.com/posts/how-to-spring-clean-your-content
Wed, 25 Mar 2020 14:57:00 +0000

While spring is traditionally time for a clear-out, we often shy away from the boring ‘tidy up’ tasks. In the current climate, when we might have some down-time as projects get paused, it seems like a good opportunity to pick up some of those jobs we’ve been putting off.

Cleaning up your content essentially means starting with a content audit to establish what you have and what kind of state it’s in. If your site has grown organically over time without much of a strategy behind it, the chances are you have a lot of fragmented and inconsistent content, as well as content that’s out of date or not used at all by your customers.

Regaining focus and creating a strategy for content takes time and effort; it’s not an overnight job. But understanding how your existing content needs to be improved is a great starting point.

In its basic form, a content audit allows you to assess your content against set criteria and work out what action you need to take with it. Some suggested criteria are listed below, but you may decide to pick just a handful of these depending on time, access to data, availability of brand guidelines and so on.

Page views

How many people are actually viewing this page? Does this account for a high volume of traffic?

Bounce rate

How many people are leaving this page without going anywhere else on the site? This might be a good thing (for example, if the page provides a phone number or a link to another site, this might be the expected outcome), but if you’re hoping visitors go on to another part of your site and they’re not, then you’ve uncovered an issue with your content.

Findability on site

Can the page be easily found from your main navigation or is there a clear route to this content?

Findability on Google

How high does this page rank in search? Going through this exercise is a great way to also audit the metadata and snippet text (but more about that later).

Tone of voice

Is the content of the page reflecting your brand voice?

Accuracy

Is the content still relevant and up to date? Are there any typos or technical inaccuracies?

Alignment to principles

Does the content reflect your brand principles or design principles? For example if one of your principles is ‘Human’, you’d expect your content to sound human, be active (rather than passive) and feature real-life examples or people.

Value

What is the main thing you want users to do on this page? Does the main CTA reflect that? And are people doing what you want them to do from this page?

Usability

Is your content clear and simple? Is it structured in a way that’s easy to understand, or are users struggling? It might be obviously unclear, or you may have to dig deeper into user behaviour to find out, for example by doing some more in-depth usability testing.

How do I start my audit?

I’ve found that boring old spreadsheets work best. Sorry! I know some people use tools such as Airtable, but you’ll have to import the raw data first.

Begin by extracting a list of your site pages with a tool such as Screaming Frog and export the data into a spreadsheet. Remove the columns you don’t need and clean it up a bit. It’s also worth adding in a column to show what displays in search.

Now create a column to add the stage of the journey a user is at for each page, and the user’s goal for this stage. For example you might have ‘browse options’ as the stage, and then the user goal as ‘pick an option’.

Then create columns for the criteria you’re assessing against, so you’ll end up with something that starts to look like this:

Field labels on spreadsheet
Field labels

For the criteria you’re judging against, set some parameters and some conditional formatting. For example, for ‘Tone of voice’ you may want to say ‘Yes’, ‘No’ or ‘Somewhat’, with the cell becoming red for ‘No’, green for ‘Yes’, and amber for ‘Somewhat’. Using this kind of RAG (Red, Amber, Green) rating system for your criteria will then help you see at a glance which pages are not up to par and will need to be optimised.

Spreadsheet fields showing red, amber, green
Your rating system coming to life
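If you keep your audit in a plain export rather than a live spreadsheet, the same RAG logic is easy to script. This is a minimal sketch, assuming hypothetical page names, criteria, and the Yes/Somewhat/No rating scale described above:

```python
# Sketch: apply a RAG (Red, Amber, Green) rating to audit rows.
# Page names, criteria, and ratings are hypothetical examples.
RATING_TO_RAG = {"Yes": "Green", "Somewhat": "Amber", "No": "Red"}

pages = [
    {"page": "/about", "Tone of voice": "Yes", "Accuracy": "Somewhat"},
    {"page": "/pricing", "Tone of voice": "No", "Accuracy": "No"},
]

def rag_summary(page):
    """Count how many criteria scored Red, Amber, and Green."""
    counts = {"Red": 0, "Amber": 0, "Green": 0}
    for criterion, rating in page.items():
        if criterion != "page":
            counts[RATING_TO_RAG[rating]] += 1
    return counts

# Pages with any Red rating are the first candidates for optimisation.
needs_work = [p["page"] for p in pages if rag_summary(p)["Red"] > 0]
print(needs_work)  # ['/pricing']
```

The spreadsheet’s conditional formatting is just a visual version of this lookup, and the idea scales to however many criteria you choose to track.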

The review process

Your page assessments will take time, and if a stakeholder is responsible for the content it might take a while to ask questions or identify the purpose of the page. It’s a lot easier when a content team has created (and is responsible for) all of the content, as they have most of the answers.

I recommend picking off a few pages at a time, and you might want to prioritise your high-traffic pages first. It’s also worth adding a column for comments (what is the page doing well or not doing well?); you can also note any particular typos or issues you spot here. Also add a column to note the page owner, and the action that needs to happen next. Typical actions for the page might be:

  • Remove (page is no longer relevant or not being used)
  • Optimise (it needs updating or improving)
  • Merge (with another page that might have a similar purpose or similar content)
  • You might also want to add ‘Investigate further’ as an option for pages you don’t know enough about.

    How do I prioritise actions?

If there are a number of pages that can be removed then clean those up right away. For the rest of the content, if it’s important for users and will positively impact your business when optimised, then prioritise it. If it’s not really adding value to users, then the existence of the content should be questioned with the content owner (you should now be able to demonstrate this through your rigorous assessment, so make use of your audit in conversations). If you’re the content owner, then have a stern chat with yourself about how this content came to exist!

Add a column for High, Medium or Low priority to your spreadsheet so you know what you need to tackle now, next and later on.

    The other benefit of carrying out an audit is that it can help identify content gaps. You’ll find assessing the content against the user’s goal gives you a different perspective. Are you really giving them what they need at this stage of their journey? If not, add that to your comments column and record an action to improve it.

    Venn diagram showing user needs and business needs
    Where user needs and business needs overlap

    Your audit is going to be a long slog if you have a big website so think about attacking it in chunks, and set a target of maybe 30 pages a week. This makes it something that’s manageable to do in between other work or split across content team members. The more unruly your site, the longer this will take.

    The good news is that you now have a model for any new content. The criteria in your audit sets out a benchmark for future content requests — what’s the purpose, what will success look like, does it follow brand principles and style, etc? But content briefs…well that’s a whole new blog post!

    Once your audit is complete, you’ll have a clear, prioritised list for cleaning up your content, making your site content simpler, more relevant, and much more impactful.

    More about content audits

    Some very wise strategists have written great articles to help you learn more. Try these for starters:

    How to plan a content audit that works for you — Lauren Pope

How to embrace (and gently encourage) the content audit — Kristina Halvorson

Design Maturity Assessment
https://clearleft.com/posts/design-maturity-assessment
Thu, 19 Mar 2020 11:02:00 +0000

In our 2020 survey we are looking for your support to further understand how design works in organisations around the globe. How mature is design as a discipline in your organisation?

Last year when we launched the Design Effectiveness Survey, our goal was to start exploring the conditions under which design could best make an impact on an organisation’s goals.

    By surveying designers across the world, our report found three things which greatly increased design’s impact:

    1. Design teams being empowered by executive management to identify and pursue unrequested ideas,
    2. A physical working environment which supports collaborative design activities,
    3. Undertaking regular design research.

    The power of collaboration, empowerment and research helped form the basis for a new tool we were developing to assess the digital design maturity of a well-known manufacturing brand.

    We wanted to be able to quantifiably measure the state of digital design practice within an organisation across five indicators:

    • Collaboration — the ability to build shared understanding & alignment,
    • Empathy — the organisation’s curiosity in customers and human-centred design,
    • Impact — design’s contribution to business success,
    • Trust — the organisation’s empowerment, influence and belief in design,
    • Purpose — how design is deployed to help solve significant challenges.
    The Design Maturity Assessment showing the five factors and a circle for the score out of 100
    Design Maturity Assessment scorecard
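The post doesn’t detail how the score out of 100 is derived, but as a purely hypothetical sketch, a scorecard like this could aggregate the five indicators by averaging them (the indicator scores below are invented):

```python
# Hypothetical: combine five indicator scores (each 0-100) into a
# single maturity score by simple averaging. All values are invented.
indicators = {
    "Collaboration": 72,
    "Empathy": 58,
    "Impact": 64,
    "Trust": 49,
    "Purpose": 55,
}

maturity_score = round(sum(indicators.values()) / len(indicators))
print(maturity_score)  # 60
```

A real assessment might weight the indicators differently; the point is only that five 0–100 factors reduce naturally to one benchmarkable number.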

    We immediately saw the benefit of encompassing all five factors of design maturity into our 2020 survey. This survey will help us get a flavour of how present and important these factors are in organisations across the world and allow us to create a global benchmark.

    We would appreciate your support in completing and sharing this survey with your team, friends and partners. We are interested in your opinion whether you are in the design team directly, or involved in design from a distance.

    The report will be shared with everyone who completes the survey.

Going remote
https://clearleft.com/posts/going-remote
Tue, 17 Mar 2020 12:53:00 +0000

As of 17 March 2020, Clearleft have taken the decision to close our office and send our employees home, where we’ll be working remotely until the end of April 2020.

    We are committed to the health and well-being of our colleagues, and in these challenging times we feel this decision will benefit staff and the wider community alike.

    Rest assured this is strictly a precautionary measure. Fortunately, we at Clearleft are well set-up and accustomed to working remotely, both as a team and with our valued clients in the UK and beyond. Whilst the current situation will certainly change our approach, it won’t change the outcomes we achieve with our clients.

    We must all do our part to help contain the spread of COVID-19, and we will continue to evaluate our options as the global situation develops. We hope you and your loved ones are safe during this unprecedented time.

Using content strategy to present your research findings
https://clearleft.com/posts/using-content-strategy-to-present-your-research-findings
Thu, 12 Mar 2020 11:30:00 +0000

One of the good things about being a content strategist is that you get to offer practical help to other disciplines (as well as learning heaps from them in return).

    We couldn’t do our jobs without research, and so when researchers ask for content help I am always happy to oblige!

Structuring research findings can be really tricky — you need to be able to tell a story but also make sure you address your original brief. It’s a presentation, but one which could have serious implications for your product if you don’t position it in the right way. Some presentation training I’d had a while back got me thinking about how you could apply similar principles to compiling research outputs. There are some mandatory elements for playing back your research, but also different ways to position your findings. Here’s my advice on what to include:

    doodles of research findings on paper
    When you have all the stuff but just don’t know how to tell the story

    Introduction

    To make sure your introduction really sets up your findings, here’s what to include:

    • What’s the background to the project, why did you carry out the research, what was your objective?
    • Did you have a hypothesis or some assumptions to prove/disprove?
    • Who were the participants?
    • What were the methods used?
    • Where was the research carried out? Was it carried out in different markets?

    Findings

The bulk of your playback should be what you discovered. It can be hard to know how to position this, so here are three different ideas for presenting back your findings:

    1. Present each key finding followed by the supporting observations.

This is one of the most common techniques. While it can be tempting, don’t list every single quote or insight; pick just two or three that strongly support the finding. It’s also worth including only a handful of the most relevant findings in the bulk of the presentation, and keeping the rest in an appendix.

    2. Present the hypothesis followed by some findings that either confirm or conflict with it.

As a researcher, your role is to present back the findings but not to come up with solutions. One way to keep your presentation objective is to simply list each hypothesis, with the findings that either confirm or conflict with it. Again, stick to just the strongest quotes and observations as evidence; quality will always beat quantity.

    3. Present the ‘vision’ for the product followed by the reality.

Perhaps a more unique and thought-provoking way to present back research is to remind stakeholders of their product vision, but then show an alternative user perspective. Your observations/insights will either strengthen the vision or show an alternative view, and this can be quite powerful for stakeholders to see and can influence their direction.

    List the key (strongest) supporting observations/quotes as before, and keep the weaker ones in the appendix if anyone needs more detail later on.

In any of the above methods, audio or video clips are always stronger than quotes written out on a page, but do be aware of how you’ll be playing them back. It might be a good idea to include the written quote as well as the video or audio clip to avoid technical constraints.

    Summary

    Summarise three or four key messages from the findings — not all of them. And stick to the most compelling content or problematic issues.

    Appendix

In this section you can include more detailed quotes, observations or insights (or links to videos) for those that want more detail.

Don’t forget about the story you want to tell. While you need a conclusion, your role is to present the insights, not to jump to recommended solutions (unless you’re recommending further research, of course!). Slides should be simple to read, with clear headings and well-laid-out content.

    The strength of your research depends on a clear, compelling playback, so invest extra time to practise your presentation with another team member before the main event (maybe your friendly content designer!). And don’t forget to proof-read!

    This post was originally published on Medium

Researching the research
https://clearleft.com/posts/researching-the-research
Fri, 06 Mar 2020 14:00:00 +0000

I’m helping to run a research repositories workshop in March. Find out more about the project and how you can help.

    For those who don’t know about the ResearchOps community, they are “a global group of people who’ve come together to discuss the operations and operationalization of user research and design research”.

Similar to the DesignOps community, their aim is to further the practice through process and technological advancements.

    The community is built on three core beliefs:

    • ResearchOps is an emergent consequence of knowledge work at scale
    • Scaling the operation of research requires specific skills and attention to corporate memory
    • Our collective experience holds the keys to building capability within the profession

    The community regularly holds global workshops to discuss and collaborate on specific projects.

    The rise of Research Ops is something we’ve been exploring through our panel events.

    re+ops research ops community logo

    Everything in its right place

    The most recent project to emerge from the community is the research repository project. It is exploring how research is processed, stored and retrieved. The current definition of a repository differs widely across the industry. Solutions range from live products to home-spun ‘hacks’. For these reasons, the project is starting broad to understand the nuances of the process.

The overall aim is to:

    • Evaluate the pros/cons of having a research repo with regard to a variety of research methods
    • Gather a folksonomy for each method and develop a structured taxonomy for human research
    • Review strengths/weaknesses of governance in terms of practice and policy
• Create a list of useful requirements for repo builders to prioritise their roadmap.

Over the next few months, there will be workshops held worldwide to feed into the project. The core of the workshop will be an experience mapping activity. Currently, 55 organisers in 20 countries have signed up, making it one of the biggest projects to date!

I’ll be helping to run a Brighton workshop later this month. If you are involved in the practice of research, we’d love to see you there!

Telling the story of performance
https://clearleft.com/posts/telling-the-story-of-performance
Tue, 03 Mar 2020 13:14:00 +0000

Competitor analysis and performance are a match made in heaven.

    At Clearleft, we’ve worked with quite a few clients on site redesigns. It’s always a fascinating process, particularly in the discovery phase. There’s that excitement of figuring out what’s currently working, what’s not working, and what’s missing completely.

    The bulk of this early research phase is spent diving into the current offering. But it’s also the perfect time to do some competitor analysis—especially if we want some answers to the “what’s missing?” question.

    It’s not all about missing features though. Execution is equally important. Our clients want to know how their users’ experience shapes up compared to the competition. And when it comes to user experience, performance is a huge factor. As Andy says, performance is a UX problem.

    There’s no shortage of great tools out there for measuring (and monitoring) performance metrics, but they’re mostly aimed at developers. Quite rightly. Developers are the ones who can solve most performance issues. But that does make the tools somewhat impenetrable if you don’t speak the language of “time to first byte” and “first contentful paint”.

    When we’re trying to show our clients the performance of their site—or their competitors—we need to tell a story.

    Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.

    You can go to webpagetest.org/easy if you don’t need to tweak settings much beyond the typical site visit (slow 3G on mobile). Pop in your client’s URL and, when the test is done, you get a valuable but impenetrable waterfall chart. It’s not exactly the kind of thing I’d want to present to a client.

    Fortunately there’s an attention-grabbing output from each test: video. Download the video of your client’s site loading. Then repeat the test with the URL of a competitor. Download that video too. Repeat for as many competitor URLs as you think appropriate.

    Now take those videos and play them side by side. Presentation software like Keynote is perfect for showing multiple videos like this.

    This is so much more effective than showing a table of numbers! Clients get to really feel the performance difference between their site and their competitors.

    Running all those tests can take time though. But there are some other tools out there that can give a quick dose of performance information.

SpeedCurve recently unveiled Page Speed Benchmarks. You can compare the performance of sites within a particular sector like travel, retail, or finance. By default, you’ll get a filmstrip view of all the sites loading side by side. Click through on each one and you can get the video too. It might take a little while to gather all those videos, but it’s quicker than using Web Page Test directly. And it might be that the filmstrip view is impactful enough for telling your performance story.

    If, during your discovery phase, you find that performance is being badly affected by third-party scripts, you’ll need some way to communicate that. Request Map Generator is fantastic for telling that story in a striking visual way. Pop the URL in there and then take a screenshot of the resulting visualisation.

    The beginning of a redesign project is also the time to take stock of current performance metrics so that you can compare the numbers after your redesign launches. Crux.run is really great for tracking performance over time. You won’t get any videos but you will get some very appealing charts and graphs.

Web Page Test, Page Speed Benchmarks, and Request Map Generator are great for telling the story of what’s happening with performance right now. Crux.run balances that with the story of performance over time.

    Measuring performance is important. Communicating the story of performance is equally important.

    This was originally published on my own site.

Tiny Lesson: How to run a BERT test
https://clearleft.com/posts/tiny-lesson-how-to-run-a-bert-test
Sun, 01 Mar 2020 13:55:00 +0000

A BERT test allows you to measure how people emotionally perceive your brand through digital products such as a website or mobile app.

    In this lesson, we share how you can run one as part of your next concept test.

    Watch the video, or read the transcript below.

    How to run a BERT test

BERT stands for Bipolar Emotional Response Test.

    You’ll need some Artefact cards or Post-It notes, some Sharpies, and a laptop to analyse the results.

    Start off by generating a longlist of adjectives that describe how your brand should be recognised.

    Next agree on which adjectives are most important to your brand. Tone of voice and branding documents are a great starting point. Agree on 5-6 adjectives to include in your test.

Now pair each adjective with an opposing or related one. For example, confident and reserved, or premium and budget.

To create the test, make a 1×7 grid. Add one adjective at either end of the grid, leaving the middle five cells blank. Repeat this for each pair.

    During the next round of usability testing include a BERT test at the end of the session. The participant works down the form selecting a point between the 2 adjectives that they feel best describes the ‘personality’ or ‘feel’ of the product. When testing with multiple concepts, include a BERT test for each one.

    Once all the tests have been completed, enter the results into a spreadsheet. The analysis will show the strength of agreement across your participants.
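The spreadsheet analysis can be sketched in a few lines. Assuming each participant’s answer is recorded as a point from 1 to 7 between the two adjectives in a pair (the pairs and scores below are made up), the mean shows where the concept sits and the standard deviation shows the strength of agreement:

```python
from statistics import mean, stdev

# Hypothetical BERT results: one list of 1-7 scores per adjective pair
# (1 = strongly the first adjective, 7 = strongly the second).
results = {
    ("confident", "reserved"): [2, 3, 2, 1, 3],
    ("premium", "budget"): [2, 6, 1, 7, 4],
}

for pair, scores in results.items():
    # A low standard deviation means participants agreed strongly;
    # a high one means perceptions of the concept were split.
    print(pair, "mean:", round(mean(scores), 2),
          "spread:", round(stdev(scores), 2))
```

In this made-up data, participants agree the concept feels confident, but are split on premium versus budget — exactly the kind of pattern worth discussing when you regroup.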

Now regroup and discuss the results. Which concept performed best? How did internal and external opinions differ? Focus on the worst-performing adjectives in future design iterations.

    Resources

    Download the spreadsheet here.

The changing nature of product teams
https://clearleft.com/posts/the-changing-nature-of-product-teams
Fri, 28 Feb 2020 13:47:00 +0000

Clearleft hosted our third lively morning of debate on the theme of ‘From idea through to delivery’. The first panel discussed how product teams balance business-as-usual with innovation.

    Our first panel featured three digital-first businesses. They’re all 5-10 years old and at different stages of building out their product function.

    You can watch the full panel video here.

    Tension between bottom-up and top-down ideas

    The panel discussed how to level the playing field when it comes to innovation. They all found that the structure of design and research teams plays a key role. An active CEO is also a factor.

    • At ClearScore they know the product is working, but to achieve the active CEO’s vision requires dedicated teams. So rather than random offshoot initiatives, they use standalone teams. They give those teams a bit of space and let them innovate.

• Receipt Bank are operating in a post-founder world. They needed a whole new cadence of generating strategic visions. It has taken organisational change to shift to an evidence-based approach to creating and validating user problems. But some problems are easier than others:

    Very different conversations happen between user problem spaces and optimising checkout flows. The former is still not defined.

    Divya

    There has to be a set discovery process in place, but that seems much easier to define when optimising the existing process rather than the unknown.

    • At OVO they have been growing their research capability to drive innovation and reflect their focus on human-centered design:

    Understanding customer needs is integral and often provides the balance between proving hunches and finding existing problems or opportunities to solve from users.

    Annmarie

    Changing from hunches to hypotheses allows CEO vision and user insights to be validated in the same way.

    Clearleft breakfast panellists

    The spectrum of a product design team

When prompted on the balance between innovation and business as usual, Annmarie finds there are two types of designers. There are some who love a challenge and thrive in the unknown. They’re good at identifying problems and coming up with the solutions. Others are not so comfortable with uncertainty. Both are okay:

    This idea of a unicorn I don't think exists. It's okay to be a designer that just loves crafting a great experience that is usable.

    Annmarie

    At Receipt Bank, Divya has focused on the balance of people in the team that you hire: comfort with uncertainty vs. discomfort with uncertainty.

    Perhaps there is not enough recognition of the maintainers?

    There is a tendency to reinvent the wheel. We’ve come to a point in digital design where there isn't that much scope to do new things, which is great in some ways, but not in others especially with new hires who come into their career with the desire to create something new.

    Divya

    When looking at the combination of creators, architects and maintainers, Frank acknowledges that it’s “much harder to create something from what is existing, rather than something from nothing”. For example, it’s easier to create a brand new design system than create a design system from an existing product.

    Scaling product teams

    OVO has gone from five to 30 designers in a very short space of time. They went from being the disruptor to being one of the Big Six. That’s a phenomenal change within a culture that is very agile and digitally focussed. It has taken a lot of work to maintain the culture of being lean: doing discovery and exploration alongside systemised design.

    Designers keen to maintain this culture set up a community of practice. Designers connect across different products, meet regularly and share what they’ve learnt. This maintains a feeling of “we’re all in it together” rather than part of a machine.

Receipt Bank, on the other hand, has an HR function that alone is 20 people strong. It’s hard to maintain your culture as you scale from a lean 10-20 person start-up to a 100-person product org. Divya learned it is important to “be honest and upfront about it or you will create a false, forced culture”.

    You also need to be realistic about where you are on your product team journey.

    Frank mentioned that one of his first mistakes at ClearScore was thinking he had to hire separate researchers, prototypers, UI and UX designers:

    In a small start-up company, we actually needed more generalists.

    Frank

    This taught him that it’s more important to find people that are right for the current stage of growth rather than sticking to your vision of how the product team ‘should’ be structured. Now that they are a bit larger, they have been able to branch out to more dedicated skillsets.

    Annmarie noticed a spectrum of hiring needs:

    There are those great at discovery and insight, and others that are amazing at brand and applying it to product. It's okay to specialise in one or the other. I have yet to meet someone that covers both.

    Annmarie

    Closing the design and business gap

    • While the design team at OVO is there to discover new things, they also have to deliver on revenue. This can come from both ends of improving the customer experience. You can find solutions to customer problems that can drive new revenue streams that haven’t yet been discovered. Or you can increase the existing revenue stream.

    • Receipt Bank are actively moving away from revenue as a goal. They are moving towards task completion as a goal. They want goals that are more motivating than money-making. This involves some smart measurement of activity.

    • The product team at ClearScore have Objectives and Key Results for design teams. These OKRs quantify the magic and delight of design. But they also ensure that the designers are building something that is contributing to the business goals.

    Clearleft panel audience

    We will be running another breakfast panel in May in London - please register your interest here if you’d like to attend.

Getting your priorities right
https://clearleft.com/posts/getting-your-priorities-right
Thu, 27 Feb 2020 13:59:00 +0000

A handful of things in life are inevitable. On this list you’ll find taxes, death and—if you work in a project team—having more ideas than you have time to deliver.

    Whether creating products or services, working for an agency or for an in-house team, the list of potential features and ongoing fixes is always outpaced by the available time to explore, build and release them.

    In ‘Good Strategy. Bad Strategy’ Richard Rumelt says sagely: “strategy is at least as much about what an organisation does not do as it is about what it does”.

With this in mind, here’s a roundup of some simple techniques for prioritisation. These can help project teams take control and manage their backlog. After all, your time is too valuable to make decisions on what to work on next by deferring to the HiPPO (highest-paid person’s opinion) in the room or by the toss of a coin.

    Plotting value versus effort

    Value versus effort matrix

    Let’s start with a deceptively simple but incredibly robust method: the 2x2 matrix. In this example user value is plotted against production effort. However, any two competing dimensions can be used.

    It’s an ideal technique to use when you have lots of data points. Seeing the spatial relationship between them will help you identify where the quick wins and long-term value can be found.

    We often use this technique with clients to collectively decide which recommendations we will prioritise from an expert review or findings from user research and which suggestions fall into the quadrants of busy work or time sinks.

    The matrix can be created in a lo-fi way. All you need is brown paper, masking tape, and each item written on an individual Post-it® note to make the information easy to plot and re-plot. Equally, you can use a collaborative digital tool such as Miro, which is ideal if the team doing the prioritisation is geographically distributed.

    In either case, prepare the information to plot in advance so you can use the time in the workshop to map what goes where. People often fall into the trap of trying to make everything high value. To counter this, force a decision on relative priorities by insisting that the sticky notes cannot overlap one another.

    Voting for precedence

    Precedence voting chart

    Use this technique when you need to decide what to prioritise from a competing shortlist of possible options.

    In essence, you systematically compare each idea against every other one until you end up with a ranked and scored list of priorities.

    I was introduced to this prioritisation technique by John Sunart who credits Norman McNally for introducing him to it.

    Although the activity takes time, it is worth it if you need to evaluate the relative merit of ideas from a set of options. Give yourself at least 40 minutes for half a dozen deciders to work through a set of six options. Increase the time by five minutes for every additional option, and don’t go over 10 competing options without energy gels at hand.

    The process is relatively straightforward:

    1. Get to around six competing choices. You could dot vote to shortlist.
    2. Draw up a grid. Write the ideas being evaluated on both the x and y axes. Blank out the cells where ideas would compete against themselves.
    3. Call out two competing features and ask your participants for a show of hands if they think the first feature is more important, relevant or doable than the second.
    4. Count the hands and add the score to the chart (in two places) for the first and second feature under consideration.
    5. Move across the rows from left to right until all the boxes have numbers in them.
    6. Add up the numbers in each row and rank the ideas from the scores.

    This activity works best when the focus is on voting rather than discussing each option. To help keep the activity on track, circulate a brief description of the ideas for consideration to participants in advance of running the workshop.
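To make the tallying concrete, here’s a minimal sketch of the scoring step in JavaScript. Everything here is invented for illustration: the option names, the vote counts, and the `rankOptions` helper. `votes[i][j]` holds the show-of-hands count for option i when compared against option j, mirroring the “add the score in two places” step above.

```javascript
// Tally a precedence-voting grid: sum each option's row, then rank.
function rankOptions(options, votes) {
  // Row totals, skipping the blanked-out cell where i === j.
  const scores = options.map((_, i) =>
    votes[i].reduce((sum, v, j) => (i === j ? sum : sum + v), 0)
  );
  // Pair each option with its score and sort highest first.
  return options
    .map((name, i) => ({ name, score: scores[i] }))
    .sort((a, b) => b.score - a.score);
}

// Example: three shortlisted options, six deciders voting.
const options = ["Search", "Filters", "Export"];
const votes = [
  [0, 4, 5], // Search vs Filters: 4 hands; Search vs Export: 5 hands
  [2, 0, 3], // Filters vs Search: 2 hands; Filters vs Export: 3 hands
  [1, 3, 0], // Export vs Search: 1 hand;  Export vs Filters: 3 hands
];
console.log(rankOptions(options, votes));
// Search scores 9, Filters 5, Export 4
```

A spreadsheet does the same job; the point is simply that the ranking falls mechanically out of the row totals once the pairwise votes are recorded.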

    Checking there’s innovation in your mix

    The How? Now, Wow! matrix

    Another matrix, this one is ideal to sense-check the makeup of your product roadmap. It will quickly show if you are over-indexed on business as usual and if the team has time set aside for exploring potential futures.

    The How? Now, Wow! matrix is included in Gamestorming, a perennial favourite on the Clearleft bookshelf for finding practical workshop activities.

    Ideally, you are looking for a blended programme of work with items in each of the three named quadrants. It’s a useful method to periodically revisit to evaluate if your team is spending its time appropriately looking across the different horizons of the now, the next and the future.

    Placing your bets and backing the favourites

    The original Twitter doodle from Hias Wrba (@ScreaminHias)

    I serendipitously came across this a few years ago via a retweet from a friend. The doodle from Hias Wrba (@ScreaminHias) perfectly encapsulates the idea that not all decisions are (or should be) treated equally.

    Some product decisions are low cost and low risk. Getting on with building them offers more value than having another meeting to debate them. Other options with a higher level of uncertainty and/or risk can best be answered by doing research to provide more insights to give confidence in your future decision making.

    I’m a big fan of having the value and cost scales plotted in humanly understandable terms (going from a beer, a holiday, a month’s salary, a car, a house). Because of these scales, I find this a great tool to use when you want a team to start thinking about their work in terms of competing decisions with financial implications. This matrix also works a treat in workshops to move people from circular conversations to quickly deciding the next best steps for the ideas being evaluated.

    Counting on a repeatable formula

    A table showing a mean score for a number of ideas
    The (Value ÷ Effort) × Confidence calculator

    We’ve used this formula and variations of it on numerous projects. Most recently we introduced it to a client’s customer experience team. They managed multiple websites and wanted a framework that could be used to evaluate requests from numerous stakeholders. They found having a score, from a robust formula, replaced emotive reasoning with a more rational approach to their prioritisation process.

    The technique came to my attention via an article written by Jared Spool who in turn credits the method to Bruce McCarthy.

    The toughest part of using this method is to define your terms so everyone is clear on what you mean by value, effort and confidence.

    For example, value could purely be user value, or business value, or a blend of the two. You might be more focussed on increasing brand reputation than revenue, or on improving the usability of your product rather than retention rates.

    Likewise, you might articulate and estimate effort in terms of person-hours, the combined capital and operational expenditure, or the blood, sweat and tears the team will shed.

    Confidence is a subjective measure. It can come from having robust user research or having already done a technical proof of concept, or, for less risky and innovative suggestions, from best practices or trivial technical requirements.

    The important thing is that everyone has a shared understanding of the terms being used.

    Once you’ve agreed your terms it’s then time to review the scoring system. We tend to keep things simple. Value and effort have a shared three-point scale (1=low, 2=medium, 3=high). For confidence, we use a numeric value between 0 (for no confidence) and 1 (absolutely sure) with incremental steps of 0.25 giving an increased level of certainty.

    Now you’re ready to add the numbers and formula into a spreadsheet, with an additional column at the front for the features under consideration.

    We find the use of a visible, shared spreadsheet particularly useful when dealing with multiple stakeholders. It enables prioritisation to easily be done over many sessions and for the list to be seen as a living document.
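As a rough illustration of the formula in code rather than a spreadsheet, here’s a minimal JavaScript sketch. The feature names and scores are invented, and `priorityScore` is a hypothetical helper; the scales match the ones described above (value and effort from 1 to 3, confidence from 0 to 1 in steps of 0.25).

```javascript
// (Value ÷ Effort) × Confidence, as described above.
function priorityScore(value, effort, confidence) {
  return (value / effort) * confidence;
}

// Invented backlog items for illustration.
const backlog = [
  { feature: "Improved search", value: 3, effort: 2, confidence: 0.75 },
  { feature: "Dark mode",       value: 2, effort: 1, confidence: 0.5  },
  { feature: "New onboarding",  value: 3, effort: 3, confidence: 1    },
];

// Score each item, then rank highest first.
backlog
  .map((item) => ({ ...item, score: priorityScore(item.value, item.effort, item.confidence) }))
  .sort((a, b) => b.score - a.score)
  .forEach((item) => console.log(item.feature, item.score.toFixed(2)));
```

The same arithmetic drops straight into a spreadsheet column, which is where we’d normally keep it so the whole team can see and revisit the scores.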

    Making prioritisation a priority

    Prioritisation isn’t and shouldn’t be a one-off exercise. The changing needs of your customers, the business environment and new opportunities from technology mean prioritisation is best done as a regular activity.

    There isn’t a single right way to prioritise your work. Different methods help in different situations. Try out some of these methods to see which best helps you. The important thing is that your teams know what they are going to tackle next and why.

    ]]>
    Utopia https://clearleft.com/posts/utopia Tue, 18 Feb 2020 18:10:53 +0000 https://clearleft.com/posts/utopia Utopia is not a product, a plugin, or a framework. It’s a memorable/pretentious word we use to refer to a way of thinking about fluid responsive design.

    Trys and James recently unveiled their Utopia project. They’ve been tinkering away at it behind the scenes for quite a while now.

    You can check out the website and read the blog to get the details of how it accomplishes its goal:

    Elegantly scale type and space without breakpoints.

    I may well be biased, but I really like this project. I’ve been asking myself why I find it so appealing. Here are a few of the attributes of Utopia that strike a chord with me…

    It’s collaborative

    Collaboration is at the heart of Clearleft’s work. I know everyone says that, but we’ve definitely seen a direct correlation: projects with high levels of collaboration are invariably more successful than projects where people are siloed.

    The genesis for Utopia came about after Trys and James worked together on a few different projects. It’s all too easy to let design and development splinter off into their own caves, but on these projects, Trys and James were working (literally) side by side. This meant that they could easily articulate frustrations to one another, and more important, they could easily share their excitement.

    The end result of their collaboration is some very clever code. There’s an irony here. This code could be used to discourage collaboration! After all, why would designers and developers sit down together if they can just pass these numbers back and forth?

    But I don’t think that Utopia will appeal to designers and developers who work in that way. Born in the spirit of collaboration, I suspect that it will mostly benefit people who value collaboration.

    It’s intrinsic

    If you’re a control freak, you may not like Utopia. The idea is that you specify the boundaries of what you’re trying to accomplish—minimum/maximum font sizes, minimum/maximum screen sizes, and some modular scales. Then you let the code—and the browser—do all the work.

    On the one hand, this feels like surrendering control. But on the other hand, because the underlying system is so robust, it’s a way of guaranteeing quality, even in situations you haven’t accounted for.

    If someone asks you, “What size will the body copy be when the viewport is 850 pixels wide?”, your answer would have to be “I don’t know …but I do know that it will be appropriate.”

    This feels like a very declarative way of designing. It reminds me of the ethos behind Andy and Heydon’s site, Every Layout. They call it algorithmic layout design:

    Employing algorithmic layout design means doing away with @media breakpoints, “magic numbers”, and other hacks, to create context-independent layout components. Your future design systems will be more consistent, terser in code, and more malleable in the hands of your users and their devices.

    See how breakpoints are mentioned as being a very top-down approach to layout? Remember the tagline for Utopia, which aims for fluid responsive design?

    Elegantly scale type and space without breakpoints.

    Unsurprisingly, Andy really likes Utopia:

    As the co-author of Every Layout, my head nearly fell off from all of the nodding when reading this because this is the exact sort of approach that we preach: setting some rules and letting the browser do the rest.

    Heydon describes this mindset as automating intent. I really like that. I think that’s what Utopia does too.

    As Heydon said at Patterns Day:

    Be your browser’s mentor, not its micromanager.

    The idea is that you give it rules, you give it axioms or principles to work on, and you let it do the calculation. You work with the in-built algorithms of the browser and of CSS itself.

    This is all possible thanks to improvements to CSS like calc, flexbox and grid. Jen calls this approach intrinsic web design. Last year, I liveblogged her excellent talk at An Event Apart called Designing Intrinsic Layouts.

    Utopia feels like it has the same mindset as algorithmic layout design and intrinsic web design. Trys and James are building on the great work already out there, which brings me to the final property of Utopia that appeals to me…

    It’s iterative

    There isn’t actually much that’s new in Utopia. It’s a combination of existing techniques. I like that. As I said recently:

    I’m a great believer in the HTML design principle, Evolution Not Revolution:

    It is better to evolve an existing design rather than throwing it away.

    First of all, Utopia uses the idea of modular scales in typography. Tim Brown has been championing this idea for years.

    Then there’s the idea of typography being fluid and responsive—just like Jason Pamental has been speaking and writing about.

    On the code side, Utopia wouldn’t be possible without the work of Mike Riethmuller and his breakthroughs on responsive and fluid typography, which led to Tim’s work on CSS locks.

    Utopia takes these building blocks and combines them. So if you’re wondering if it would be a good tool for one of your projects, you can take an equally iterative approach by asking some questions…

    Are you using fluid type?

    Do your font-sizes increase in proportion to the width of the viewport? I don’t mean in sudden jumps with @media breakpoints—I mean some kind of relationship between font size and the vw (viewport width) unit. If so, you’re probably using some kind of mechanism to cap the minimum and maximum font sizes—CSS locks.

    I’m using that technique on Resilient Web Design. But I’m not changing the relative difference between different sized elements—body copy, headings, etc.—as the screen size changes.

    Are you using modular scales?

    Does your type system have some kind of ratio that describes the increase in type sizes? You probably have more than one ratio (unlike Resilient Web Design). The ratio for small screens should probably be smaller than the ratio for big screens. But rather than jump from one ratio to another at an arbitrary breakpoint, Utopia allows the ratio to be fluid.

    So it’s not just that font sizes are increasing as the screen gets larger; the comparative difference is also subtly changing. That means there’s never a sudden jump in font size at any time.

    Are you using custom properties?

    A technical detail this, but the magic of Utopia relies on two powerful CSS features: calc() and custom properties. These two workhorses are used by Utopia to generate some CSS that you can stick at the start of your stylesheet. If you ever need to make changes, all the parameters are defined at the top of the code block. Tweak those numbers and watch everything cascade.

    You’ll see that there’s one—and only one—media query in there. This is quite clever. Usually with CSS locks, you’d need to have a media query for every different font size in order to cap its growth at the maximum screen size. With Utopia, the maximum screen size—100vw—is abstracted into a variable (a custom property). The media query then changes its value to be the upper end of your CSS lock. So it doesn’t matter how many different font sizes you’re setting: because they all use that custom property, one single media query takes care of capping the growth of every font size declaration.
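As a rough sketch of that pattern (this is not Utopia’s actual generated code, and the property names here are invented):

```css
:root {
  /* While the viewport is below the lock point, every fluid
     calculation uses the real viewport width. */
  --fluid-screen: 100vw;
}

@media screen and (min-width: 75em) {
  :root {
    /* One media query swaps in the maximum screen size, capping
       every declaration that references --fluid-screen. */
    --fluid-screen: 75em;
  }
}
```

Because the cap lives in the custom property rather than in each declaration, adding a new fluid font size never adds a new media query.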

    If you’re already using CSS locks, modular scales, and custom properties, Utopia is almost certainly going to be a good fit for you.

    If you’re not yet using those techniques, but you’d like to, I highly recommend using Utopia on your next project.

    This was originally published on my own site.

    ]]>
    Fluid custom properties https://clearleft.com/posts/fluid-custom-properties Fri, 14 Feb 2020 13:55:00 +0000 https://clearleft.com/posts/fluid-custom-properties A core theme of the seminal introduction to responsive web design was an acceptance of the ‘ebb and flow’ found on our inherently fluid web.

    But despite a broadly accepted view that device-based breakpoints are flawed, and a myriad of solutions proposed to solve the problem, it’s fair to say we still mostly think in terms of pre-defined breakpoints.

    Writing truly isolated components with tailored breakpoints, where we re-evaluate our original CSS decisions as the viewport reaches a size that breaks the component, is great in theory but challenging in practice. Ubiquitous language in a team, and a natural desire to consolidate magic numbers, tend to lead either to a set of easy-to-remember numbers (30em, 40em, 50em) or to more generic ‘small, medium, large’ mixin breakpoints. Both approaches create jarring ‘breakpoint jumps’ rather than a natural easing as the screen changes.

    When there is shared styling between components, our instinct is to group the components in some respect. This tends to fall into one of three camps:

    • Writing large selectors, or relying on Sass @extend, despite its flaws and horizontal coupling.
    • Peppering our HTML with utility classes like .mt18 and .pb24.
    • Duplicating the common styles and accepting the performance hit.

    These all work at a single width, but begin to fall apart as more screen sizes get involved. Here are some of the most common next steps to patch the issue:

    • Adding @media breakpoints to the utility classes and renaming them to something more generic.
    • Creating multiple utility classes with breakpoint-specific suffixes, adding all of them to an element.
    • Continuing the duplication with identical @media breakpoints in each component.

    None of these are ideal – not only are we duplicating code or coupling horizontally, we’re still thinking about device-specific breakpoints. It’s a problem, and it affects all aspects of our work – spacing, rhythm, layout and typography.

    We need to think fluidly.

    A proposal

    Fluid custom properties combine CSS Locks, CSS Custom Properties, and the concept of hills (yes, hills). They allow us to write fluid CSS without writing any breakpoints.

    A fluid custom property is a font-size representation of a gradient or slope, set between two screen sizes, and stored as a global CSS custom property.

    With a predefined set of fluid custom properties at the heart of a project, we can hook onto them to create natural, breakpoint-less spacing and typography that gradually interpolates across screen sizes.

    Relying on these global rules brings consistency across a project, and helps to ensure every component looks ‘just right’ on all screens. There are no nasty ‘breakpoint jumps’, just buttery smooth interpolation.

    They significantly reduce code duplication and keep code succinct and readable. Rather than coupling horizontally, shared styles are linked vertically to these global, project-specific constants.

    All the complicated maths is abstracted away, leaving you to work with natural numbers; browser text-zoom preferences are respected; and everything works naturally with ems.

    They’re also entirely opt-in; the brilliance of custom properties is that they do nothing to your webpage until you reference them. This makes it a great way to retrospectively add fluid sizing to an existing site.

    Let’s dig into the three concepts in a little more detail:

    CSS Locks and interpolation

    Linear interpolation is a mathematical technique used to calculate the value at a position between two points. In the CSS and animation world, interpolating or ‘tweening’ is the process of smoothly changing a value between two points, in our case across two screen sizes. We can achieve this effect with CSS locks, a technique coined by Tim Brown.

    Below is a CSS lock that interpolates a font-size from 1em to 2em between the two screen sizes of 20em (320px) and 50em. The locking is handled by the media query directly below it; without it, the growth would continue at the same rate forever.

    
    p {
      font-size: calc(1em + (2 - 1) * ((100vw - 20em)/(50 - 20)));
    }
    
    @media screen and (min-width: 50em) {
      p {
        font-size: 2em;
      }
    }
    

    Writing a lock by hand is pretty verbose, so developers regularly turn to Sass mixins. This has the huge advantage of making your life as a developer easier, but the distinct disadvantage, like all pre-processor features, of distancing you from the final CSS output. Once you’ve been bitten by the fluid bug and seen its virtues, it’s very easy to end up with several hundred CSS locks, and thus several hundred media queries. That’s a lot of code.

    CSS custom properties

    There are plenty of wonderful guides to CSS custom properties, so I shan’t go into too much detail. Here’s a CSS custom property definition and usage example.

    
    :root {
      --brand: #FF4757;
    }
    
    a {
      color: var(--brand);
    }
    

    Not only are they a great way to extract common values out to a central location, they can be overridden using the cascade, and used in calc() functions. Using custom properties for typography and vertical rhythm has been well documented, but they can be used for so much more. We can combine CSS custom properties with locks to great effect…

    Refactoring the lock

    Let’s rewrite the CSS lock we used earlier, harnessing the descriptive power of custom properties. We start by extracting the configurable parts of the lock into a :root definition. The vast majority of CSS locks run from 20em/320px, so we’ll keep that in the lock for brevity. Then we can substitute the values within the declaration and media query, multiplying the appropriate values by 1em:

    
    :root {
      --max-value: 2;
      --min-value: 1;
      --max-screen: 50;
    }
    
    p {
      font-size: calc(
        (var(--min-value) * 1em) + (var(--max-value) - var(--min-value)) *
          ((100vw - 20em) / (var(--max-screen) - 20))
      );
    }
    
    @media screen and (min-width: 50em) {
      p {
        font-size: calc(var(--max-value) * 1em);
      }
    }
    

    Sadly, we can’t use custom properties in the media query definition, so we have to repeat the 50em. But ignoring that, we’ve extracted all the other ‘bits’ of the calculation into a single source of truth. The CSS lock now looks even more unwieldy than it did before but – crucially – the bits we actually need to access are much easier to read.

    Even more refactoring

    With traditional CSS locks, you need a media query for every lock, but as fluid custom properties rely on cascading custom properties, we can solve this really elegantly in one line.

    CSS locks use the 100vw unit to represent the varying screen size, but this doesn’t have to be the case. We can extract that value into its own custom property: --f-screen.

    When we’ve reached the ‘lock point’, rather than update all the CSS locks we have on the page, we can update the value of --f-screen to be the width of our --max-screen. This one line change holds every lock in its maximum state.

    
    :root {
      --max-value: 2;
      --min-value: 1;
      --max-screen: 75;
    
      --f-screen: 100vw;
      --f-bp: (var(--f-screen) - 20em)/(var(--max-screen) - 20);
    }
    
    p {
      font-size: calc((var(--min-value) * 1em) + (var(--max-value) - var(--min-value)) * var(--f-bp));
    }
    
    @media screen and (min-width: 75em) {
      :root {
        --f-screen: calc(var(--max-screen) * 1em);
      }
    }
    

    This is a rather neat refactor, but it’s still only working at a selector-level - we can still step it up a notch or two. But before we can talk about that, we need to talk about hills.

    Hills, grades & slopes

    When travelling by road, we refer to the steepness of a hill by its gradient or grade, often given as a ratio (2:1) or a percentage (30%). The higher the percentage, the steeper the incline, and the more likely you’ll need to get off your bike and walk up the hill.

    A CSS lock can also be visualised as a hill. The screen sizes define where the hill starts and ends (the foot and the summit), and the two values (say, 1em and 2em) dictate the gradient. When plotted onto a graph, it looks a little like this:

    A graph demonstrating a CSS lock
    A CSS lock, visualised

    In this example, we’re interpolating between two specific values: 1em and 2em, a relationship of 2:1. This is great, but a bit limiting. What if we wanted to interpolate between 2em and 4em? Fluid custom properties encapsulate that relationship into a fluid multiplier that lets you re-use that angle in various ways across a project.

    The implementation

    Below is the CSS for four fluid custom properties that run between 320px and 1200px.

    
    :root {
      --f-summit: 1200;
    
      --f-screen: 100vw;
      --f-foot: 1 / 16;
      --f-hill: (var(--f-screen) - 20rem) / (var(--f-summit) / 16 - 20) + var(--f-foot) * 1rem;
    
      --f-1-25: ((1.25 / 16 - var(--f-foot)) * var(--f-hill));
      --f-1-5: ((1.5 / 16 - var(--f-foot)) * var(--f-hill));
      --f-2: ((2 / 16 - var(--f-foot)) * var(--f-hill));
      --f-3: ((3 / 16 - var(--f-foot)) * var(--f-hill));
    }
    
    @media screen and (min-width: 1200px) {
      :root {
        --f-screen: calc(var(--f-summit) * 1px);
      }
    }
    

    Let’s break it down section by section.

    
    --f-summit: 1200;
    

    This property denotes the largest screen size in px. This gets converted to rems internally to ensure text zoom preferences are respected.

    
    --f-screen: 100vw;
    --f-foot: 1 / 16;
    --f-hill: (var(--f-screen) - 20rem) / (var(--f-summit) / 16 - 20) + var(--f-foot) * 1rem;
    

    --f-screen holds the width of the screen (100vw) until we reach the summit. Extracting this sets us up to succinctly lock all the properties in one go. All fluid custom properties are ratios based off 1, and --f-foot represents that.

    --f-hill is the media query part of the lock, running from 320px to our --f-summit. By extracting this out from the back end of the CSS lock, and into its own CSS custom property, we can cap all the custom properties in one go - more on that later.

    It’s worth noting I’ve intentionally baked in the assumption of that start point. Extracting that out to another custom property is perfectly valid if it fits your use-case better.

    
    --f-1-25: ((1.25 / 16 - var(--f-foot)) * var(--f-hill));
    --f-1-5: ((1.5 / 16 - var(--f-foot)) * var(--f-hill));
    --f-2: ((2 / 16 - var(--f-foot)) * var(--f-hill));
    --f-3: ((3 / 16 - var(--f-foot)) * var(--f-hill));
    

    These are the fluid custom properties themselves. --f-1-25 represents a gradient of 1.25:1. The names are down to personal preference; I like the clarity of exposing the gradient in the variable name, but you may prefer more generic names like --f-shallow or --f-steep. Equally, you may find a name like --f-gutter more appropriate.

    Side-note: custom properties aren’t evaluated until they are used, so there’s no need to wrap each one in a calc().

    
    @media screen and (min-width: 1200px) {
      :root {
        --f-screen: calc(var(--f-summit) * 1px);
      }
    }
    

    Finally, we have the aforementioned screen width lock to prevent the values from growing to silly levels.

    Using fluid custom properties

    The actual values stored in fluid custom properties are tiny, so they need to be multiplied up to useful numbers. The multiplier you choose represents the pixel size of the value at 320px. You can calculate the final size by multiplying it against the gradient.

    Let’s look at a specific example, setting the font-size on the document body.

    
    body {
      font-size: calc(var(--f-1-25) * 16);
    }
    

    This declaration will interpolate between 16px and 20px (16 * 1.25 = 20), without a breakpoint jump. All screens will get an appropriate font-size somewhere in between those two values.

    Now we’ve written that, we can use ems in the normal way to get relative fluid sizing off the body.

    
    h3 {
      font-size: 1.5em;
    }
    

    This will size h3 tags to be 24px on small screens, gradually changing up to 30px on larger screens.

    Working with steeper gradients

    Here’s an example for a hero banner. These are normally pretty painful to write, involving multiple padding breakpoint jumps as the screen expands. But when we use a fluid custom property at a steeper gradient, we can achieve it in one line:

    
    .hero {
      padding: calc(var(--f-5) * 40) 0;
    }
    

    This gradient of 5:1 interpolates the vertical padding between 40px and 200px as the screen gets larger.

    The flexibility of different gradients gives us a multitude of options to build with. If you’re after tight spacing on mobile and ample spacing on larger screens, choose a steeper gradient multiplied by a smaller number. If you want similar spacing on both, increasing ever so slightly, take a shallower gradient and multiply it by a larger number. You can even use negative gradients to make reductions on larger screens!

    Fluid custom properties can be applied to margins, border-widths, padding, font-size, grid-gaps, transforms and all manner of other properties.

    Common patterns can be consolidated in other CSS custom properties to reduce the number of calc() function calls. There’s also no reason why they can’t be applied to design tokens or utility classes. These common calculations can then be surfaced in a design system to ensure maximum usage and understanding on a project.

    This post was originally published on Utopia.fyi.

    ]]>
    Leading Design 2020 https://clearleft.com/posts/leading-design-2020 Fri, 07 Feb 2020 10:53:00 +0000 https://clearleft.com/posts/leading-design-2020 We’ve got exciting things afoot in the coming year. Through our Leading Design events, we equip design leaders, whether they’re newly in post or have that hallowed seat at the table, with everything they need to take both their careers and their leadership to the next level.

    Our events team had our own retreat to reflect, learn, and spend time understanding the needs of design leaders. That’s why we’re excited to announce our Leading Design events for 2020.

    Leading Design logo

    Leading Design Conferences

    Our conferences bring together experts who lead design teams, oversee design direction and know what it takes to successfully create design culture in organisations. You’ll hear their practical tips, take part in hands-on workshops, and get a chance to meet peers from all over the world.

    After four years of successful sell-out events in London and, most recently, New York, we decided to head to the West Coast: San Francisco. We’ve assembled a wonderful group of design leaders who will cover a wide range of topics to help you make the most of your design leadership journey. Even though we have sold out, you can add your name to the waitlist and, of course, keep an eye out for the talk videos after the conference.

    European folk, do not fear: we will be holding Leading Design London on 4–6 November 2020. We’re launching tickets on 9th March, so if you’re interested in early-bird tickets, let us know here and we’ll be sure to drop you an email when the tickets go live.

    Two people with glasses of wine at the Leading Design Meetup in New York

    Leading Design Community meet-ups

    Leading Design isn’t all talks and workshops - we’ve created plenty of opportunities for you to meet like-minded design leaders, swap stories, and build relationships we hope will last the rest of your careers. Our community meet-ups are a place to meet other design leaders in a safe, relaxed, supportive space – to share, learn, and have fun. This year we are hosting a number of Leading Design Community meet-ups across the globe: London, San Francisco, New York and more!

    Find out more about our meet-ups and all the dates here.

    P.S. If you’re going to be at SXSW, Andy Budd, LD Conference curator and founder of the LD Slack community, will be arranging a casual drinks meetup on Thursday 19th March. If you’re around and want to meet other design leaders, register your interest here.

    Juvet Landscape Hotel

    Leading Design Retreats

    As a design leader, you’re responsible for a team, the direction they take, how they carry out their work, how they innovate, and how they make progress in their individual careers. That’s a lot of responsibility to hold and sometimes when looking after others we neglect ourselves. Our retreats are a perfect opportunity to re-focus on your own self-development and prioritise your own journey.

    Find out more about our Retreat in Norway in September 2020.

    Stay connected

    Follow our Leading Design Medium account where we have recently interviewed Margaret Lee (Director, UX Community & Culture at Google) reflecting on her own personal journey as a leader and how she came to reconcile her own true self against conventional expectations.

    For all the latest Leading Design updates and announcements you can either:

    Well, that’s all for now folks, we can’t wait to see you all in 2020!

    ]]>
    Design systems roundup https://clearleft.com/posts/design-systems-roundup Wed, 05 Feb 2020 13:16:00 +0000 https://clearleft.com/posts/design-systems-roundup A provocative post about design systems prompted some excellent responses.

    When I started writing a post about architects, gardeners, and design systems, it was going to be a quick follow-up to my post about web standards, dictionaries, and design systems. I had spotted an interesting metaphor in one of Frank’s posts, and I thought it was worth jotting it down.

    But after making that connection, I kept writing. I wanted to point out the fetishism we have for creation over curation; building over maintenance.

    Then the post took a bit of a dark turn. I wrote about how the most commonly cited reasons for creating a design system—efficiency and consistency—are the same processes that have led to automation and dehumanisation in the past.

    That’s where I left things. Others have picked up the baton.

    Dave wrote a post called The Web is Industrialized and I helped industrialize it. What I said resonated with him:

    This kills me, but it’s true. We’ve industrialized design and are relegated to squeezing efficiencies out of it through our design systems. All CSS changes must now have a business value and user story ticket attached to it. We operate more like Taylor and his stopwatch and Gantt and his charts, maximizing effort and impact rather than focusing on the human aspects of product development.

    But he also points out the many benefits of systemetising:

    At the same time, I have seen first hand how design systems can yield improvements in accessibility, performance, and shared knowledge across a willing team. I’ve seen them illuminate problems in design and code. I’ve seen them speed up design and development allowing teams to build, share, and validate prototypes or A/B tests before undergoing costly guesswork in production. There’s value in these tools, these processes.

    Emphasis mine. I think that’s a key phrase: “a willing team.”

    Ethan tackles this in his post The design systems we swim in:

    A design system that optimizes for consistency relies on compliance: specifically, the people using the system have to comply with the system’s rules, in order to deliver on that promised consistency. And this is why that, as a way of doing something, a design system can be pretty dehumanizing.

    But a design system need not be a constraining straitjacket—a means of enforcing consistency by keeping creators from colouring outside the lines. Used well, a design system can be a tool to give creators more freedom:

    Does the system you work with allow you to control the process of your work, to make situational decisions? Or is it simply a set of rules you have to follow?

    This is key. A design system is the product of an organisation’s culture. That’s something that Brad digs into in his post, Design Systems, Agile, and Industrialization:

    I definitely share Jeremy’s concern, but also think it’s important to stress that this isn’t an intrinsic issue with design systems, but rather the organizational culture that exists or gets built up around the design system. There’s a big difference between having smart, reusable patterns at your disposal and creating a dictatorial culture designed to enforce conformity and swat down anyone coloring outside the lines.

    Brad makes a very apt comparison with Agile:

    Not Agile the idea, but the actual Agile reality so many have to suffer through.

    Agile can be a liberating empowering process, when done well. But all too often it’s a quagmire of requirements, burn rates, and story points. We need to make sure that design systems don’t suffer the same fate.

    Jeremy’s thoughts on industrialization definitely struck a nerve. Sure, design systems have the ability to dehumanize and that’s something to actively watch out for. But I’d also say to pay close attention to the processes and organizational culture we take part in and contribute to.

    Matthew Ström weighed in with a beautifully-written piece called Breaking looms. He provides historical context to the question of automation by relaying the story of the Luddite uprising. Automation may indeed be inevitable, according to his post, but he also provides advice on how to approach design systems today:

    We can create ethical systems based in detailed user research. We can insist on environmental impact statements, diversity and inclusion initiatives, and human rights reports. We can write design principles, document dark patterns, and educate our colleagues about accessibility.

    Finally, the ouroboros was complete when Frank wrote down his thoughts in a post called Who cares?. For him, the issue of maintenance and care is crucial:

    Care applies to the built environment, and especially to digital technology, as social media becomes the weather and the tools we create determine the expectations of work to be done and the economic value of the people who use those tools. A well-made design system created for the right reasons is reparative. One created for the wrong reasons becomes a weapon for displacement. Tools are always beholden to values. This is well-trodden territory.

    Well-trodden territory indeed. Back in 2015, Travis Gertz wrote about Design Machines:

    Designing better systems and treating our content with respect are two wonderful ideals to strive for, but they can’t happen without institutional change. If we want to design with more expression and variation, we need to change how we work together, build design teams, and forge our tools.

    Also on the topic of automation, in 2018 Cameron wrote about Design systems and technological disruption:

    Design systems are certainly a new way of thinking about product development, and introduce a different set of tools to the design process, but design systems are not going to lessen the need for designers. They will instead increase the number of products that can be created, and hence increase the demand for designers.

    And in 2019, Kaelig wrote:

    In order to be fulfilled at work, Marx wrote that workers need “to see themselves in the objects they have created”.

    When “improving productivity”, design systems tooling must be mindful of not turning their users’ craft into commodities, alienating them, like cogs in a machine.

    All of this is reminding me of Kranzberg’s first law:

    Technology is neither good nor bad; nor is it neutral.

    I worry that sometimes the messaging around design systems paints them as an inherently positive thing. But design systems won’t fix your problems:

    Just stay away from folks who try to convince you that having a design system alone will solve something.

    It won’t.

    It’s just the beginning.

    At the same time, a design system need not be the gateway drug to some kind of post-singularity future where our jobs have been automated away.

    As always, it depends.

    Remember what Frank said:

    A well-made design system created for the right reasons is reparative. One created for the wrong reasons becomes a weapon for displacement.

    The reasons for creating a design system matter. Those reasons will probably reflect the values of the company creating the system. At the level of reasons and values, we’ve gone beyond the bounds of the hyperobject of design systems. We’re dealing in the area of design ops—the whys of systemising design.

    This is why I’m so wary of selling the benefits of design systems in terms of consistency and efficiency. Those are obviously tempting money-saving benefits, but followed to their conclusion, they lead down the dark path of enforced compliance and eventually, automation.

    But if the reason you create a design system is to empower people to be more creative, then say that loud and proud! I know that creativity, autonomy and empowerment is a tougher package to sell than consistency and efficiency, but I think it’s a battle worth fighting.

    Design systems are neither good nor bad (nor are they neutral).

    Addendum: I’d just like to say how invigorating it’s been to read the responses from Dave, Ethan, Brad, Matthew, and Frank …all of them writing on their own websites. Rumours of the demise of blogging may have been greatly exaggerated.

    This was originally published on my own site.

    ]]>
    Tiny Lesson: How to run a premortem workshop https://clearleft.com/posts/tiny-lesson-how-to-run-a-premortem-workshop Mon, 03 Feb 2020 15:54:00 +0000 https://clearleft.com/posts/tiny-lesson-how-to-run-a-premortem-workshop A premortem is a great way to help mitigate bad outcomes on a project. It helps project teams identify risks and put actions in place to avoid them from the start.

    In this Tiny Lesson we share how to run one for you and your team.

    Watch the video here, or read the transcript below.

    How to run a premortem workshop

    Allow half an hour. You’ll need a few pens, some post-its of different colours, some wall space, and of course, some willing participants.

    First, ask the team to imagine it’s 12 months’ time (you can adjust the duration to match your project or product roadmap) and that everything that could’ve gone wrong has gone wrong. Give them about five minutes, and ask them to map all of the things they can think of onto the wall, explaining as they go along. Assign a note-taker to capture any salient points.

    Once they’ve done that, start mapping them into themes. If you can, group the items and give each theme a heading, such as people or processes or systems or ways of working. This will really help later on.

    Next, work through each of the post-its, asking the team to think of ways to mitigate these bad outcomes and to stick all the ideas they have over the top, again explaining each as they go along.

    Now you can take this output and create your action plan. If you can, it helps to assign timings or an owner to each action, and then share the plan with the group.

    This way, everyone’s aware of anything bad that could go wrong, and what you can do to stop it, and you’ll have a much better chance of success on your project.

    We used this technique recently at the start of our project with Virgin Holidays.

    ]]>
    Architects, gardeners, and design systems https://clearleft.com/posts/architects-gardeners-and-design-systems Wed, 29 Jan 2020 16:10:00 +0000 https://clearleft.com/posts/architects-gardeners-and-design-systems Design systems promise efficiency and consistency. But what’s the end game?

    I compared design systems to dictionaries. My point was that design systems—like language—can be approached in a prescriptivist or descriptivist manner. And I favour descriptivism.

    A prescriptive approach might give you a beautiful design system, but if it doesn’t reflect the actual product, it’s fiction. A descriptive approach might give a design system with imperfections and annoying flaws, but at least it will be accurate.

    I think it’s more important for a design system to be accurate than beautiful.

    Meanwhile, over on Frank’s website, he’s been documenting the process of its (re)design. He made an interesting comparison in his post Redesign: Gardening vs. Architecture. He talks about two styles of writing:

    In interviews, Martin has compared himself to a gardener—forgoing detailed outlines and overly planned plot points to favor ideas and opportunities that spring up in the writing process. You see what grows as you write, then tend to it, nurture it. Each tendrilly digression may turn into the next big branch of your story. This feels right: good things grow, and an important quality of growth is that the significant moments are often unanticipated.

    On the other side of writing is who I’ll call “the architect”—one who writes detailed outlines for plots and believes in the necessity of overt structure. It puts stock in planning and foresight. Architectural writing favors divisions and subdivisions, then subdivisions of the subdivisions. It depends on people’s ability to move forward by breaking big things down into smaller things with increasing detail.

    It’s not just me, right? It all sounds very design systemsy, doesn’t it?

    This is a false dichotomy, of course, but everyone favors one mode of working over the other. It’s a matter of personality, from what I can tell.

    Replace “personality” with “company culture” and I think you’ve got an interesting analysis of the two different approaches to design systems. Descriptivist gardening and prescriptivist architecture.

    Frank also says something that I think resonates with the evergreen debate about whether design systems stifle creativity:

    It can be hard to stay interested if it feels like you’re painting by numbers, even if they are your own numbers.

    I think Frank’s comparison—gardeners and architects—also speaks to something bigger than design systems…

    I gave a talk last year called Building. You can watch it, listen to it, or read the transcript if you like. The talk is about language (sort of). There’s nothing about prescriptivism or descriptivism in there, but there’s lots about metaphors. I dive into the metaphors we use to describe our work and ourselves: builders, engineers, and architects.

    It’s rare to find job titles like software gardener, or information librarian (even though they would be just as valid as other terms we’ve made up like software engineer or information architect). Outside of the context of open source projects, we don’t talk much about maintenance. We’re much more likely to talk about making.

    Back in 2015, Debbie Chachra wrote a piece in the Atlantic Monthly called Why I Am Not a Maker:

    When tech culture only celebrates creation, it risks ignoring those who teach, criticize, and take care of others.

    Anyone who’s spent any time working on design systems can tell you there’s no shortage of enthusiasm for architecture and making—“let’s build a library of components!”

    There’s less enthusiasm for gardening, care, communication and maintenance. But that’s where the really important work happens.

    In her article, Debbie cites Ethan’s touchstone:

    In her book The Real World of Technology, the metallurgist Ursula Franklin contrasts prescriptive technologies, where many individuals produce components of the whole (think about Adam Smith’s pin factory), with holistic technologies, where the creator controls and understands the process from start to finish.

    (Emphasis mine.)

    In that light, design systems take their place in a long history of dehumanising approaches to manufacturing like Taylorism. The priorities of “scientific management” are the same as those of design systems—increasing efficiency and enforcing consistency.

    Humans aren’t always great at efficiency and consistency, but machines are. Automation increases efficiency and consistency, sacrificing messy humanity along the way:

    Machine with the strength of a hundred men
    Can’t feed and clothe my children.

    Historically, we’ve seen automation in terms of physical labour—dock workers, factory workers, truck drivers. As far as I know, none of those workers participated in the creation of their mechanical successors. But when it comes to our work on the web, we’re positively eager to create the systems to make us redundant.

    The usual response to this is the one given to other examples of automation: you’ll be free to spend your time in a more meaningful way. With a design system in place, you’ll be freed from the drudgery of manual labour. Instead, you can spend your time doing more important work …like maintaining the design system.

    You’ve heard the joke about the factory of the future, right? The factory of the future will have just two living things in it: one worker and one dog. The worker is there to feed the dog. The dog is there to bite the worker if he touches anything.

    Good joke.

    Everybody laugh.

    Roll on snare drum.

    Curtains.

    This was originally published on my own site.

    ]]>
    Web standards, dictionaries, and design systems https://clearleft.com/posts/web-standards-dictionaries-and-design-systems Thu, 23 Jan 2020 19:50:00 +0000 https://clearleft.com/posts/web-standards-dictionaries-and-design-systems I’ve noticed a trend. It all starts with web standards.

    Years ago, the world of web standards was split. Two groups—the W3C and the WHATWG—were working on the next iteration of HTML. They had different ideas about the nature of standardisation.

    Broadly speaking, the W3C followed a specification-first approach. Figure out what should be implemented first and foremost. From this perspective, specs can be seen as blueprints for browsers to work from.

    The WHATWG, by contrast, were implementation led. The way they saw it, there was no point specifying something if browsers weren’t going to implement it. Instead, specs are there to document existing behaviour in browsers.

    I’m over-generalising somewhat in my descriptions there, but the point is that there was an ideological difference of opinion around what standards bodies should do.

    This always reminded me of a similar ideological conflict when it comes to language usage.

    Language prescriptivists attempt to define rules about what’s right or wrong in a language. Rules like “never end a sentence with a preposition.” Prescriptivists are generally fighting a losing battle and spend most of their time bemoaning the decline of their language because people aren’t following the rules.

    Language descriptivists work the exact opposite way. They see their job as documenting existing language usage instead of defining it. Lexicographers—like Merriam-Webster or the Oxford English Dictionary—receive complaints from angry prescriptivists when dictionaries document usage like “literally” meaning “figuratively”.

    Dictionaries are descriptive, not prescriptive.

    I’ve seen the prescriptive/descriptive divide somewhere else too. I’ve seen it in the world of design systems.

    Jordan Moore talks about intentional and emergent design systems:

    There appears to be two competing approaches in designing design systems.

    An intentional design system. The flavour and framework may vary, but the approach generally consists of: design system first → design/build solutions.

    An emergent design system. This approach is much closer to the user needs end of the scale by beginning with creative solutions before deriving patterns and systems (i.e the system emerges from real, coded scenarios).

    An intentional design system is prescriptive. An emergent design system is descriptive.

    Simplified flow of an intentional design system (building blocks first) by Jordan Moore

    I think we can learn from the worlds of web standards and dictionaries here. A prescriptive approach might give you a beautiful design system, but if it doesn’t reflect the actual product, it’s fiction. A descriptive approach might give a design system with imperfections and annoying flaws, but at least it will be accurate.

    I think it’s more important for a design system to be accurate than beautiful.

    As Matthew Ström says, you should start with the design system you already have:

    Instead of drawing a whole new set of components, start with the components you already have in production. Document them meticulously. Create a single source of truth for design, warts and all.

    This was originally published on my own site.

    ]]>
    Exploring DesignOps https://clearleft.com/posts/exploring-designops Thu, 23 Jan 2020 10:32:00 +0000 https://clearleft.com/posts/exploring-designops Late last year we hosted our second Design Leadership Breakfast, with two panels debating in front of 70 design leads.

    While the first panel dived into the role of research and the rise of ResearchOps, the second panel shifted into DesignOps, with candid input from Samantha Fanning (Head of Digital, University College London), Dan Saffer (Product Design Lead, Twitter), Daniel Souza (Design Operations, Babylon Health) and Andy Budd (Clearleft).

    Four panellists and Jeremy compering in front of a blue screen

    What is DesignOps?

    We first started talking about DesignOps back in 2017, and since then the function has evolved and matured. At Twitter, Dan sees it as ‘the religion and rituals around design practice which support design’. Similarly Andy Budd often refers to Dave Malouf’s excellent analogy:

    DesignOps are the grease, rails, and engine that make design’s processes, methods, and craft as valuable as possible.

    Dave Malouf

    For more traditional companies like UCL, centralising some of the ‘less fun’ processes has cultivated a belief that designers are freed up to do the ‘more interesting stuff’. At rapidly-scaling companies such as Babylon Health, DesignOps has transitioned from a buzzword into the very core of what they do. As Daniel put it, ‘…organisations at scale have many different people working in different ways, across different design teams. DesignOps at Babylon:

    • enable visibility
    • enable workflows
    • support design, content, research and branding teams to design at scale with clarity.’

    It was interesting how DesignOps is now its own stream of onboarding at Babylon, with DesignOps tailored to each discipline alongside the company and team onboarding. This has ensured new team members start working with a clear understanding of standards, their design system, and cultural processes like how to join and participate in design critiques.

    According to Andy ‘…as teams scale, things slow down. A lot of things that design leaders hate doing emerged. The need to create a role to own things like performance reviews, onboarding and recruiting became clear. Without a DesignOps function, 80% of their work was becoming recruitment.’


    Globally distributed teams result in even more complexity. With people in various offices, there is a high level of noise. The challenge is trying to maintain the cohesion. At Babylon, DesignOps delivered more mental space and freed more available time to allow for deeper innovation.

    How embedded is DesignOps?

    Embedding DesignOps doesn’t happen overnight. UCL introduced a DesignOps mindset via training, by paying for everyone to have whatever research/UX/design training they need. For UCL it’s not about forcing people into that role but to increase understanding and respect of the processes across the business.

    At digital-first companies like Twitter, where research, design and development sit together in teams, the product team lead/manager remains responsible for collaboration needs between teams, whereas the DesignOps function is focused on individual and organisational needs (such as design systems). For Babylon, their design system has been a practical way of delivering the value of having DesignOps in the first place. This has not only helped them make sure the right resources and energy are in place but has also played a role in building a community around it.

    Are we really there yet?

    One thing that has become clear through our work and events at Clearleft: there are constant improvements to be made. It’s not about the destination, it’s about the journey.

    Looking at our engineering partners, we can learn a lot from the VPs and CTOs in tech who’ve implemented successful DevOps functions. We can strive to apply their learnings to the growing DesignOps movement within the teams of our clients.

    Ironically, the design industry itself has created a few blockers to DesignOps. Twenty years ago design was craft-based, operating at a human-scale and through emotion. We’ve since industrialised it as a way to scale design for the sake of conversion and KPIs. The processes and outcomes allow us to take small 1 or 2-pizza teams and turn them into high-functioning design outfits without sacrificing quality or consistency.

    However, are we de-skilling people through the desire to scale? Are we creating a sense of learned helplessness? Are we forcing a senior designer — working in an optimised and systemised environment due to DesignOps — to no longer worry about divergent problem-solving? Do they now focus only on their ability to ship? Is DesignOps effectively neutralising a designer’s skillset?

    A message often shared at our Leading Design events and retreats: design leaders have felt like they’ve hit a glass ceiling. All the power they thought design once had has now been lost to the VP of Marketing, or Product, where design increasingly becomes a delivery function. At Twitter, Dan found that Product has a lot of power, and Product and design often wrestle around the overall product vision (traditionally called design strategy). Who controls what, who defines the problem space, and who plots the direction of travel is often a source of contention.

    The benefit of DesignOps

    DesignOps allows us to take all of the administrative, operations-based work designers find themselves doing and systematise it. This frees designers to focus more on the actual design part of what they do. It’s about getting as much value out of the design team and designers as possible while minimising waste and maximising scale, consistency and efficiency.

    If you’d like to discuss how we could help you scale design, get in touch.

    You can sign up to find out about our next Design Leadership Breakfast panel here.

    ]]>
    11 things I know to be true https://clearleft.com/posts/11-things-i-know-to-be-true Thu, 02 Jan 2020 16:49:00 +0000 https://clearleft.com/posts/11-things-i-know-to-be-true Here are some things I’ve come to believe over the years…

    1. You are not your user

    Stop designing for yourself. Understanding what your users need and communicating that around the organisation is super important.

    However research doesn’t tell you what to do, so don’t rely on it too heavily. Otherwise you can get stuck down rabbit holes and paralysis sets in. So focus on getting just enough research to inform decision making.

    (By the way, I’m aware that “user” is a contentious term, so have mostly used it here for the sake of brevity. Feel free to substitute it with whatever term you feel most comfortable with.)

    2. It’s often easier to break big problems down into smaller problems

    This is one of the benefits of the agile process.

    However if you break things down too much, you lose the big picture view and entropy sets in. As such, you need to think both big and small.

    3. Sometimes you need to slow down to speed up

    However we can also spend a lot of time over-analysing problems that are intellectually unanswerable, so other times it’s better to think by making and learn by shipping.

    Knowing which approach to take when is tricky, and most people tend to default to one or the other.

    4. Design can provide a huge amount of value to business

    However designers often take this for granted and get frustrated when business people, who have been extremely successful without worrying about design, don’t immediately drink the Kool-Aid.

    The most effective designers become advocates for design, form alliances, and pick their battles sensibly. However they also realise it’s impossible to change hearts and minds overnight, so are in it for the long haul.

    5. Designers and technologists need to stop seeing themselves as separate from “the business”

    However it’s bloody hard to do. Especially if you feel unsupported and marginalised by your company.

    6. The key role in any design or technology team is the lead role

    They set the standards others follow, act as coaches and role models to juniors, while having the trust and ability to influence business stakeholders.

    However leads are all too often hired into management positions, and get sucked into an endless round of recruitment, budgeting, and planning meetings. They stop being effective role models, the team atrophies, and attrition and dissension start to rise.

    7. Everybody is largely trying to do the right thing

    However when your right thing gets blocked by somebody else’s right thing, you almost always end up writing that person off as difficult or ignorant.

    Rather than trying to convince the other person of your opinion, it’s better to understand their opinion, get agreement on the problem you’re trying to solve, and find a middle path.

    This is one of the reasons why working with people is hard, why decisions take so long, and why the results are often mediocre.

    8. It's better to design the right thing than design the thing right

    As designers we often get stuck down rabbit holes that nobody actually cares about, and end up producing a beautifully designed and engineered product that nobody wants.

    9. It’s easier to sell a well designed product than a poorly designed one

    As such, more of the money you were going to spend on marketing should be diverted into design.

    In short, marketing should support product. Not the other way around.

    Much as I believe this to be true, history is littered with the corpses of well-designed products that failed to capture the attention of the market, while there are plenty of crappy products out there that have gained huge success because of superior marketing.

    10. Product management is the hardest job in tech

    You have all the responsibility but none of the power. Everybody thinks they know better than you. It’s impossible to satisfy everybody. When things go well somebody else will get the credit. When things go wrong, you’ll get the blame (even if you flagged it up from the start).

    11. The reason most projects go wrong is because of a lack of shared understanding from the outset

    Everybody sitting around the table has a picture in their heads of what they want. They think it’s the same picture as the person sitting next to them has, but it’s not.

    The ability to visualise thoughts, beliefs, concepts and decisions is a super power. It gets everybody on the same page, or at least highlights where things unexpectedly diverge. It also prevents people from hiding behind ambiguity.

    ]]>
    How to move fast without breaking things https://clearleft.com/posts/how-to-move-fast-without-breaking-things Thu, 02 Jan 2020 09:45:00 +0000 https://clearleft.com/posts/how-to-move-fast-without-breaking-things Over the last few years, I’ve worked on various product teams at different levels of velocity, but I’ve noticed that moving quickly is not the same as moving effectively.

    Effective teams don’t just get things done — they also know why they’re doing things, and how what they’re doing feeds into a longer-term vision.

    Rachel's sketch of two superheroes
    Product superheroes aren’t born, they’re made

    Effective teams are flexible, they can pivot when they need to. And finally, effective teams realise that each of them has a specific role to play.

    Here are eight things I think contribute to creating a fast, effective product team:

    1. Creating strategy together

    One of the best product managers I’ve worked with understood that to get the team invested in the product strategy and vision, they had to be part of creating it. We collectively created a ‘North Star’ for the product, and each quarter we would look together at our business targets, then explore what opportunities we had to improve the product experience to achieve those goals. This meant our backlog was shaped and defined by the team. We also had the opportunity to add things to the backlog that maybe weren’t contributing towards our immediate goals, but that we thought would get us towards our North Star. These could be picked up from the backlog when we had time in between the more business-critical items.

    Understanding how our work directly contributed to business goals meant we all knew the purpose of our work, and kept us fully aligned.

    2. Collaborating

    Anyone who’s worked in a full multi-disciplinary team will appreciate how much more efficient you become when you have the right skills in place. It’s really hard doing things outside your area of expertise, so if for example, a designer has to also think about the copy, it’s going to take them a lot longer to get the job done. Enabling equal collaboration and leveraging different skills not only makes the team super-effective, but it also makes team members feel their contribution is valuable and gives them confidence.

    Confidence is essential for a team to push forward innovative ideas and take considered risks.

    Rachel's sketch of a team climbing a mountain

    3. Having a stepped process

    Effective teams understand that it’s virtually impossible to go from zero to hero overnight. Sustainable and effective change is a gradual process. Evolving something slowly rather than completely renovating it might sound counter-intuitive when speed is the goal, but it’s often the small incremental changes that add up to a big overall impact.

    In the process of making small optimisation tweaks on one part of a product, you’re also learning a lot that will feed into the bigger innovation projects or new features. The team that works most efficiently works on a combination of small changes they can learn from, and more strategic, bigger projects which they can enter into with the confidence of learning from the small things.

    4. Making things measurable

    Having a purpose is important, but it’s just as important to track progress against targets.

    The teams I’ve worked on who regularly see how they’re tracking against their goals are the ones that keep up momentum.

    Making goals and current results visible in a place where the team meets regularly is one way to make sure everyone’s engaged. It’s also really helpful for individuals to be able to go into monthly review meetings with a clear idea of how they’re contributing to the company’s targets. The shared accountability of targets means that everyone has the power to make an impact, and everyone can share the success when things are going well.

    A sketch showing a person and team in front of a chart

    5. Testing and learning

    Running A/B tests and split tests is one of the easiest ways to test an isolated change such as copy or visual design. It’s also a great way to test a hypothesis before you commit to bigger changes.

    Teams that don’t have the ability to test can run up against opposition to the changes they suggest, because there’s no way to justify the effort and resources needed to make the change. This leads to frustration and resentment within the team, which always slows down velocity.

    When small tests can be run, it enables teams to put a value against a change, which can then be used to estimate the incremental impact of rolling out the change permanently. This may only give a quantitative view of changes, but when teams have the opportunity to test, they feel much more empowered to come up with solutions.

    It’s also true that a team who have clear recommendations based on user research need to be given the space to iterate accordingly. If research has shown something to be a suboptimal solution, then any user-centred design team worth their salt will want to make sure they can amend it. Leaving enough time for iteration after research is vital for a team to feel confident in their solution. This can be hard to do when you have engineers itching to build, but I’ve found that the most effective teams get their design team running two or three sprints ahead of the build team for this very reason.

    6. Knowing when day two is

    Nothing is more demotivating to a team than the expression ‘Let’s look at that in day two’ when they know that day two will never come. Shipping an MVP is one thing, but taking away the ability to iterate is super-frustrating.

    Effective teams accept that they might need to postpone some things until after launch, but they have a backlog of prioritised items ready to go just as soon as they can. Of course, all backlogs are subject to change and refinement, particularly if there are teething problems or more urgent bugs that need fixing. But it’s helpful to set a ‘day two’ date, or a target date to get the first round of iterations made by, even if it’s more of a window than a deadline.

    Allowing a team to see there’s a future vision for the product and that their ideas haven’t been parked indefinitely is really important if you want the team to feel enthusiastic, motivated and inspired.

    7. Shipping regularly

    Designing but never shipping is really demoralising for a team. This can happen for many reasons in businesses, but it really damages productivity and morale. Nothing makes a team drag their feet more than the lack of a deadline. It’s also heartbreaking to invest time and effort into something which then sits on a Jira ticket or in a forgotten file for the rest of time.

    For this reason, it’s not a good idea to get the design team out too far ahead of development. When designs happen too far in advance they can get put on a development backlog and then superseded by other things.

    Finding a balance between longer-term strategic work and short term tactical changes is hard, but also one of the reasons why allowing testing is so important. Tests may seem small, but just the fact you’re putting changes live can be enough to keep a team motivated and moving at pace.

    Sketch of three people having a retro using post its on a wall

    8. Acting on regular retros

    As with testing and learning, retros allow you to get feedback and make small calibrations to your team’s ways of working. Effective product team leaders create psychological safety for team members to speak their minds in retros and really say what is or isn’t working. They then actually assign actions to the team (or themselves) to make the necessary changes.

    The teams that regularly give feedback, listen, and take steps to make things better are the ones which, over time, will move from storming and norming to performing. And top-performing teams are those which create brilliant work together.

    This post was originally published on Rachel’s Medium.

    ]]>
    Feel better - don't go it alone https://clearleft.com/posts/why-you-shouldnt-go-it-alone Mon, 16 Dec 2019 14:10:00 +0000 https://clearleft.com/posts/why-you-shouldnt-go-it-alone “A problem shared is a problem halved” - an old adage we’ve probably all heard from at least one elderly relative, and probably at a time when it was the last thing you wanted to hear.

    But it turns out there might just be some truth behind Great Aunt Nell’s words of wisdom - according to research by Age UK, nearly a fifth of UK adults have something constantly playing on their mind, with over half the population shouldering several worries a day, and nearly another fifth carrying around more than 10 worries at any one time.

    As a nation of worriers, perhaps these figures don’t come as a great revelation, but with the top topics of concern being finance, health, age and the stress of work - basically something for everyone - it seems worth considering just what improvements we can all make in order to help ourselves out a little.

    Personally I have always found I shared my problems easily (thanks Mum), and when it comes to work I’ve never been shy of saying what I think (thanks Dad). But over the last few months I’ve had several experiences (penny-drop moments even) that have helped me put into context what has been a somewhat challenging professional year for me.

    Penny-drop no.1

    Rewind to the end of last summer and I found myself in a bit of a position of professional isolation - even though I had peers and a great team around me, I often felt out on a limb, doubting, facing problems that it seemed no one else could see, or that hadn’t been there previously. At the time I wasn’t really sure what was going on - “is it just me?” - and even though I’m a confident person, my professional confidence seemed to be taking a hit.

    Add to that multiple responsibilities - people, clients, a whole host of other things - and my ‘problems’ seemed compounded. It was well-timed coincidence then that at the end of the summer I found myself on a bus with a group of wide-eyed attendees winding through the majestic Norwegian fjords to Clearleft’s Design Leadership retreat in the mountains.

    Now in the interest of brevity and out of respect for all those present, I’m skipping over the retreat itself here, apart from to say that it’s no surprise that taking time away from our day-to-day emails, meetings and routines in a setting that begged relaxation and reflection would yield positive mental outcomes. However what I didn’t expect was the revelation that in fact I wasn’t as ‘alone’ as I’d thought - it turns out that everyone on the retreat had similar (in some cases almost identical) professional concerns as me. Through the process of sharing these with each other it also became clear that we all had a lot of the ‘answers’ to these concerns - we’d just needed to hear someone else say it to believe it.


    When people share their worries with others, it can have a positive impact on their situation - of the 3 in 10 adults that regularly share their worries, over a third claim to feel brighter and more positive as a result, with others reporting feelings of relief and even the disappearance of their problems. For me this couldn’t have been more true - the net result of sharing with others was a vastly increased sense of personal professional confidence - knowing that others in similar roles with more experience than me shared my worries made a big impact, and helped me step back a little and see the (Norwegian) wood for the trees.

    Penny-drop no. 2

    Fast-forward from snowy September in Norway to rainy November in London, and I found myself in the audience once again at Clearleft’s stellar Leading Design conference. I won’t spend the time here extolling the conference’s virtues, but what I will say is that in my recently refreshed professional mindset I started picking up on lots of moments in which speakers, attendees and peers were openly sharing - “I thought it was just me”, “recently, there was this thing”, “oh, that happens for you too?”.

    Get a room full of like-minded professionals with similar experiences together and this kind of conversation is inevitable - indeed it’s one of the many side-benefits of conferences and meet ups - however, the step change for me was acknowledging the importance and frequency of those conversations and realising that there’s a lot more shared concern out there than I (we) all might have thought.

    Time for reflection

    I’ve started to try and notice these opportunities more and more, and have been making an effort to share (and be shared with) as much as possible. Now I’m not suggesting that by running off to hills to cleanse ourselves in nature whilst outpouring all our worldly worries we’ll solve all our professional troubles (what happens in Norway stays in Norway, right?!), but I am saying that you could do a lot worse for yourself than taking a little time to reflect, connect with others in a similar position to you, and notice the conversations that you’re having.

    You never know, you might just find that ol’ Nell was right after all. And what was it that she said about a ‘fortifying sherry’? It is the season after all…

    Our next Leading Design retreat, specifically for women (and those identifying as such), is in the Cotswolds, and applications are open now.

    ]]>
    Learnings from a design internship https://clearleft.com/posts/learnings-from-a-design-internship Fri, 13 Dec 2019 12:21:00 +0000 https://clearleft.com/posts/learnings-from-a-design-internship 15 weeks have flown by here at Clearleft. You can see the concept we designed at our website www.selftreat.co.uk.

    We’ve loved this project and it’s been everything an internship should be: fun, rewarding and challenging in equal measures. We have appreciated the privilege of driving a project with a high degree of autonomy but with the essential support and input from experienced designers and researchers.

    In this final blog from us we wanted to share our main takeaways:

    A worried well patient sitting on the sofa looking at her phone. A screen showing the self treat interface
    A still from our Self treat concept video

    We learned to turn challenge into opportunity

    During our research phase we ensured a good diversity of research. Uncovering conflicting research, which we initially saw as a challenge, showed us a problem that needed solving, and with the help of a fellow Clearleft designer this led to our dashboard concept to help bridge gaps in understanding between two groups of stakeholders.

    It taught us the importance of always being curious and getting underneath what people are saying; after all, they’re human, and subject to the same biases and behaviours as anyone else. UX design is all about questioning to get to the closest place of truth and designing from there.

    We became a bit more comfortable with not being comfortable

    The design process is fraught with unknowns. Again, making sense of a large set of problems left us in the period of the unknown for a long time. It’s perhaps a natural human desire to seek certainty, but we learned to relax a little more in the uncertainty and trust the process. To do that we had to use the design approach to guide us, and use the UX methods to help us. At points we questioned how we could add value and be impactful in a short period, conscious of lots of factors around feasibility and innovation. Ultimately though we kept moving forward, and we ended up with a service that we believe in.

    Collaboration strengthens design

    We’ve been working in a studio full of talented UX designers, researchers and visual designers. Whenever we were unsure of something, at a design milestone, or simply felt a chat would be useful, we reached out and used the experience of others to gain a different perspective. This is something we will continue to seek in our future roles as it was, without fail, incredibly insightful.

    We also really appreciated the team dynamic throughout the internship. We all come from different backgrounds and we have different ways of working but we realised how our methods complement one another.

    Language is crucial to user experience

    Healthcare is a fascinating, important area for technology to impact and we were excited to work in this space. We also appreciated more and more throughout our research that designing in this space has a greater depth and breadth of human emotion to consider. It’s also perhaps the industry that carries the most risk - getting it wrong can be literally fatal. Language is even more important, and safety-netting and considering edge cases were crucial.

    Don’t spend too much time in early stage research

    Our biggest takeaway was not to spend too much time in research. Keen to feel like we were making sufficiently informed choices, we took five weeks to narrow in on our focal topic before doing further research to go deeper. The pressure of making the ‘right decision’ weighed fairly heavily, but on reflection we knew intuitively in week one almost everything we had settled on by week five. We also learned that it’s easy to stay safe in secondary research. It’s hard having conversations with users in a large problem space, but we now know to talk to people sooner, even amid the uncertainty.

    Speaking to users earlier, and not staying ‘too wide’ for too long, would have given us a longer design phase to concept-test with more people, develop the prototype and conduct usability testing.

    Holly and Lacin sat at the kitchen table with laptops

    We’re looking forward to our next projects, taking with us the knowledge that every challenge is an opportunity, discomfort is normal, we should seek input at every opportunity, language is crucial, and most important of all, speaking to users early in any project is central to true user-centred design.

    You can read about the project at www.selftreat.co.uk.

    ]]>
    Leading Design London 2019 talks https://clearleft.com/posts/leading-design-london-talks Mon, 09 Dec 2019 15:43:00 +0000 https://clearleft.com/posts/leading-design-london-talks We’re excited to share the talks from Leading Design London 2019.

    Our Leading Design conference is a unique opportunity for design leaders from across the globe to come together, learn, be inspired and connect with their peers. Held at the beautiful (and brutalist) Barbican Centre, the talks cover Design Ops, self-evaluation, diversity and much more. We hope you enjoy watching them as much as we did.

    Once again a huge thank you to all of our amazing speakers, you make this conference what it is.

    Andy Budd on stage at Leading Design

    Day 1 Talks

    Day 2 Talks

    A man and two women at Leading design London in the Barbican conservatory
    ]]>
    Basil: Secret Santa as a Service https://clearleft.com/posts/basil-secret-santa-as-a-service Mon, 09 Dec 2019 09:00:00 +0000 https://clearleft.com/posts/basil-secret-santa-as-a-service I recently launched basil.christmas, a ‘Secret Santa as a service’.

    Basil has two modes. The first is a traditional list generator, letting you shuffle the participants and print off a nice, foldable list of names. The second is a little more involved, but considerably more exciting! A ‘head elf’ from your company can sign up and enter all participant names into the system. Basil will email everyone, letting them know who their giftee is.

    But that’s only the beginning. We’ve all been there; you start in a new company and you get assigned the one person you’ve never spoken to. This is where Basil steps in. In each email, there’s a unique link that, when clicked, anonymously emails the giftee, asking for a little nudge in the right direction. They can respond, all without knowing who has asked for help! Basil-based encryption!

    The basil.christmas home page

    The legend of ‘Basil’ started many moons ago at Clearleft, when Kate Bulpitt discovered the mysterious elvish hero. Kate *ahem* Basil organised the whole affair, emailing each person in the team; acting as the encryption go-between. This worked great, but meant that individual knew all the Secret Santarers. It sounded to me like an opportunity for tech!

    To be honest, it’s a bit of a silly side project, but there’s always ample opportunity for learning on websites like this. With no client constraints or budgets to consider, a side project is a great opportunity to try out new technologies and push one’s design chops.

    The stack

    Vue.js is my go-to framework. When set up with Nuxt.js, I find it really quick and empowering to build in. Rather than spin up a Node.js server, I opted to use the generate mode, and host the site on Netlify. It gives me CDN hosting and a great devops experience with zero configuration.

    The database & backend layer was an interesting choice, and where I focused my learning efforts. I opted for an avant-garde option of Airtable + Netlify Functions. Airtable is a lovechild of a spreadsheet and a database. The columns are typed, and the rows can be linked, so it’s possible to run it as a relational database. It has a very sensible API (and incredible live API docs). I used Airtable for our signature generator, but rather than use the HTTP REST API, this time I went for the npm module. Another learning opportunity.

    The module still uses callbacks, so there was a bit of ‘promisification’ required to get it working nicely with async/await.

    // Wrap the callback-based Airtable call in a Promise so callers
    // can `await` it. `base` is the Airtable base instance and
    // `Errors` is the shared error enum, defined elsewhere.
    exports.updateElf = (rowId, payload = {}) => {
      return new Promise((resolve, reject) => {
        base('Elves').update(rowId, payload, function(err) {
          err ? reject(Errors.Generic) : resolve();
        });
      });
    };

    Netlify Functions are ‘serverless lambda functions’ that run as individual backends. When called, they spin up and make calls to the Airtable database. All API authentication is handled with environment variables stored on Netlify, so when developing locally with Netlify Dev, your secrets are taken care of for you.

    Emails were handled by Mailgun. The biggest hurdle was getting the first email to send. Their documentation doesn’t currently mention a different API URL for EU domains. As soon as I found this stackoverflow answer, I was away.
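For anyone hitting the same wall, the fix boils down to pointing the client at the EU endpoint. A hypothetical configuration sketch, assuming the mailgun-js client (the `host` option selects the region, to the best of my knowledge; the domain is a placeholder):

```javascript
// Hypothetical sketch: mailgun-js pointed at the EU region.
const mailgun = require('mailgun-js')({
  apiKey: process.env.MAILGUN_API_KEY,
  domain: 'mg.example.com',   // placeholder sending domain
  host: 'api.eu.mailgun.net', // EU domains use a different base URL
});
```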

    Classic form POSTs

    Rather than use AJAX and JSON requests, as is the usual approach this decade, I ended up going old school and using form POSTs and redirects for the data exchange. This meant I could start from a base of solid HTML, without worrying about requests from JavaScript. It might not be quite as seamless having full page refreshes, but given most users will only see one form in the whole flow, it doesn’t harm the experience.

    Deciding when to add complexity, and when to hold back, is another skill that’s worth honing. So regularly do we reach for the shiny tool, when the slightly dusty one will do just fine. Even in modern tools, like Vue.js, there’s still plenty of power in the humble <form> and 302 redirect.

    Design

    I'm not a designer, but I do love Christmas

    With those credentials out of the way, I decided to have a crack at designing this site. Creative Market was my biggest friend on this project. There are so many über-talented individuals on that platform. As soon as I stumbled upon these creatures, I fell in love.

    The colour scheme and typography had to be suitably festive for such a project. Zeichen and DM Sans provided a nice mix of conversational ‘Basil tone’ and readable prose. A contrasting scheme of pink and dark blue, combined with lashings of noise, and topped with some wonderful snowflakes (created by Cassie), led to a suitably seasonal creation.

    The initial design direction was decided in Sketch, but I quickly moved to the browser to roll it out. Working in Vue.js components, I was able to swiftly build out the various form-based pages with great ease.

    Error codes

    Deciding on an error structure is easily overlooked. As this project used redirects, rather than JSON responses, it was important to establish a format that could be interpreted by the frontend.

    I came up with an ‘enum’ of possible error codes and shared it between the front- and backend. If the serverless function ever caught an error, it redirected the user to /error/${ErrorCode}/ where Nuxt rendered the appropriate message to the user. This was also an opportunity to play around with Basil’s tone of voice.

    {
      'TokenExpired': 'My dearest elf, it\'s time to log in again',
      'AlreadyContacted': 'Have patience, child. You\'ve already contacted that elf!',
      'FarTooMany': 'Humble apologies, you\'ve hit your elvish quota!'
    }
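To sketch how the two halves might meet (the helper names and the fallback copy below are invented for illustration): the backend turns a caught error code into a redirect, and the error page looks the code up in the shared map.

```javascript
// A trimmed copy of the shared error enum shown above.
const ErrorMessages = {
  TokenExpired: "My dearest elf, it's time to log in again",
};

// Backend: a caught error becomes a redirect carrying its code.
function redirectToError(code) {
  return { statusCode: 302, headers: { Location: `/error/${code}/` }, body: '' };
}

// Frontend: the error route looks the code up, with a fallback.
function messageFor(code) {
  return ErrorMessages[code] || 'Something went wrong in the workshop.';
}
```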

    Planning for 'appropriate scale'

    Basil was never going to take over the internet, but there was a chance a few others may like to use it. Rather than build it just for our internal use at Clearleft, I made sure the schema was set up to allow multiple groups to use the system. This did mean the added complication of putting in an authentication system, but that in itself was an opportunity to build a password-less authentication flow for the first time.

    There was no need to prepare for greater scale than that. I think we’ve all been burned in the past worrying about whether the stack will cope with X users, with no basis for whether any users will arrive. More and more, I’m realising that building for myself, keeping in mind not to be exclusionary, nor paint myself into a corner, is the best bet for web things.

    Give it a go!

    If you’re in the market for a Secret Santa generator, managed or otherwise, please feel free to give Basil a whirl!

    Visit Basil


    This was originally posted on my own site

    ]]>
    Which content expert do you need? https://clearleft.com/posts/which-content-expert-do-you-need Tue, 03 Dec 2019 00:00:00 +0000 https://clearleft.com/posts/which-content-expert-do-you-need As digital disciplines mature, it’s normal for them to fragment as practitioners begin to specialise.

    There is often overlap between areas, but it’s hard to find someone who is genuinely excellent at everything. When you look at how quickly the digital landscape is growing and evolving, it creates space (and a need) for people to specialise. This has been happening with content roles for a while.

    But how do you know who you need? It’s one thing to know you need a content expert, but an expert in what? If you try to create hybrid roles, it can result in even more job titles being created which confuses the market. And if you recruit one type of content specialist when actually you need another, you may end up finding skill-gaps in your team. The same goes when appointing an agency.

    To help ease some of the confusion around content roles, I’ve created a quick quiz. It’s just a guide – of course, it can’t account for every possible scenario. But it should give you a general indication of what kind of content expert you need. Enjoy!

    Which content expert do you need?

    ]]>
    5 Steps To Better Research https://clearleft.com/posts/5-steps-to-better-research Fri, 29 Nov 2019 12:00:00 +0000 https://clearleft.com/posts/5-steps-to-better-research Time and again, decisions made during a research project come back to haunt us. Square up to the usual suspects and take 5 steps toward delivering better research.

    We deliver better research when we:

    • Have conversations early
    • Let the experts do the recruitment
    • Check the calendar
    • Use a balanced ratio of research and analysis
    • Share the journey

    Have conversations early

    Shaping a research study properly is crucial for good results. Early conversations draw out learning objectives, assumptions, recruitment criteria and potential methods. Involving researchers in these discussions helps to develop a solid project foundation and saves time later on. Agency researchers benefit from early indications of a client’s research maturity, which helps frame future conversations and identify where the best value lies for the client. In-house researchers benefit by contributing inside knowledge of recent and related studies that may already offer actionable insight, helping to avoid repeating studies at a cost to the organisation.

    For both, there is an opportunity to start refining the research goals from day zero so there are no surprises or sudden u-turns at the project kickoff. Likewise, getting an early understanding of the research audience sets us up for a head-start on recruitment.

    Let the experts do the recruitment

    Recruitment is difficult and time-consuming. Although it offers an opportunity to start learning from our audience from day one, the reality is that practitioners seldom have the time. Recruitment is often overlooked and under-resourced as an activity, which ultimately impacts the project as a whole.

    This underestimation of effort also exists when clients take on recruitment, remarking “I never thought it would be this difficult!” after exhausting their entire panel in the first week with little to no progress. This is not due to a lack of effort on their part, rather the complexity and success ratio that goes with the territory.

    Actionable insights hinge on a carefully selected, pre-screened and representative sample of incentivised participants. When we cut corners or “just make do”, the resulting drop in quality is embarrassingly obvious. Garbage in, garbage out.

    Recruitment is a job in itself. Unless there is a dedicated role in-house, or you have the privilege of extra resources, bring in a trusted professional.

    Check the calendar

    When the research is conducted can have an undesired, negative impact on results. Each sector has its own peaks and troughs of activity. For instance, the travel sector’s trading period peaks over Christmas and New Year. During this time the stakes are high and there is little room for experimentation. Higher education institutions almost completely shut down over the summer holidays, making it difficult to gain access to stakeholders and students.

    It’s also worth considering whether other calendar dates might impact participants’ behaviour. For example, grocery shopping behaviours will differ over bank holidays and other seasonal dates. Likewise, family behaviours will be different over term time and school holidays.

    These key dates are often overlooked or not discussed as part of early conversations. Whenever planning research activities, it’s always worth asking: “Why are we specifically conducting the study during this period?” and “What might impact our research during this period?”

    Use a balanced ratio of research and analysis

    Rich insight comes from spending quality time with the research observations we collect. When the ratio of research collection to analysis is out of balance we compromise on the quality of the outputs. At the very least, we should follow a 1:1 ratio. For every hour we spend talking to participants, spend the same amount of time with our observations.

    When it comes to recall, our memories are fallible. As time passes we remember events less accurately and become more susceptible to our own biases. This is problematic when we schedule long, consecutive days of research with little time or opportunity to gather thoughts and discuss observations. Shortening the time between research and analysis, and introducing activities for sense-making, helps us maintain the integrity of our insights and avoid bias creep.

    Share the journey

    There’s no doubt that research is a team sport, and that its value is created through collaboration. Involving teams across different departments and areas of the business increases awareness and encourages discussion. Together we break down siloed structures within organisations.

    Encouraging a broad range of team members across the business to attend research sessions is a step toward creating internal research evangelists. Session attendees invariably find value in what they observe and bring this enthusiasm back into the business. Not only does this encourage cross-functional collaboration and the dissemination of customer insight but, more importantly, it amplifies the volume of the customer voice and drives the shift toward a truly customer-centric culture.

    This was originally posted on my own website.

    ]]>
    Mental models https://clearleft.com/posts/mental-models Thu, 21 Nov 2019 12:00:51 +0000 https://clearleft.com/posts/mental-models I’ve found that the older I get, the less I care about looking stupid. This is remarkably freeing. I no longer have any hesitancy about raising my hand in a meeting to ask “What’s that acronym you just mentioned?”

This sometimes has the added benefit of clarifying something for others in the room who might have been too shy to ask.

    I remember a few years back being really confused about npm. Fortunately, someone who was working at npm at the time came to Brighton for FFConf, so I asked them to explain it to me.

    As I understood it, npm was intended to be used for managing packages of code for Node. Wasn’t it actually called “Node Package Manager” at one point, or did I imagine that?

    Anyway, the mental model I had of npm was: npm is to Node as PEAR is to PHP. A central repository of open source code projects that you could easily add to your codebase …for your server-side code.

    But then I saw people talking about using npm to manage client-side JavaScript. That really confused me. That’s why I was asking for clarification.

    It turns out that my confusion was somewhat warranted. The npm project had indeed started life as a repo for server-side code but had since expanded to encompass client-side code too.

    I understand how it happened, but it confirmed a worrying trend I had noticed. Developers were writing front-end code as though it were back-end code.

    On the one hand, that makes total sense when you consider that the code is literally in the same programming language: JavaScript.

On the other hand, it makes no sense at all! If your code’s run-time is on the server, then the size of the codebase doesn’t matter that much. Whether it’s hundreds or thousands of lines of code, the execution happens more or less independently of the network. But that’s not how front-end development works. Every byte matters. The more code you write that needs to be executed on the user’s device, the worse the experience is for that user. You need to limit how much you’re using the network. That means leaning on what the browser gives you by default (that’s your run-time environment) and keeping your code as lean as possible.

    Dave echoes my concerns in his end-of-the-year piece called The Kind of Development I Like:

    I now think about npm and wonder if it’s somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we’ve co-opted on the client and I think we’re feeling those repercussions in the browser.

    Writing back-end and writing front-end code require very different approaches, in my opinion. But those differences have been erased in “modern” JavaScript.

    The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs.

    In a funny way, this situation reminds me of something I saw happening over twenty years ago. Print designers were starting to do web design. They had a wealth of experience and knowledge around colour theory, typography, hierarchy and contrast. That was all very valuable to bring to the world of the web. But the web also has fundamental differences to print design. In print, you can use as many typefaces as you want, whereas on the web, to this day, you need to be judicious in the range of fonts you use. But in print, you might have to limit your colour palette for cost reasons (depending on the printing process), whereas on the web, colours are basically free. And then there’s the biggest difference of all: working within known dimensions of a fixed page in print compared to working within the unknowable dimensions of flexible viewports on the web.

Fast forward to today and we’ve got a lot of Computer Science graduates moving into front-end development. They’re bringing with them a treasure trove of experience in writing robust scalable code. But web browsers aren’t like web servers. If your back-end code is getting so big that it’s starting to run noticeably slowly, you can throw more computing power at it by scaling up your server. That’s not an option on the front-end where you don’t really have one run-time environment—your end users have their own run-time environment with its own constraints around computing power and network connectivity.

    That’s a very, very challenging world to get your head around. The safer option is to stick to the mental model you’re familiar with, whether you’re a print designer or a Computer Science graduate. But that does a disservice to end users who are relying on you to deliver a good experience on the World Wide Web.

    This was originally posted on my own site.

    ]]>
    Becoming medical experts in the world of design https://clearleft.com/posts/becoming-medical-experts-in-the-world-of-design Wed, 20 Nov 2019 09:45:00 +0000 https://clearleft.com/posts/becoming-medical-experts-in-the-world-of-design In our last blog we shared how we moved from concepts to our chosen design problem. Here we want to bring you up to speed on our design process.

Now that we’re deep in the design phase of this project, storyboarding is becoming incredibly helpful. Not only does it help us build empathy for different groups of patients suffering from an illness, but it also identifies the different pain points and opportunities in people’s experiences. It’s amazing what simple sketching can do to uncover things you hadn’t thought of. We’ve been sketching everyone from ‘stoic men’ who have a tendency to avoid self-care, all the way to our ‘worried well’ who need that extra reassurance. With the extensive medical terminology, diagnosis and treatment information we’re gathering along the way, we’re considering ourselves quite the medical experts in the world of design right now.

    storyboarding intern project
    Storyboarding

    Throughout designing we wanted to be mindful of the barriers to self-care. The main ones include:

    • Anxiety and belief that the severity and duration are beyond normal
    • Past experiences of prescription for minor ailments deemed confirmation of the need for medical intervention in any future illness
    • Lack of sufficient knowledge and skills to implement self-care
    • Lack of attention to self-care

    To do this we flipped the various barriers into opportunities, framed as ‘How Might We’ statements. In essence we stepped down a level into more granular framing of the problem. You can read more about ‘How Might We’s’ in one of our blogs here.

    We began ideation at a high level, imagining different self-care structures, via design studios that we shared with you in our previous blog. With our focus on the exciting opportunity to empower people to “self-prescribe” and access more personalised self-care advice we diverged our imagination again – how could we best design for that? With the backdrop of storyboards showing us where points of intervention were needed we sketched, and sketched some more. As we were sketching, we focused on the users’ needs in each square of the storyboard and came up with solutions to address those needs. Some of these were larger concepts, others appeared to be features, but it gave us a lot to work with.

    Our extensive set of How Might We's

    We seemed to be aligned in what ideas should be set aside for now. At this point we started to see the connection between the more compelling ‘mini-ideas’ and the formation of a user journey that addresses a range of problems and opportunities for our different personas. Fleshing this out on Post-its meant we could easily shift things around or cut what we later felt was unnecessary. As always, we sourced feedback from Clearlefties, this time to ‘sense-check’ the wider service as well as give a different eye to the detail. A conversation with one of our developers helped clarify technical constraints and options for delivery. As a result we decided that our concept will be in the form of a progressive web app.

    Beginning to form a basic user journey

    Although we’re designing a digital solution, it has non-digital interfaces and integration that makes it somewhat service-like. We’re now guerilla testing the concept whilst considering the content and copy of the screens. Although good user experience always relies on an almost pedantic consideration of copy, designing in the health space perhaps depends on this even more. It means on top of the brand voice, clarity, instilling trust and other content considerations, we need language that reassures, and keeps people 100% safe. We’re enjoying the challenge!

    You can follow us on twitter @clearleftintern for regular updates on the project.

    ]]>
Tiny Lesson: Sketching, the timeless design tool https://clearleft.com/posts/the-timeless-design-tool Mon, 18 Nov 2019 00:00:00 +0000 https://clearleft.com/posts/the-timeless-design-tool I’ve been working in design for 15 years, and I’ve relied on and honed one flexible thinking tool my whole career. One that is almost barrier-less, and whose speed makes it incredibly powerful.

    That tool is sketching.

Anyone who has worked with me for any amount of time will know that I literally draw as I think, and am likely to leave little traces of my thinking in the form of doodles all over the place. Not always ideal, I might add, in a paperless corporate culture or a tidy polished agency environment!

I don’t just sketch interfaces, I sketch strategy frameworks, concept models, storyboards and random diagrams. I live sketch too in workshops, which is probably more of an expert skill, but an extremely valuable alignment tool in a workshop environment. Sketching has simply become my way of thinking.

    Over the years, many people have asked me to give them sketching tips and to be honest I’ve always put it to the back of my priority list as I’ve almost seen it as not the most valuable part. The challenging part I’ve always thought is working out what to sketch in the first place. On reflection, I wonder if I’m wrong. I’ve done it so much, it comes so naturally now that I find it very difficult to work without a pen and a piece of paper. Like everything, it takes a degree of confidence to put pen to paper and certainly in full view of others.

    Sketching

    Recently I was asked whilst working at a client’s office to provide a sketching workshop and this time I accepted. So these were my top tips:

    Use good pens

I consider a trip to a stationery shop a real treat, and the array of pens on offer is immense; without meaning to you can easily spend a fortune. My favourite pen, though, is actually not one of the fancy ones: it’s a trusty Paper-Mate Fineliner. I discovered it in an agency three jobs ago and it’s made the stationery cupboard of every agency I’ve been at ever since (and now Clearleft). A good pen, not too thin and not too thick, makes a massive difference when sketching.

    Use colour strategically

Before you start a sketch, consider how you are going to use colour and be consistent with it. Typically my main sketch is in black pen; I use a red pen for linking arrows, reserve one colour for the title and notes, and a different colour for questions.

    Use highlighters to lift the edges

    As well as good drawing pens consider buying a grey shadowing pen to lift the sketch. I typically use Tombow Dual Brush to add a sort of drop shadow to the edge of the sketch and key elements within it. You’ll be amazed what difference it makes.

    Sketching out a system
    Using sketching to illustrate a platform's system during a kick-off workshop

    Start messy and then draw a neat version

If you want to use a sketch to communicate an idea, I’d suggest drawing a neat version after some messier doodles.

    Practice readable writing - write in uppercase

My natural handwriting is very messy and I always wondered at school how on earth examiners would be able to read it. Over the years though I’ve developed a neat uppercase writing style that I use for most sketches.

Scan and tidy up in Photoshop

If I am adding my sketches to a deck I always use Photoshop to clean them up a bit. Scanning can sometimes add little marks that look messy, and occasionally I add colour to key buttons to lift them.

    Doodle of a user group or persona from a workshop
Using sketching to illustrate user groups during a client workshop

    We find sketching can be an incredible ‘unblocker’ when it comes to designing a complicated system or service, as well as getting stakeholder buy-in. We’d love to hear how you use sketching on Twitter.

    ]]>
    Professional Development Framework: Roles https://clearleft.com/posts/professional-development-roles Thu, 14 Nov 2019 10:55:00 +0000 https://clearleft.com/posts/professional-development-roles It’s been 7 months since we publicly released our professional development framework. In that time we’ve received lots of helpful feedback from the design community in terms of how it’s been useful, how it could evolve, and where we want to take it next.

    From supporting individuals and teams working with our framework directly to inspiring design leaders to create their own professional development frameworks using ours as a starting point, we’ve been humbled by how helpful people have found it so far, and energised by the potential to do so much more with it.

    We were also fairly surprised by how extensively the framework has been shared and put into action. What we published was very much in an Alpha state, missing some essential context of how to use it, and some nice data visualisation artefacts that help to make it more practical to use, both of which we were aware would limit its effectiveness.

    Nonetheless, we’re happy to be able to elaborate further on the framework by demonstrating how it can be used to define specific roles and their career path(s) in relation to some role archetypes relevant to us: UX Designer, Product Designer and Design Researcher.

    We’re also pleased to announce our Professional Development Framework is now available on Progression at clearleft.progressionapp.com, which elaborates further on these role specifications and their respective primary career paths.

    Framework principles

    As we touched upon on the initial release, there are a few critical principles behind our framework:

    Measure what matters
Define an appropriate level of specificity that neither overwhelms nor abstracts what’s important.

    Shape behaviour
    “What gets measured gets done”, so only measure what you want to influence.

    Exemplify values
Soft skills, attitudes and behaviours, such as collaboration and empathy, are just as important as hard skills, if not more so.

    Be role agnostic
    Support an approach to professional development that can scale and flex to multiple roles, initially within digital but potentially beyond.

    Open the conversation
    The quantification of professional development into numbers is meaningless without the conversation framing it (as Jason Mesut also alluded to in his talk at Leading Design London this year.)

    A professional development framework is the starting point for a dialogue between people. It is a means to an end, rather than the end itself.

    © Hayden Slaughter

    Defining a role

    Defining a role should hopefully be as simple and obvious as it sounds:

    Step 1. Shortlist the prerequisite skills
    The framework currently contains 20 skills. Not all skills are relevant to measure on every role, so a useful starting point is to filter out the things that aren’t essential to the role, or shortlist the ones that are.

    Step 2. Benchmark the proficiency
    When you’ve decided on the skills to measure, the next step is to decide on the proficiency level required of each skill for the role.

    Step 3. Plot the career path(s)
    As a general rule, aim for a tiering that allows skills to ‘level up’ logically as the seniority of the career path for that role evolves. This isn’t necessarily always relevant however, given that the Mastery level of each skill can be extremely hard to accomplish, and skill focus areas can naturally change based on seniority (see example below).

    Step 4. Ensure roles complement each other
    No discipline exists in a silo, and your framework should ensure roles when combined together provide a set of skills that is greater than the sum of the parts.
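The four steps above can be sketched as data. Here’s a hypothetical example, assuming the Novice/Intermediate/Expert/Mastery proficiency levels; the skill names and benchmarks are illustrative, not our actual framework data:

```javascript
// Hypothetical role definition: shortlisted skills plus per-tier benchmarks.
const uxDesigner = {
  // Step 1: shortlist the prerequisite skills from the full framework
  skills: ['Collaboration', 'Architecture', 'Validation', 'Exploration', 'Craft'],
  // Steps 2 & 3: benchmark the proficiency of each skill, tiered by seniority
  tiers: {
    Junior:    { Architecture: 'Novice',       Validation: 'Novice' },
    Midweight: { Architecture: 'Intermediate', Validation: 'Intermediate' },
    Senior:    { Architecture: 'Expert',       Validation: 'Intermediate' }
  }
};

// Step 4: roles should complement each other; the combined team covers
// more skills than any single role does on its own.
function teamSkills(roles) {
  return [...new Set(roles.flatMap(role => role.skills))];
}
```

Modelling roles this way makes the complement check in step 4 mechanical: combine two role definitions and see whether the union of skills is broader than either alone.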

    Example: The UX Designer Role

    Here’s a practical example of how we’ve put this into action at Clearleft…

    At Clearleft we use a set of 9 skills intrinsic to our values which we feel are important to measure in all roles at the company. These are:

    Communication

    • Collaboration
    • Presentation
    • Feedback

    Problem solving

    • Initiative
    • Methodology
    • Planning

    Empathy

    • Relationships
    • Support
    • Human-centricity

    That’s already a fair few skills to measure, so we’re very conscious not to overwhelm this list with an unwieldy number of additional things to measure.

    For our junior and midweight UX roles, as an example, we measure around 12 skills in total. This grows to 15+ skills for senior roles. These additional discipline-specific skills in UX are:

    • Architecture
    • Validation
    • Exploration
    • Craft

It’s worth stating what may already be obvious here: the framework categorisations of Core skills, Strategy, Design, Leadership, and Operations are not necessarily analogous with roles. A role can comprise skills from multiple categories.

Also of note is that at Clearleft, around three quarters of what we measure is based on the same foundational behaviours, with the remaining 25% on role-specific hard skills. That may feel like a disproportionate split, but Clearleft’s collective values and collaborative, consultative and open design approach are intrinsic to our way of working and continued success as a design studio, more so than any discipline-specific hard skills, hence this focus.

    At Clearleft we have three simple tiers of Junior, Midweight and Senior to most roles. This generally maps nicely to the Novice, Intermediate and Expert proficiency levels in the framework, but as already mentioned, it isn’t always the case that skills follow this obvious progression, or that the tiers for the skill are relevant to the seniority of the role.

    As a demonstration, our midweight UX Designer role skills shift considerably when levelling up to UX Strategists.

    e.g. UX Career Path - from midweight to senior

    Example: The Design Team

A common team shape at Clearleft might be a UX Designer, Product Designer and Design Researcher, which would provide the following complementary skills mix for our clients.

    e.g. 3 person design & research team

    Another probably even more common team might be a UX Strategist and Product Designer, which would still create a strong mix of complementary skills. It’s also worth bearing in mind that all our UX Strategists have already accumulated the skills of our midweight UX Designers, such as intermediate-level Validation skills, so there is more depth than may be initially obvious looking at a single role description alone.

    e.g. 2 person strategic design team

    You can therefore see how we shape different teams, for different clients, based on the needs of each specific project and the skills we measure within the individuals in our team.

    Next up from our professional development framework will be some visualisation tools that help you benchmark the individuals in your team against the roles you have defined, and start to understand where they’d like to go next.

    Get in touch if you’d like us to help apply it to your company.

    ]]>
    Microfrontends with Vue.js https://clearleft.com/posts/microfrontends-with-vue-js Wed, 13 Nov 2019 11:30:00 +0000 https://clearleft.com/posts/microfrontends-with-vue-js We’ve recently been building a complex search integration for a holiday provider using a microfrontends approach in Vue.js.

I’ve noted in the past that there’s a common trap many fall into with frontend frameworks:

    Not all of your app/site needs to be controlled by JavaScript. When you dive into the framework pool, the natural, and most documented step is to write it all in the framework.

This site uses a traditional server-rendered stack, so we began from a position of great markup, and added layers of functionality. Instead of using Vue CLI to spin up a new Vue website, we opted to import Vue into the existing JS pipeline, a Gulp build system provided by the client. It meant losing some of the HMR benefits that come from a well-tuned Webpack setup, but it was definitely the most appropriate and comfortable fit for the client.

    That’s one of the fun parts of agency-life, each project is a case of finding the balance between user needs, client appropriateness, and developer appetite.

    Requirements

    The site needed to render components for various features on the results page:

    1. Search panel
    2. Search results & pagination
    3. Search filters
    4. Search sorting

    And a different set on the details page:

    1. Search panel
    2. Holiday details
    3. Edit holiday details
    4. Holiday pricing

    Choosing an approach

    Option 1

    Write isolated Vue instances that read data from localStorage & the URL, and work independently.

There was some definite initial appetite for this choice. Truly encapsulated components are the utopian dream, but they have a major downside: code duplication. If each component is working in isolation, they all have to do a lot of similar grunt work (reading URLs, catching errors, responding to APIs).

    Option 2

    Use a single Vue instance wrapper and use portal-vue to push the components into the correct parts of the non-Vue DOM

    The client had previous experience with portal-vue, so this seemed like a sensible route. But for our use-case, portal-vue simply acted as an unnecessary abstraction around Vue’s default instance mounting logic. Sharing a Vue instance would also mean prop-drilling and event bubbling galore. Not fun.

    Option 3

    Write a Vue instance per feature, mount them to various DOM nodes, and link them all together with a Vuex store.

Vue already has a ‘portal’ system out of the box: the $mount() function. This, combined with Vue’s ability to automatically inject the store dependency into all child components, makes this option extremely powerful. Every feature of the website is its own distinct Vue application, but with the option to hook into global data if required.

    The code

    Here’s how we instantiated the various Vue applications:

    import Vue from 'vue';
    import SearchPanel from './components/SearchPanel/SearchPanel.vue';
    import SearchResults from './components/SearchResults/SearchResults.vue';
    import Feefo from './components/Reviews/Feefo.vue';
    import store from './store';
    
// `storeDefaults` is provided elsewhere on the page as server-rendered JSON
store.dispatch('init', storeDefaults);
    
    const vueRoots = [
      {
        id: 'search-panel',
        component: SearchPanel,
        store
      },
      {
        id: 'search-results',
        component: SearchResults,
        store
      },
      {
        id: 'reviews-box',
        component: Feefo
      }
    ];
    
    vueRoots.forEach(({ id, store, component }) => {
      if (document.getElementById(id)) {
        new Vue({
          store,
          render: h => h(component)
        }).$mount(`#${id}`);
      }
    });

    After importing our top-level components, we instantiate the store, passing in any default state (driven by server-rendered JSON). This backbone of data includes items like API endpoints and feature flags.

    Next we have a list of all the Vue instances we wish to load. It’s here where we pass in the reference to the store for the components that require it. In the example above, the Feefo component doesn’t need access to the store, so it doesn’t get lumbered with it.

Finally, we loop over the objects and check whether the mounting node is on the current page. Vue will still instantiate a component even if the element isn’t available. While this can be helpful, it’s not the desired effect for this site.
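For context, the `init` action dispatched earlier might be as simple as seeding state with the server-rendered defaults. Here’s a rough, hypothetical sketch of the store configuration; the project’s real store will have more to it, and all names below are illustrative:

```javascript
// Hypothetical shape of the store module's state-seeding logic.
// This plain object would be passed to `new Vuex.Store(...)`.
const storeConfig = {
  state: {
    endpoints: {},    // API endpoints, from server-rendered JSON
    featureFlags: {}, // feature flags, from server-rendered JSON
    results: []
  },
  mutations: {
    // shallow-merge the server-rendered defaults into state
    seed(state, defaults) {
      Object.assign(state, defaults);
    }
  },
  actions: {
    // `store.dispatch('init', storeDefaults)` lands here
    init({ commit }, defaults) {
      commit('seed', defaults);
    }
  }
};
```

Keeping the seeding behind an action means any component can re-initialise or extend the backbone data later via the same pathway.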

    Accessing data

    The beauty of this approach over option two, is the way Vue injects the store as a property of this in all child components & mixins.

    Reading global data is so clean:

    export default {
      computed: {
        hasResults() {
          return !!this.$store.state.results.length;
        }
      }
    };

    Writing data is a little more involved. Due to the actions/mutations pattern in Vuex, we can’t write directly to this.$store.state properties.

    One approach I tend to lean towards is a generic state updater mutation, which works for most use-cases:

    const mutations = {
      updateState(state, payload) {
        const payloads = !Array.isArray(payload) ? [payload] : payload;
    
        for (const { key, value } of payloads) {
          state[key] = Array.isArray(value) ? [...value] : value;
        }
      }
    };

    Writing to global state is then achieved with the following call:

    this.$store.commit('updateState', {
      key: 'stateKey',
      value: 'new-value'
    });
    
    // Multiple value update
    this.$store.commit('updateState', [
      {
        key: 'stateKey',
        value: 'new-value'
      },
      {
        key: 'aDifferentStateKey',
        value: 'another-value'
      }
    ]);

    Vuex helpers

    Store state is generally read in with computed properties, but it can be cumbersome writing this.$store.state.key if you’re pulling in a lot of data. Vuex comes with some handy helper methods out of the box for this very purpose.

    import { mapState } from 'vuex';
    
    export default {
      computed: {
        ...mapState(['results']),
    
        hasResults() {
          return !!this.results.length;
        }
      }
    };

    mapGetters, mapActions and mapMutations work in the same way, and go a long way towards cleaning up your code.

    Microfrontends benefits

The biggest win from this approach came when we realised that the ‘Edit holiday details’ component really should be working against the same dataset as the ‘Search panel’. Had they been written as isolated components, reading data in from the URL and working independently, it would’ve been a nightmare to unpick. This approach made it very straightforward to conform the two components retrospectively.

    This approach doesn’t give you the ability to communicate directly between components, but that’s broadly seen as a feature, not a bug. Firing events out and receiving data in is a scalable component design paradigm. Components, where possible, should be a reflection of state, not stateful.
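As a sketch of that paradigm, here’s a hypothetical component (not one from this project) that renders purely from the data it’s given and emits an event rather than mutating anything itself:

```javascript
// Hypothetical "reflection of state" component: props in, events out,
// holding no internal copy of the data it displays.
const SortSelect = {
  props: ['options', 'selected'],
  computed: {
    // derived purely from props
    isDefaultSort() {
      return this.selected === this.options[0];
    }
  },
  methods: {
    // notify the parent (or dispatch to the store) instead of mutating locally
    onChange(value) {
      this.$emit('sort-changed', value);
    }
  },
  template: `
    <select @change="onChange($event.target.value)">
      <option v-for="o in options" :value="o" :selected="o === selected">
        {{ o }}
      </option>
    </select>
  `
};
```

The parent, or a store subscription, owns the state; the component just reflects it and reports interactions upward.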

    We’re used to the idea of systematic design, and component-based thinking, but this frontend infrastructure modularity feels like an interesting space. There’s certainly more to be explored when it comes to bundle sizes, dynamic imports, and working with multiple stores/modules, but on the back of this project, it feels like a very viable approach to adding a bit of framework spice to a server-rendered site.

Preact & Unistore do a great job of injecting different dependencies into a component if you’re on the other side of the JS fence! I’ve written about that previously here.

    This was originally posted on my own site

    ]]>
    FF Conf 2019 https://clearleft.com/posts/ff-conf-2019 Mon, 11 Nov 2019 15:42:31 +0000 https://clearleft.com/posts/ff-conf-2019 A report from Brighton’s unmissable annual front-end gathering.

    Friday was FF Conf day here in Brighton. This was the eleventh(!) time that Remy and Julie have put on the event. It was, as ever, excellent.

    It’s a conference that ticks all the boxes for me. For starters, it’s a single-track event. The more I attend conferences, the more convinced I am that multi-track events are a terrible waste of time for attendees (and a financially bad model for organisers). I know that sounds like a sweeping broad generalisation, but ask me about it next time we meet and I’ll go into more detail. For now, I just want to talk about this mercifully single-track conference.

    FF Conf has built up a rock-solid reputation over the years. I think that’s down to how Remy curates it. He thinks about what he wants to know and learn more about, and then thinks about who to invite to speak on those topics. So every year is like a snapshot of Remy’s brain. By happy coincidence, a snapshot of Remy’s brain right now looks a lot like my own.

    You could tell that Remy had grouped the talks together in themes. There was a performance-themed chunk right after lunch. There was a people-themed chunk in the morning. There was a creative-coding chunk at the end of the day. Nice work, DJ.

    I think it was quite telling what wasn’t on the line-up. There were no talks about specific libraries or frameworks. For me, that was a blessed relief. The only technology-specific talk was Alice’s excellent talk on Git—a tool that’s useful no matter what you’re coding.

    One of the reasons why I enjoyed the framework-free nature of the day is that most talks—and conferences—that revolve around libraries and frameworks are invariably focused on the developer experience. Think about it: next time you’re watching a talk about a framework or library, ask yourself how it impacts user experience.

    At FF Conf, the focus was firmly on people. In the case of Laura’s barnstorming presentation, those people are end users (I’m constantly impressed by how calm and measured Laura remains even when talking about blood-boilingly bad behaviour from the tech industry). In the case of Amina’s talk, the people are junior developers. And for Sharon’s presentation, the people are everyone.

    One of the most useful talks of the day was from Anna who took us on a guided tour of dev tools to identify performance improvements. I found it inspiring in a very literal sense—if I had my laptop with me, I think I would’ve opened it up there and then and started tinkering with my websites.

    Harry also talked about performance, but at Remy’s request, it was more business focused. Specifically, it was focused on Harry’s consultancy business. I think this would’ve been the perfect talk for more of an “industry” event, whereas FF Conf is very much a community event: Harry’s semi-serious jibes about keeping his performance secrets under wraps didn’t quite match the generous tone of the rest of the line-up.

    The final two talks from Charlotte and Suz were a perfect double whammy.

    When I saw Charlotte speak at Material in Iceland last year, I wrote this aside in my blog post summary:

    (Oh, and Remy, when you start to put together the line-up for next year’s FF Conf, be sure to check out Charlotte Dann—her talk at Material was the perfect mix of code and creativity.)

    I don’t think I can take credit for Charlotte being on the line-up, but I will take credit for saying she’d be the perfect fit.

    And then Suz Hinton closed out the conference with this rallying cry that resonated perfectly with Laura’s talk:

    Less mass-produced surveillance bullshit and more Harry Potter magic (please)!

    I think that rallying cry could apply equally well to conferences, and I think FF Conf is a good example of that ethos in action.

    This was originally posted on my own site.

    ]]>
Developing Concepts https://clearleft.com/posts/developing-concepts Mon, 04 Nov 2019 18:15:34 +0000 https://clearleft.com/posts/developing-concepts A few weeks ago we shared our work in defining the problem space and moving to design. In this week’s blog we’re showing you that our design process hasn’t been linear: we’ve been conducting research and developing concepts in parallel.

    Expanding research

    We made the considered decision to focus our work on helping people better prevent and self-treat minor illness, reducing the GP time spent on these conditions. But we soon realised we wanted to build a richer connection to people’s experiences, and that we could do this in parallel with beginning to consider solutions at a conceptual, high level.

    GP reception staff at our pilot surgery had already painted us a picture of their daily challenges via interviews. We were fortunate to expand on this by shadowing the team over two days, gauging the demands of patients first-hand and how the staff serve them. We were struck by the opportunities for operational efficiencies in managing more routine or administrative needs, and by the lack of any drive to digitally enable patients, something we suspect is common given the pressures on receptionists as things stand. It was also interesting to hear the variation in how people are triaged, which showed the personalised approach the reception team are proud to give.

    Initial concept sketches and affinity map of interview insights

    There is a significant body of research quantifying the problem of GP use for minor conditions, but we wanted to go deeper into why people see GPs for minor ailments and when and why they use other sources of support. We ran a survey to answer these questions. It told us that people generally go to their GP when they feel symptoms have been going on too long, are concerned something is more serious, or want reassurance, even when symptoms are not severe. It also showed us that people are not consistent in their self-care practices; interestingly, only 20% of respondents were “ideal” self-carers. In-depth interviews revealed that people tend not to receive self-care advice from GPs and pharmacists, and that online information is a source of anxiety and greater GP use, not something that helps avoid it.

    Coming up with concepts

    As we deepened our research we began concept ideation to design for self-care. How could we increase self-care and thereby reduce use of GPs? A sketching workshop and informal discussion with the design team at Clearleft generated many post-it note concepts, everything from wider community wellness, self-care kits, NHS footprinting and appointment tiering, to name a few. The idea at this stage was to think big and sketch fast.

    Armed with post-its, sharpies and sketching icebreakers we took the design process to Henfield Medical Centre. We wanted to ensure the ideas of experienced staff were given a voice. It also made the pool of ideas as diverse as possible, and we felt it helped create a partnership approach in the project.

    Designing with the Henfield Medical Centre team

    An intern team design studio gave us the space to individually generate ideas and share these. The research we ran alongside the ideation process identified other important opportunities so we included these problem areas in our sketches. A second sketching round allowed for the collective pool of ideas to be turned on their head, combined, and moulded into new forms. Having completed this we aggregated all the fantastic ideas and whittled these down to the most exciting concepts.

    In a workshop with Clearleft that allowed for voting and open discussion, we narrowed our concepts down to three key ones, two on the patient side and one on the staff side:

    • Integrated self-care: Currently self-care provision in GP surgeries is inconsistent and outdated, and surgery staff don’t signpost people to self-care. This concept is about designing integrated, engaging and more personalised self-care support into the NHS primary care journey.
    • Self-prescribing: Although many people want to self-care, applying it to daily life isn’t easy. Self-prescription would empower people to prevent and self-treat conditions using their daily routines and practices.
    • Reception insights dashboard: Front-line teams at Henfield Medical Centre don’t have any insight into the trends in demand coming to the surgery. This means operational inefficiencies go unnoticed and, more interestingly, changing medical demands are not visible. This concept would empower staff to address inefficiencies and deliver targeted, more real-time community health improvements.
    Concept Gallery

    Empowering self-care

    Using the insights from conversations with the team, Dr Sheppard and our own analysis we’re beginning to design how self-care can be better supported in daily lives. We’re excited to start developing this into something more tangible using our personas and storyboards.

    You can follow us on Twitter @clearleftintern for regular updates on the project.

    ]]>
    Design Effectiveness Report 2019 https://clearleft.com/posts/design-effectiveness-report-2019 Mon, 04 Nov 2019 12:00:00 +0000 https://clearleft.com/posts/design-effectiveness-report-2019 We surveyed designers from hundreds of organisations to uncover three factors that impact design effectiveness.

    Earlier this year we surveyed over 400 designers working in many different sectors and locations around the world. Our aim was to investigate the current state of design and to determine under what conditions design could be most successful.

    Get the report

    Take a look at the design effectiveness report

    We’ll be running the survey again early in 2020. If you would like to participate or be notified when the results are out, please drop us your details:

    Increasing design impact

    Like any investment or operational cost, design needs to have an impact on the goals of the organisation. We concentrated our data analysis on the organisational and day-to-day practices most likely to be in place in companies where design was said to have ‘contributed to an increase in sales, competitiveness, and/or brand loyalty’.

    The data indicates there are three key factors to increasing design effectiveness. All three ring true here at Clearleft. We’d love to hear your thoughts on if, and how, these factors are nurtured in your organisation. We will be repeating this survey every year to track changes.

    1. Empowered by management

    The highest performing design teams are those which are empowered by executive management to identify and pursue unplanned or unrequested ideas.

    Over half of design teams have the freedom to iterate solutions rather than be expected to get a perfect solution first time. This is incredibly important in maturing your design function, and why autonomy is a core part of life at Clearleft.

    However, less than half of the respondents work by developing hypotheses and experiments to test ideas and solutions, showing a gap in structure to enable this freedom.

    2. The right environment

    Creating a physical environment that supports collaborative design activities is another essential factor present in 9/10 organisations where design is making an impact. Spaces that break down siloes and enable dissemination of ideas and process are not to be underestimated.

    This means nurturing an environment where all disciplines can collaborate closely throughout the design and development processes, and giving all employees a good sense of customers and their needs.

    While it’s encouraging that design impact can be improved by making simple environmental changes, it’s worth keeping in mind that while a collaborative workplace is important, the empowerment of design teams by executive management to pursue unplanned ideas is vital to design impact.

    3. The importance of doing research

    Over half of the design teams that have contributed to an increase in sales, competitiveness, and/or brand loyalty do design research regularly or at scale. In contrast, of those organisations where design was not having an impact, 95% are undertaking little to no design research.

    Companies with the most effective design functions have integrated research and design teams. They are set up with research and design distributed throughout the organisation. The work of those disciplines is shared with the rest of the employees so that research and design become fundamental to the decision making and strategy of the organisation. It’s easy to see why and how this is beneficial to the business results as a whole, and perhaps why we’re seeing the increasing rise of Research ops.

    Attendees of our Design Leadership Breakfast getting an early look at the report

    What’s next?

    We hope you enjoyed the insights we gleaned from this survey. By downloading the report you should gain some ideas about what your company can do to improve the effectiveness and impact of its design teams. Drop us your details to be part of the 2020 survey.

    If you’d like to discuss the report on the phone or in person, or share insights from your own company email us at info+report2019@clearleft.com

    ]]>
    Connect your content for better products and services https://clearleft.com/posts/connect-your-content-for-better-products-and-services Mon, 21 Oct 2019 09:15:00 +0000 https://clearleft.com/posts/connect-your-content-for-better-products-and-services Whether you’re a content strategist, service designer, or work in customer experience, you need to know how content connects across your business.

    Understanding the content that finds its way to the customer, or informs how you communicate to customers, is vital to understanding the experience you’re providing across all interactions customers have with your brand.

    Here’s an example matrix showing the types of internal and external content. You may not have considered how much internal content you have that informs what a customer sees or hears. The quality of your internal content can impact your customer’s experience just as much as the content they see or hear directly.

    Example of content across an organisation

    Improve your products

    Content experts are often left out of the product development process. When product naming conventions or feature names are defined, they often stick, and unfortunately, this means something that started out as an internal name or company jargon finds its way onto the website or app and then doesn’t resonate with customers. Involving content experts in this process means you’re always thinking about how the product or feature will be perceived in its final implementation and using the language of the end-users.

    Provide better customer conversations

    Other areas with neglected content are internal tools and call scripts. The systems and call guides that staff use inform the conversations they have with customers. Involving content experts in the development or optimisation of tools and processes can have a hugely positive impact on the service provided. Spotting opportunities to improve scripts or content in the UI of internal tools shouldn’t be left to chance — content designers are just as valuable on your internal-facing content.

    Check your feedback loop

    One way to make your own content stronger is to make sure the right people see user-generated content. This could be social media comments about service, live chat conversations or customer complaints. If digital content teams never know why people are phoning up because they’re stuck on the site, or customer service teams never find out what customers are complaining about on Twitter, how can they improve? Sharing the qualitative insights that sit behind commonly used data such as NPS scores can be far more useful and actionable than the scores themselves.

    Insight teams need to be connected to content providers and regularly sharing.

    Connecting your content

    It’s rare that a central content team would create all the content shown in this example matrix. It’s much more likely that content creators sit in pockets of the business, such as marketing, UX teams, customer experience, brand or customer comms.

    One way to better connect these teams is to think about content in terms of customer lifecycle, rather than by channel, and ensure that all the people responsible for each stage are connected, and talking to each other regularly. This way they can ensure their content and messaging is consistent.

    There are techniques the teams can use together, for example, journey mapping, which will help identify all the content elements and how they hang together, and identify any opportunities for improvement.

    Setting up content steering groups or holding regular content sharing sessions for all creators across the business are also both great ways to ensure alignment. But ideally, alignment should start at the top, with a joined-up strategy.

    Make it strategically-driven

    All content should be underpinned by a shared set of values, principles, voice, and terminology. For customers to trust a brand, they need to see or hear consistent and well-crafted content; any dip in quality or break in consistency will erode this trust.

    The best way to achieve this is to have an overarching content strategy. Not only is it key to business success, it will also give your teams direction and provide the frameworks they need to make their content creation efficient and effective.

    Who should steer the ship?

    To achieve connected content across an entire organisation you’ll need someone to map out the architecture, spot opportunities and highlight pain points. You’ll then need to establish guidelines, create and agree workflows, create shared lexicons, and all of the operational elements that allow teams to do their day-to-day work more effectively. This work can be done by senior content strategists, but to fully embed this across the business you’ll need sponsorship and endorsement from the top down.

    Even companies with Chief Content Officers aren’t focusing enough on their internal content problems — they tend to focus on the customer-facing content as people in these roles often come from marketing backgrounds. Perhaps this will change as service design thinking gains traction.

    In the meantime, content teams will rely on the advocacy of senior digital or customer experience leaders who fully understand the importance of content.

    There’s also often a disconnect between traditional content (editorial and marketing) and product teams. For a business to grow its content maturity it will need to recognise that content lives on the inside and the outside of a business. And far from being invisible to customers, it’s often the back-end content that can have a detrimental effect on the experience with a brand.

    Follow us on Twitter to join the conversation and share your take on the topic.

    This post was originally published on Medium

    ]]>
    Are you designing a product or a service? https://clearleft.com/posts/are-you-designing-a-product-or-a-service Fri, 18 Oct 2019 10:09:00 +0000 https://clearleft.com/posts/are-you-designing-a-product-or-a-service Traditionally, the distinction between a product and a service was relatively clear.

    While a product is a tangible thing that can be measured and counted, a service is less concrete and is the outcome of using skills and expertise to satisfy a need. However, the digital space has certainly blurred the lines between products and services, so it’s no longer sufficient to define a product as something you can “drop on your foot” (The Economist, 2010). In fact, it’s actually quite difficult to explain the difference without getting tied up in quite complex linguistic knots!

    In the digital space we talk a lot about products; many organisations have Product teams with Product Owners supported by Product Designers working towards their product strategy by progressing through their product backlog. There is a lot of debate about what makes a good product and how to deliver them, but what about the services that underpin them?


    So how do you determine if you are working on a product or a service? We created an interactive tool to help you do just that…

    Are you designing a product or is it really a service flow chart

    This model highlights that although you might position yourself as working on a digital product, more often than not it’s a vehicle for service provision. After all, people only want a product because it gives them an experience and an outcome.

    The model also highlights that in some cases the entire service is embodied in a digital product, think of the likes of Netflix, Uber or Spotify. In these cases, there is a massive opportunity for ‘Product Teams’ to influence the entire service experience, thinking not only of the end-to-end customer journey but also the front-stage and back-stage workings. This would involve exploring not only the customer interactions but also the operating model, the content workflow/approvals, the business model and even down to the governance structures.

    So my advice, therefore, is to recognise when your product embodies a service and push your remit to explore the entire design challenge.

    ]]>
    Defining the problem https://clearleft.com/posts/defining-the-problem Tue, 15 Oct 2019 09:32:00 +0000 https://clearleft.com/posts/defining-the-problem We’re 7 weeks into our three-month internship program and lots has happened. Once again we want to share the techniques and path we’ve taken to reach the double diamond midpoint.

    In our last blog we talked about our interviews with GPs, nurses and front line staff, and the use of How Might We’s to frame the design opportunities in four key problem areas. These challenges are experienced by Henfield Medical Centre but research shows us they are shared across primary care.

    Guerilla research

    To move forward with more in-depth research we narrowed our focus to two interrelated areas: self-care (how can self-care information and uptake be better integrated in the GP patient journey?) and the practice of triaging patients (could design improve the efficiency of triage?). Although we had important insight from the practice side, we wanted to speak to people in the community. We visited Henfield, using guerilla research techniques to gain fast insight. We were struck by the different understandings of self-care and attitudes towards the role of GPs.

    Synthesising the opportunities

    Back in the studio we hit a decision wall, and we thought it was time to elicit some advice from more experienced Clearlefties. Encouraged to diverge our thinking again, to check we had considered all the opportunity areas, we ran a How Might We rethinking session. We individually wrote HMW statements before theming these and dot voting. Here we met our first plot twist, bringing the topic of ‘reassurance’ (the reliance on GPs for sometimes unnecessary reassurance) back to the drawing board.

    We crafted problem statements in these target areas so we were clear on our users, user needs and impact of the problems. But we needed one problem statement to move forward with! With different views in the team we mapped out the opportunities in these areas, as well as the challenges we faced if designing for these. From this we used the ‘NUF’ technique, comparing our initial broad concepts for these problems in terms of how new, useful and feasible they are.

    From the research there were some clear personas of patients that we hadn’t yet mapped out on paper. So using our best friend the post it, we assessed what we considered relevant to understand in the context of this project and brought these groups of people to life against these metrics. These included typical self-care practices, attitudes to GP care and of course needs. It was important to clarify the different needs and goals of users before designing for one.

    Playback and next steps

    Nearing the end of discovery we wanted to bring the rest of the team into the project in greater depth, so we hosted a ‘brown bag’ lunch. Something we designers need to be able to do in the ‘real world’ is present our work to stakeholders. As such, we prepped as one would for a client playback, beginning by storyboarding the project journey so far. This helped us take the high-level view of the process before surfacing all the findings, methods, quotes, insights and learnings along the way that enrich the picture.

    All of this research has helped us move into the design phase! We’ve run a design studio with fellow colleagues, and ideation sessions with stakeholders. We realised we needed more data to be confident in our designs so alongside starting the idea development phase we are gathering more user insight via a survey. This is also giving us access to people to interview or test concepts on before moving to implementation.

    You can follow us on Twitter @clearleftintern for regular updates on the project.

    ]]>
    The rise of research ops — a view from the inside https://clearleft.com/posts/the-rise-of-research-ops-a-view-from-the-inside Thu, 10 Oct 2019 23:00:00 +0000 https://clearleft.com/posts/the-rise-of-research-ops-a-view-from-the-inside Earlier this week Clearleft hosted a lively morning of debate around ‘Accelerating Your Digital Design Maturity’ featuring leading industry voices from Tesco, Babylon Health, Sky, Twitter, Google Ventures and UCL.

    In front of an audience of 70 design leads, two panels explored the challenges and approaches their organisations have around getting closer to customer needs and delivering better products faster, or to use a couple of industry buzzwords: research ops and design ops. We wanted to share some key insights from the former.

    Kate Tarling discussing Research Ops in our first panel

    What is research ops?

    Research ops is the set of processes necessary for understanding customer needs and integrating research into an organisation’s decisions. It is the latter part that can be most challenging.

    It stands to reason that research ops should ensure that researchers have what they need to do their job. That might include the right software, labs and other tools, training and professional development. It should also include consistent methodologies for different forms of research, along with all the legal and administrative systems and paperwork required.

    But that’s a baseline. Where research ops really comes into its own is in establishing research across the organisation. According to Daniel Burka (Director of Product and Design, Resolve to Save Lives), research ops needs to “be both selfish and selfless” meaning it must be objective in its pursuit of insight and understanding, but then open and active in socialising the results.

    Design research vs. market research

    In many large organisations there is an insights team tasked with market research. This kind of research has been around for many years and has earned a mature place within companies. Conversely, design research is relatively new. Some of the techniques may be similar, but the purposes tend to be different, and practitioners in the two camps can have a tendency to look down on one another: design researchers dismiss market research’s focus groups, while the insights team don’t see why they should spend time watching usability testing.

    Of course both have their place, and the panel stressed the importance of the insights team joining design researchers to combine their efforts in understanding the desires of the market alongside the direct needs and difficulties of customers.

    Do we need specialist researchers?

    Unanimously, yes, there is definitely a role for specialised researchers. A trained researcher will be able to put together a programme of research and design sessions with users that are as unbiased as possible, non-judgemental and objective. But more than that, Tomasz Maslowski (Head of UX & Design, Tesco) pointed out that an experienced researcher “doesn’t just play the notes but hears the space between the notes.”

    So while it’s really helpful that designers, developers, product owners, executives and others are all taken into the field at various points, the skill of the specialist researcher is to distill what people say into what people mean. The untrained ear can be prone to confirmation bias, picking out the soundbites and opinions that support its own theory or position, or failing to take what is heard in context or proportion.

    Research should collaborate and communicate

    You could say that about any discipline, but Dan noted that if you put research among designers then they can hear where designers are unsure about decisions and bring that into their research.

    Dan points out that a potential problem with product design is assuming that users care about the product when what they really care about is what a product can do for them. Researchers can help expose this if they are working closely with designers, rather than the design or product team simply giving research a list of questions to get answers to. “They should be the team’s questions, not the researchers’ or the designers’ questions”. Quite often the more useful research questions lead not to answers but more questions.

    This is why research is far less effective when done in isolation. It’s time for researchers - and research ops - to “get scrappy” and think about how to get learnings into the wider team, and up to executive levels. Examples include running mandatory “customer closeness” sessions enabling all employees to see first-hand customers using products.

    Ultimately the panel concluded that research’s job is to mitigate risk and confirm (or deny) opportunity, and these are aspects that the whole organisation needs to understand and use.


    Many thanks to Daniel Burka (Director of Product and Design, Resolve to Save Lives), Tomasz Maslowski (Head of UX & Design, Tesco), James Stevens (Director of Group Product Design, Sky) and Kate Tarling (Digital and Design Leadership, Fly UX) for their generous time.


    In part two we’ll cover the second panel and ask: what is design ops, and should we care about it?

    If you’d like to discuss how we could help you mature your research function do get in touch.

    ]]>
    Critique your shortcut to better designs https://clearleft.com/posts/critique-your-shortcut-to-better-designs Fri, 04 Oct 2019 13:22:29 +0000 https://clearleft.com/posts/critique-your-shortcut-to-better-designs Want to create better designs? Interested in becoming a better designer? There are few shortcuts to better design but introducing regular structured critique to your design process is one of them.

    In my role as a UX consultant, I’m often helping clients improve the impact and efficiency of their design work. In reviewing how design is done I’m surprised that there is frequently an absence of routine critique sessions.

    The good news is that critique is an easy habit to adopt and develop. I’m going to give a few tips in this article to show it’s quick to do, rewarding to participate in, and will lead to immediate improvements in the quality of your design work.

    Critique of early concept ideas with the team from Virgin Atlantic

    Does the word critique make you cringe?

    When I ask design teams who don’t do critiques why it’s not a fixture in their working week, they tend to pull pained expressions. Digging deeper and asking what comes to mind with the word critique, the associations are exclusively negative.

    I commonly hear mentions of Statler and Waldorf, the cranky upper-balcony hecklers extraordinaire in The Muppets; Dorothy Parker and her acerbic poison pen; Simon Cowell and his judging panel of cronies; and Anton Ego, voiced so sardonically by Peter O’Toole in Pixar’s Ratatouille.

    Critique, when done properly, provides a safe space to get feedback from peers. If it doesn’t improve the designs or help you grow as a designer then you are doing it wrong.

    If you remember one thing: Critique ≠ Criticism.

    Why make design crits part of your practice?

    A good peer-review process acts as essential quality control. Getting feedback early and often enables you as a designer to benefit from the wisdom of your colleagues.

    It’s a tried and tested practice used in varying forms in other creative endeavours to help challenge, shape, and enhance work. When making movies, dailies (the previous day’s outputs) are watched by the cast and crew to improve their performances, actors receive notes from directors and producers in rehearsals and during the run of a performance, novelists receive comments on manuscripts as they move from draft to draft.

    In all these cases feedback is a baked-in part of the creative process. It starts early and continues through iterations from lo-fidelity to polished product.

    A design critique should be time spent well for both the person seeking feedback and those giving it.

    As a designer, the activity gives an opportunity to stress-test design ideas by seeing what questions it raises. It also improves design quality by gathering suggestions for enhancements and reduces risks by getting a sense check of the work with time to make any adjustments.

    Equally, for participants, giving feedback should be rewarding – as you get to see how your colleagues approach design problems and get to sharpen your critical thinking skills.

    At Clearleft, we’ll use the expertise of our colleagues when we are looking to get input on project work, devise new workshop activities and prepare conference talks etc. Anything you are creating will benefit from you having to explain your design decisions and from the feedback from a fresh pair of eyes with a new perspective.

    Critiques of work are better when done more often and earlier rather than less and later.

    Critique of a mobile prototype with the team from Cruise.co

    How do you run a productive design critique?

    It feels embarrassingly simple when written down. But that’s the point. Critique is simple to do if you bear in mind a couple of essential things:

    1 Set a time and place

    Invite the colleagues you feel can give you some considered feedback. A mix of designers, subject matter experts and insightful others. Aim for 3 to 5 people to provide a range of views while having enough time for everyone to be heard. Set aside 30 to 45 minutes as people will appreciate a calendar meeting that isn’t an hour long.

    Of course, this step is easier if you have critiques booked in as a regular ongoing ceremony.

    2 Facilitate the session

    Help the people you’ve invited to give you better feedback. Start by briefly giving some context on the problem you are trying to solve and any key insights or constraints that are useful for them to know. Then tell them what you are looking for feedback on. Keeping it targeted will help them focus and give you more actionable suggestions.

    Then show your designs. Sometimes it helps to do this with a commentary, walking people through the designs. Other times a timed silent gallery allows people to view the work and consider their responses. You want to foster an atmosphere that encourages considered advice rather than knee-jerk reactions.

    3 Get some feedback from your peers

    This is the point at which the tension seems to rise the first few times you do a critique. It’s helpful to set a few ground rules to make everyone feel more comfortable and make the session more productive. You want to create a space that allows people to be constructive rather than combative.

    Remind people why they are there: to use their expertise to help improve the design.

    Remind people to use language that questions or offers advice rather than dictates. Move from ‘you should …’ to ‘you might want to consider …’.

    Remind people to separate critique of the person from the product. My go-to phrase is ‘Be hard on ideas. Be kind on people’.

    To balance feedback I like to get everyone, in turn, to contribute one thing they like in the design that they feel meets the brief and then one thing they would suggest changing.

    I’m a fan of helping the attendees frame their feedback by starting their replies with:

    I like how … (to pull out a positive thing to keep), and Even better if … (to suggest something to reconsider or change).

    Once you’ve captured the feedback, and if time allows, dive into a discussion on the areas you want to explore in more detail. However, be careful not to start creating solutions on the spot. That’s not the purpose of the session, and it’s unfair on both the person giving the feedback and on you as the designer to expect an immediate fix. Questions in the session; solutions later.

    What are you waiting for?

    One of the key activities that leads to better-quality designed products and services is regular critique. When run well, critiques help designers articulate their work and canvass valuable insight and input from colleagues.

    For teams who don’t yet do critiques, what’s stopping you from putting one in the calendar to improve whatever you are working on now? After all, any habit needs to start with the first time.

    ]]>
    How to be a good speaker https://clearleft.com/posts/about-speaking Thu, 03 Oct 2019 13:43:00 +0000 https://clearleft.com/posts/about-speaking As I’m in the process of curating our UX London and Leading Design events, I watch around 200 conference talks a year. Here’s a quick checklist of things I find work well and things that work poorly.

    Don't worry about having a unique concept

    I see too many smart people put off public speaking because of this. It’s perfectly reasonable to take an existing concept and layer on your own perspective and experiences. This humanises the topic and makes it interesting. Also, remember that what’s obvious to you isn’t obvious to everybody. There are always new people joining the industry.

    Also, don’t feel you need to write a new talk each time. Like music or stand-up comedy, talks get better with practice. As a side note, I once saw the same talk 5 times over the course of 3 years. Each time I took something new away, because I was in a different place in my career.

    45 minutes is a looooong time to keep somebody's attention

    No matter how interesting the subject, a monotone delivery makes it hard for your audience to stay engaged. Use your voice (speed, pitch, volume etc.) and your body (gestures, the stage) as tools to keep things interesting. Try to minimise the “ums” and “ahs”. This comes with practice.

    Try to avoid the “speaker square dance”, where you shift your weight from one foot to the other. It’s distracting and makes you look nervous.

    (These are all things I still do, to my annoyance).

    Try to avoid “listicle talks” if possible. You know the type: here are 7 or 12 things I think are important, and I’m going to go through them one by one. It’s a handy formula, but it makes people conscious of time. “Crikey, they’re only at number 4”.

    A few things to avoid

    Try to avoid giving a big bio at the start of a talk. I know it’s a great way of you “justifying to the audience” why you’re on stage, but folks generally don’t care. Often it’s better to start right in the middle of a story, as it makes the audience tune in.

    Typical speaker jokes like “I’m the only thing between you and beer” can be risky, especially if you haven’t been listening to the other speakers and they’ve already said the same about lunch or coffee. I also worry about normalising the overconsumption of alcohol at conferences.

    Asking your audience to perform a task can be risky, especially in front of Brits and Northern Europeans, who would rather curl up into a ball and die than risk the social awkwardness of talking to their neighbours. However… once people get talking, it’s actually hard to get them to stop. If you do have an activity planned, make sure you leave enough time for it to be a meaningful connection.

    It’s super common to make jokes about finishing off your slides last night, or not getting to bed until late because you were out drinking. While it’s good to be vulnerable and human, if played wrong, the message this sends is that you don’t care about the audience.

    If you are going to present a concept or opinion-style talk, you’ll probably need to give some back story. Try to keep this short; I’ve seen plenty of talks with so much back story that they run out of time to cover the more interesting topics.

    The best talks have a really clear story arc

    Things that make sense when you read them in your head often don’t gel when read aloud. So make sure you practise your talk out loud half a dozen times to check it flows. A surprising number of “natural” speakers have a speaking coach.

    I think it’s usually better to assume a reasonable amount of audience knowledge. For instance, if in doubt, assume folks know what a design system is rather than spending 20 minutes explaining it. Better to spend that time talking about what you do differently.

    Nerves are natural

    If you’re a little shy at conferences, speaking is The Best way to break the ice. Nobody talks to you before the talk. Everybody wants to talk to you afterwards, largely because they have a way in. As such, public speaking is bizarrely good for introverts.

    Nerves are natural. Everybody gets them. Some of the best speakers I know are an absolute wreck before going on stage, swearing they’ll never speak again. Then they get up on stage, really enjoy the talk and can’t wait to do the next one.

    You can’t really banish nerves; all you can do is manage them. You may think the whole audience can tell that you’re nervous. Generally, nobody has a clue. You are your own worst critic. Remember, nerves are really just excitement, and excitement is a good performance enhancer.

    Be visible, be reliable

    Many conference organisers like to have seen people speak in advance, so if possible try to record your talks, even if they’re internal presentations or at local networking events.

    Organisers appreciate that you’re busy, but they also appreciate it if you respond to them in a timely manner. One of the reasons we see “the same old faces” is because those speakers are reliable.

    There are more conferences and events than ever before. As such there’s a huge demand for new voices, especially from underrepresented groups. I encourage you to take advantage of this opportunity and put yourselves forward.

    I’ve shared this thread and some follow-up resources on Twitter - please add your own.

    One last thing (I promise). Speaking is fun, and there’s a great sense of accomplishment in sharing what you’ve learnt to help other people.

    However, conference speaking isn’t necessary to advance your career. Some of the best, most successful people I know don’t do talks. Don’t feel pressured into speaking.

    ]]>
    How to measure content https://clearleft.com/posts/how-to-measure-content Mon, 30 Sep 2019 13:45:00 +0000 https://clearleft.com/posts/how-to-measure-content Good content is often seen as something that’s hard to measure. It’s a (sometimes small) part of a much broader system, and is closely intertwined with design. But sometimes, as content designers, we’re asked to demonstrate the effect of our content.

    If you’re changing content as part of a whole page design, then you won’t be able to test the effect of changing the content in isolation. But when you’re only making changes to content, with the right analytics and behavioural insights you should be able to measure the effect of that change.

    Visualising results will help you tell your story. Photo by Carlos Muza on Unsplash

    What you measure will be determined by the impact you’re trying to have. And the mistake people often make is not setting a target for their content up-front. Without a target metric or purpose in mind, it’s impossible to review your content to see whether it did what you wanted it to. Firstly because you don’t know what you wanted it to do, and secondly, because you didn’t work out upfront what you could measure, and how.

    To set good targets upfront, you need to know the problem you’re trying to solve. Once you know that you can think about how you’ll know when you’ve solved that problem. You’ll also need the capability to measure. Without access to data or analytics it becomes much harder (but not impossible) to measure effectiveness. Ideally you also need to combine quant data with more qualitative insights. Data will tell you what but not why.

    Here are a few ideas of how to measure content for different problems:

    1. Awareness

    In this instance you want people to be made aware of your product or service, or to draw a user’s attention to particularly important information. The key metric here is visibility — you need more people to see or find your content. The isolated content changes you could make to drive traffic include:

    • Navigation labels
    • Page headers
    • Meta-data
    • Clearer preceding call to action labels
    • Hyperlinks or related links
    • Clearer categorisation of help content (if your content is help-related)
    • Exposed or surfaced help
    • Social sharing links

    How will you know if it’s worked?

    Metric-driven measurement for this could include page-views, click-throughs, site traffic, social shares, fewer calls to your contact centre or live chats on a particular topic. If it’s navigation labels you’ve changed, you could also run tree tests.

    For more qualitative data you could run some basic guerilla research — giving participants a task to find a particular piece of information. You could also ask your customer call centres to anecdotally record whether related queries or complaints have increased or decreased after a set period of time. Depending on the volume of traffic your site receives it may take a while to see strong results this way.

    2. Engagement

    Better engagement isn’t just about time spent on a page — there are many other ways we can measure it. Often when we want to increase engagement it’s because we want to keep people onsite for longer, so they’re more likely to buy or stay loyal to a brand. But sometimes we just need to make sure they’re reading what we want them to read because it’s important, or we want them to interact with something or provide information. Changes to content might include:

    • Product or pricing information
    • Help content
    • T&Cs or legal information
    • Forms
    • Article content
    • Proposition messaging
    • Videos
    • Social media posts

    How will you know if it’s worked?

    Metrics you might look at could include page dwell-times, video views, likes or shares, page bounce rates (you would be hoping these decrease). You should also look at how many users are clicking through to other parts of your site (vs leaving the site), form completions, or data or comment submissions.

    Qualitative insights are always useful, even when you do have data, to understand user behaviour. Consider usability testing, or reviewing heatmaps, which will help you see which bits of content users are spending the most time on. But use heatmaps with caution as they don’t often tell the full story and not all analytics tools let users know they’re being monitored.

    3. Comprehension

    It’s all very well driving more traffic and engagement, but if users don’t understand what they’re reading, then they won’t understand what they’re buying or signing up to either. Confusion can lead to a loss of custom, complaints, and negative sentiment.

    Content changes can be tested on many parts of your site to improve understanding, such as:

    • Product or pricing information
    • Help content
    • Legal wording and T&Cs
    • Special offer wording
    • Questions in forms
    • Service emails

    How will you know if it’s worked?

    From a data point of view you could monitor calls to your call centre, live-chat starts, on-page feedback submissions and click-throughs to help. You’d expect these to decrease. You might also be able to look at brand sentiment through social media channels or complaints data.

    Testing comprehension through usability testing is great. Set participants a task, then once they’ve completed it, ask questions to see what they understood from the content you’re testing. Cloze tests or highlighter tests are also a basic way to test comprehension.

    A/B tests are a great way to test content changes (image courtesy of FezBot2000 on Unsplash)

    4. Conversion

    Copy changes can be one of the most effective ways to increase conversion. Here are some of the things you might want to test or optimise:

    • Call to action labels
    • Marketing proposition messaging
    • Price and product information
    • Help copy in forms
    • T&C wording

    Help copy in forms and T&Cs are often neglected, but if a user has anxiety about not knowing what’s expected of them, or what they are signing up to, they can drop out at the final hurdle. Making sure your copy is clear and honest is the best way to build trust and improve conversion.

    How will you know if it’s worked?

    Click-throughs, page progression, quotes or leads, sales, renewals, and newsletter sign-ups are all quite standard metrics to use depending on the copy you’re changing.

    Qualitative insights are much more useful for seeing exactly which copy is causing a user to drop out of a journey. For example, on a web form, identifying the field or piece of copy that’s causing users to panic or feel frustrated and leave the page will give you the insight you need to iterate on that copy. This can be done by watching users interact with the site, as well as by looking at tools such as heatmaps.

    Conversion is often driven by a combination of factors, but usability, comprehension and task completion are important elements, and working on these metrics can have a positive impact on conversion rates too.

    To find out why page 4 was seeing such comparatively high drop-outs you’d need to get more insight

    5. Task completion

    Getting someone through their task as quickly as possible can be a key goal for many sites that want to ensure an efficient and effective user experience. The balance you must find is between speed and comprehension. Some experiences shouldn’t be fast; for example, purchasing insurance too quickly could leave users wary about whether everything is covered. If you’re aiming for speed, your content needs to be super-clear and transparent. The other way to measure task completion is simply whether someone could complete their task or not. Content you might test for task completion could be:

    • Online self-service (for example amending account details)
    • Purchase or renewal journey
    • Navigation labels to help users find information (for example help content or a phone number)

    How will you know if it’s worked?

    If you’re looking at completion time, as with all metrics, you’ll need a benchmark to start from, which means you’ll need to have gained some ‘average time to task’ metrics through user-testing. This gives you a way to assess whether your content change has improved this time. Some great work has been done by the GDS team on benchmarking.
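    To make the benchmark comparison concrete, here’s a minimal sketch (the helper and the timings are hypothetical, not real study data) of comparing a benchmarked ‘average time to task’ against timings gathered after a content change:

    ```javascript
    // Sketch: compare a benchmark 'average time to task' against timings
    // collected after the content change. All numbers are hypothetical, in seconds.
    function averageTime(times) {
      return times.reduce((sum, t) => sum + t, 0) / times.length;
    }

    const benchmark = [95, 110, 102, 98];   // usability sessions before the change
    const afterChange = [80, 85, 90, 78];   // usability sessions after the change

    console.log(averageTime(benchmark));    // 101.25
    console.log(averageTime(afterChange));  // 83.25
    ```

    A lower post-change average suggests the new content helped, though with small usability samples you should treat the comparison as directional rather than statistically conclusive.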

    If you don’t have metrics to start from, then usability testing is your best way to test, but you’ll be looking at whether the participants are able to complete the task at all, which arguably you could learn from site metrics.

    You might be able to A/B test different versions of the same copy element (so some of your users see one version and the rest see the other). If you can’t run such tests, you might just have to make a change, monitor the results over a set period of time, then revert or iterate if you don’t get your required outcome.
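    If you do run an A/B test, each user should see the same variant consistently across visits. A minimal sketch of deterministic variant assignment (the hashing helper is hypothetical and not tied to any particular testing tool):

    ```javascript
    // Sketch: deterministically bucket a visitor into copy variant A or B by
    // hashing their ID, so a given user always sees the same version.
    function assignVariant(userId) {
      let hash = 0;
      for (const char of String(userId)) {
        // Simple multiplicative hash, kept in unsigned 32-bit range
        hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
      }
      return hash % 2 === 0 ? 'A' : 'B';
    }

    const variant = assignVariant('user-123');
    console.log(variant); // always the same value ('A' or 'B') for this user
    ```

    Real A/B tooling adds traffic splitting, exposure logging and significance testing on top, but stable assignment is the piece that stops one user seeing both versions of the copy.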

    One of the great things about content is that it’s often a relatively easy and cheap way to make small changes but see big results. If you do see big results, then share them widely. To create content advocates, you need to demonstrate the value — so create reports, dashboards, or a ‘successful content tests’ Workplace channel…whatever it takes to tell your story.

    This post was originally published on Medium.

    ]]>
    Geneva Copenhagen Amsterdam https://clearleft.com/posts/geneva-copenhagen-amsterdam Tue, 24 Sep 2019 17:16:50 +0000 https://clearleft.com/posts/geneva-copenhagen-amsterdam Back in the late 2000s, I used to go to Copenhagen every year for an event called Reboot. It was a fun, eclectic mix of talks and discussions, but alas, the last one was over a decade ago.

    It was organised by Thomas Madsen-Mygdal. I hadn’t seen Thomas in years, but then, earlier this year, our paths crossed when I was back at CERN for the 30th anniversary of the web. He got a real kick out of the browser recreation project I was part of.

    A few months ago, I got an email from Thomas about the new event he’s running in Copenhagen called Techfestival. He was wondering if there was some way of making the WorldWideWeb project part of the event. We ended up settling on having a stand—a modern computer running a modern web browser running a recreation of the first ever web browser from almost three decades ago.

    So I showed up at Techfestival and found that the computer had been set up in a Shoreditchian shipping container. I wasn’t exactly sure what I was supposed to do, so I just hung around nearby until someone wandering by would pause and start tentatively approaching the stand.

    If you’re at Techfestival.co in Copenhagen, drop in to this shipping container where I’ll be demoing WorldWideWeb.cern.ch

    “Would you like to try the time machine?” I asked. Nobody refused the offer. I explained that they were looking at a recreation of the world’s first web browser, and then showed them how they could enter a URL to see how the oldest web browser would render a modern website.

    Lots of people entered facebook.com or google.com, but some people had their own websites, either personal or for their business. They enjoyed seeing how well (or not) their pages held up. They’d take photos of the screen.

    People asked lots of questions, which I really enjoyed answering. After a while, I was able to spot the themes that came up frequently. Some people were confusing the origin story of the internet with the origin story of the web, so I was more than happy to go into detail on either or both.

    The experience helped me clarify in my own mind what was exciting and interesting about the birth of the web—how much has changed, and how much has stayed the same.

    All of this is very useful fodder for a conference talk I’m putting together. This will be a joint talk with Remy at the Fronteers conference in Amsterdam in a couple of weeks. We’re calling the talk How We Built the World Wide Web in Five Days:

    The World Wide Web turned 30 years old this year. To mark the occasion, a motley group of web nerds gathered at CERN, the birthplace of the web, to build a time machine. The first ever web browser was, confusingly, called WorldWideWeb. What if we could recreate the experience of using it …but within a modern browser! Join (Je)Remy on a journey through time and space and code as they excavate the foundations of Tim Berners-Lee’s gloriously ambitious and hacky hypertext system that went on to conquer the world.

    Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.

    We’ve been honing the material and doing some run-throughs at the Clearleft HQ at 68 Middle Street this week. The talk has a somewhat unusual structure with two converging timelines. I think it’s going to work really well, but I won’t know until we actually deliver the talk in Amsterdam. I’m excited—and a bit nervous—about it.

    Whether it’s in a shipping container in Copenhagen or on a stage in Amsterdam, I’m starting to realise just how much I enjoy talking about web history.

    This was originally published on my own site.

    ]]>
    The discovery phase of our healthcare brief https://clearleft.com/posts/the-discovery-phase-of-our-healthcare-brief Mon, 23 Sep 2019 10:32:00 +0000 https://clearleft.com/posts/the-discovery-phase-of-our-healthcare-brief As we introduced in our first blog post, our brief is focused on Primary care and the ambition to ‘Do more with Less’, whether that’s enhancing the network of primary caregivers, improving efficiency, and/or reducing the load on clinicians and staff.

    It’s a huge problem area for us to understand and we want to share what we’ve been up to so far.

    We have Clearleft at our disposal, whether via meetings, playbacks, our ‘open surgeries’ or over granola and coffee in the morning, however, we have the autonomy to shape our project to some degree.

    Discovery playback session with our Clearleft stakeholders

    Amongst many other things, our fantastic project manager Alison keeps us moving forward at the pace we need. We agreed to split the 13-week project into three distinct phases: Discovery, Design and Implementation, using the Double Diamond to guide our work through these phases. With a complex problem space, we planned for a slightly longer discovery phase, aiming to define our focus over five weeks.

    Our double diamond

    Kicking off discovery

    The brief touched on some of the difficulties experienced by our stakeholder GP surgery; these included rural transport, continuity of care and the provision of out-of-hours services, amongst many others. This helped us avoid going into the project cold, but we were keen to meet with doctors face to face and delve into their day-to-day life as GPs, as well as introduce ourselves.

    We wanted to gauge what the most limiting challenges are. We were interested in who they partner with, the impact of third party innovations in healthcare, and how they operate as part of the newly formed Primary Care Network of Chanctobury. Not only did we listen and learn, but the meeting generated some empathy for the varied challenges in Primary care.

    Narrowing down

    An affinity mapping exercise back at the office helped us see the themes in what we learned, and helped us mentally get some clarity amongst a sea of new acronyms. We also began building an experience map of people and Primary care journeys.

    There were a number of challenges we started to uncover.

    • Transport of people in rural communities to other regional services, especially older people, is pen-and-paper based and lacks an easy way to organise volunteer drivers.
    • Unsurprisingly, IT capabilities and integration underpin many inefficiencies.
    • Staffing issues, especially turnover and the training of new front-line staff, put pressure on resources and service.
    • New groupings of surgeries, the Chanctonbury PCN being no different, are only just starting to embed ways of working together.
    • Demands on GP surgeries for problems that could be treated at home or with over-the-counter remedies.
    • Raising awareness amongst patients to direct them to out-of-hours services at different local practices.
    • Mental health in rural communities and the impact of isolation.
    An experience map of post-its on brown paper
    Experience map

    We wanted to talk further with a wider set of staff. We were very grateful to have the opportunity to return and meet with front line staff, nurses, doctors and the practice manager to talk more around our targeted areas.

    We planned the lines of enquiry while allowing for a fluid conversation, and used the app Otter to auto-transcribe the 2.5 hour session for us to refer back to. Initially planned as separate interviews, on the day staff joined us when they were able to step out, so the session naturally became a more informal group interview and discussion.

    At Henfield medical centre, posing our research questions and facilitating discussion around the problem space

    Exploring the problem space using 'How might we' statements

    Compiling our insights, we were able to create a first round of ‘How might we’ statements to help identify problem areas that might be helped through design in the time frame we have.

    ‘How Might We’ (HMW) statements are useful when wanting to frame the broad areas of challenge that you might want to design for. Very deliberate in its wording, ‘How’ reminds us that we don’t know how this can currently be addressed, helping create a sentiment of curiosity and openness to the challenge. ‘Might’ invites the team to consider lots of different ways to come to a solution, which may or may not be used. ‘We’ reminds us of the collaborative nature of most design projects. It provides gentle guidance for the team without restricting too early. These were our first round:

    Self-care

    How might we improve self-care rates in rural areas to reduce demand on GP and front line staff time in triaging?

    Partnerships

    How might we help the surgery and newly formed PCN work with partners to manage community health?

    Triaging system

    How might we support or scale the Henfield Triage process?

    Awareness and communications

    How might we help the surgery and PCN better work with partners to manage patients’

    Isolation

    How might we help the surgery better communicate with the local community?

    Learn more in our first Vlog:

    Next steps

    We’ve gained incredible new perspectives from experienced Clearlefties in project playback sessions and ‘open surgery’ sessions. We will be going back to our pilot area Henfield, to talk to local users about their experiences and perspectives of GP health care, utilising some of Clearleft’s Guerilla research techniques.

    You can follow us on Twitter here to help and follow our project. We’d love the support of the design and research community.

    ]]>
    Tiny lesson: rapid builds, email signatures and Airtable https://clearleft.com/posts/rapid-builds Mon, 23 Sep 2019 08:00:00 +0000 https://clearleft.com/posts/rapid-builds We’re fortunate enough to have some rather snazzy email signatures, kindly created by Benjamin. He’s been lovingly crafting these by hand; diligently updating them each time an event concludes or a new Clearleftie joins.

    This seemed like a fun and helpful task to automate, and after a morning of hackery, I had a working version of the signature generator deployed and ready for an internal test.

    There are a couple of interesting technical decisions which we’ll delve into, but before that, let’s talk about building at pace more generally.

    Quick web things, quick web wins

    One of my passions is rapidly testing ideas on the web. Development pace is one of the biggest advantages the web has over native applications. An HTML file, a domain, some server space and a couple of hours are all you need to get something online.

    It’s worth saying upfront that rapid builds are definitely not for everything. In fact, they should be used sparingly, for prototypes or side projects that aren’t business-critical. Like the good, fast, cheap Venn diagram, quick builds are inherently flawed, but they still have merit.

    I really love a thorough plan and technical spec, but there’s something wonderful about rapidly spiking a problem and not worrying too much about the details. It’s how Sergey, JS Pedalboard, Javasnack and several marginally popular F1 parody websites came about.

    The stack isn't really important

    Spikes are a great way to try a new technology and learn by making mistakes. The only pre-requisite I’d suggest is to use a stack that you can deploy easily. There’s nothing worse than getting something working locally, then finding you can’t host it without an AWS degree or pricey hosting infrastructure.

    In the past, PHP was my jam. It’s still so much easier to host than Node.js, and you can build quickly without getting too bogged down in implementation details. These days I’m tending to lean on static site generators like Hugo and Sergey (shameless plug), before deploying to Netlify.

    For this project, I opted for Preact and a small vanilla Node build script. Preact CLI boilerplated the site very quickly, and after a few minutes of stripping back the extra cruft, I had an ES6, reactive and hot-loaded development environment ready. I then ran a quick ‘hello, world’ deploy to confirm it would all build on Netlify.

    With that in place, it was time to actually build the darn thing.

    From the very rough plan in my head, it appeared there were two main parts to this project:

    1. The template generator - Preact
    2. The data source - JSON & Airtable

    Generating code with code

    Email signatures are notoriously awful to code, but fortunately Benjamin had done the hard work already! I grabbed an existing signature and converted it into a little method that squirted the parts of a ‘person’ in, mixed it with some sensible defaults, and returned some HTML (well, JSX).

    const person = {
      forename: 'Trys',
      surname: 'Mudford',
      team_name: 'trys-mudford',
      avatar_name: 'trys-mudford-small',
      role: 'Front end developer'
    };
    
    renderSignature = person => (
      <div style="min-height:50px;line-height:17px;color:#505050;min-width:350px;font-family: Arial, sans-serif; font-size: 10pt; line-height: 1.5;">
        <p>{person.forename}</p>
        <p>---</p>
        <a href={data.defaults.team_url + person.team_name}>
          <img
            style="float:left;margin:2px 6px 32px 0;width:90px"
            src={data.defaults.avatar_url + person.avatar_name + '.png'}
            alt={person.forename + ' Profile Pic'}
          />
        </a>
        <p>
          <strong>{person.forename + ' ' + person.surname}</strong>
          <br />
          {person.role} |{' '}
          <a
            style="color:#006ff5;text-decoration:none;font-weight:700;border-bottom:1px"
            href="https://clearleft.com/"
          >
            Clearleft
          </a>
          <br />
          <a
            style="color:#505050;text-decoration:none"
            href={data.defaults.phone_url}
          >
            {data.defaults.phone_text}
          </a>
        </p>
      </div>
    );

    The next step was to import the list of staff from a JSON file, pick out a selected team member and render the above template. Thanks to JS imports, this was nice and clean to achieve:

    import { h, Component } from 'preact';
    import data from './data';
    
    class App extends Component {
      render() {
        const person = data.team.find(x => x.team_name === 'trys-mudford');
    
        return (
          <div>
            {person && (
              <section class="person">{this.renderSignature(person)}</section>
            )}
          </div>
        );
      }
    }

    Next I moved the hard-coded user identifier up into state, and added a <select> field to control it.

    state = {
      teamName: ''
    };
    
    setTeamName = event => {
      this.setState({ teamName: event.target.value });
    };
    
    render() {
      return (
        <form>
          <label for="who" class="screen-reader-only">
            Pick a team member
          </label>
          <select
            id="who"
            value={this.state.teamName}
            onChange={this.setTeamName}
          >
            <option value="">Who are you?</option>
            {data.team.map(person => (
              <option value={person.team_name}>
                {person.forename} {person.surname}
              </option>
            ))}
          </select>
        </form>
      )
    }

    Finally, I added a touch of state restoration with the help of localStorage. When a user returns to the site for a second time, their previous staff choice gets prefilled, saving one click. The goal of this site is to save us time, so this feature is, although by no means essential, surprisingly useful.

    const STORAGE_NAME = 'signatureTeamName';
    
    state = {
      teamName: localStorage.getItem(STORAGE_NAME) || ''
    };
    
    setTeamName = event => {
      this.setState({ teamName: event.target.value }, () => {
        localStorage.setItem(STORAGE_NAME, this.state.teamName);
      });
    };

    With that, plus a bit of styling, the frontend was complete.

    Airtable API

    The above was achieved with a static JSON file, which was super rapid to build with. As an MVP, this all works and could genuinely be used in production - there’s no shame in avoiding databases altogether. But part of the fun in rapid building is trying new things out.

    Airtable is like Excel on steroids - and a spreadsheet seemed like the most straightforward way to get data into this system without getting tied up in databases and servers. I considered Google Sheets, but their API authentication was too cumbersome, so Airtable won the day. As I said, pick tools that deploy easily!

    Once I had an API key, I created a file called fetch.js and ran node fetch.js in the terminal. This runs whatever JS is in the file - like a Bash script for those of us who don’t know Bash. Data fetching in Node is still less than ideal, but I’ve got a handy little method that converts the built-in https library into a promise:

    const https = require('https');
    
    /**
     * Generic HTTP Get request promisified
     * @param {string} url - the API endpoint
     * @returns {Promise<Object>} - the response
     */
    function get(url) {
      return new Promise((resolve, reject) => {
        https
          .get(url, res => {
            let data = '';
            res.on('data', chunk => (data += chunk));
            res.on('end', () => resolve(JSON.parse(data)));
          })
          .on('error', err => reject(err));
      });
    }

    The response from Airtable is an object with a records array. Each record is a row in the spreadsheet which in turn has a fields property. Each item in this object is keyed to the name of the column, and represents a cell.
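As a sketch, a response might look something like this (the record ID and field values below are made up for illustration):

```javascript
// Illustrative Airtable list response; the record ID and values
// are invented for this sketch.
const response = {
  records: [
    {
      id: 'recXXXXXXXXXXXXXX',
      fields: {
        'First Name': 'Trys',
        Surname: 'Mudford',
        Role: 'Developer'
      }
    }
  ]
};

// Each record is a row; each key in `fields` is a column heading.
const firstRow = response.records[0].fields;
console.log(firstRow['First Name']); // Trys
```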

    I started out writing some fairly dodgy but working™ code to take the rows, loop them and add them to a new array. That array was then converted into a new JSON file ready to be consumed by the Preact application. Once I’d confirmed that was all working, I refactored a bit and ended up with this:

    /**
     * Parse Airtable response, running through a transform callback function
     * @param {string} url - the Airtable endpoint
     * @param {transform} transform - the transform function to run through
     * @returns {Promise<AirtableRecord[]>} - an array of records
     */
    function fetchFromAirTable(url, transform) {
      return get(url)
        .then(res => res.records
          .filter(x => Object.keys(x.fields).length)
          .map(transform)
        );
    }
    
    function fetchStaff() {
      return fetchFromAirTable(
        `https://api.airtable.com/v0/${SPREADSHEET}/Staff?maxRecords=40&view=Grid%20view&api_key=${KEY}`,
        record => ({
          forename: record.fields['First Name'],
          surname: record.fields.Surname,
          team_name: record.fields['Team Name'],
          avatar_name: record.fields['Avatar Name'],
          role: record.fields.Role
        })
      );
    }
    
    // fs is needed to write the output JSON
    const fs = require('fs');
    
    (() => {
      console.log('Fetching data from Airtable...');
    
      return Promise.all([fetchStaff()])
        .then(([team]) => {
          let data = JSON.stringify({
            team,
            defaults
          });
          console.log('Writing results...');
          fs.writeFileSync('src/data/index.json', data);
          console.log('Build complete');
        })
        .catch(err => {
          throw new Error(`Fetching failed: ${err.message}`);
        });
    })();

    Instead of pushing the transformed data into a new array, I relied on the wonderful Array methods we have available in JS. Using .map() with a callback worked as a really neat way to extract the data transformation out into the calling function, simultaneously keeping the data fetching code nice and generic.

    Scaling this to work with ‘events’ as well as ‘staff’ was a case of creating a new method, adding the appropriate API URL, and writing a new transform function.
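For instance, an events version might look something like the sketch below. The column names ('Event Name', 'Date', 'Location') are my guesses for illustration, and it reuses the generic fetchFromAirTable helper from the post:

```javascript
// Hypothetical transform for an 'Events' table. The field names here
// are assumptions - swap in whatever your spreadsheet columns are called.
const transformEvent = record => ({
  name: record.fields['Event Name'],
  date: record.fields.Date,
  location: record.fields.Location
});

function fetchEvents() {
  // Reuses the generic fetchFromAirTable helper defined earlier,
  // along with the same SPREADSHEET and KEY constants.
  return fetchFromAirTable(
    `https://api.airtable.com/v0/${SPREADSHEET}/Events?maxRecords=40&view=Grid%20view&api_key=${KEY}`,
    transformEvent
  );
}
```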

    Build time scraping

    One option would’ve been to hit the Airtable API directly on the client-side. This has the advantage of always being up to date, but has a few downsides:

    • Additional point of failure on the live site
    • Dependent on Airtable keeping their API format consistent
    • Rate limiting & pricing considerations
    • ‘Dangers’ of exposing all the spreadsheet data
    • Definite dangers of exposing API keys
    • CORS hell

    The approach I took was to fetch the data once at build time, create a new JSON file and read that in to Preact. It’s the same technique I used on the 2018 incarnation of Paul the Octopus.

    The biggest benefit of this approach is how well it fails. If Airtable changes their API design/auth, someone drastically updates the spreadsheet format, or the sky falls in, new releases will simply not build and the current release will continue to stay live. I’ll get an email alerting me to the failed build, and I can investigate in my own time.

    Hiding secrets

    With any quick build, you need to decide what’s worth optimising and what’ll ‘do’ for the MVP. If you get bogged down optimising prematurely, you’ll never ship anything. If you cut too many corners, the product will be unsalvageable. The trick is to avoid painting oneself into a corner.

    Environment variables are one of those things that are worth setting up early doors. They’re not exactly exciting, but retrospectively adding them is even less fun. Plus, the very act of adding them to a project forces you to consider how the site will be deployed.

    Hard coding secrets into a repository isn’t a hugely clever idea, so it’s good practice to create an .env file, pull in the dotenv module, and rely on environment variables from the start.

    Come up with a plan

    You don’t have to fly totally blind with projects like this. It’s worth coming up with a small plan, even if it’s only in your head. For this project, the plan looked a bit like:

    • Decide on a stack
    • Bootstrap the site
    • Render a plain HTML signature
    • Make a template to render a signature from an object
    • Extract user details & defaults into a JSON file
    • Fetch something from Airtable
    • Save that thing as JSON
    • Trigger the fetch at build time

    If you have a reasonably big idea in mind, it’s worth breaking it down into smaller features first. This ‘backlog prioritisation’ exercise might sound pretty formal for a single day build, but I find it helps me stay focused. I quite like GitHub projects & Trello for this task - I’ll make ‘MVP, nice to have, backlog, in progress, done’ columns and divide the features accordingly. The MoSCoW method is a decent alternative approach.

    Guessed requirements

    The final thing I wanted to touch on was guessed requirements. It’s an inevitability that the thing you build will have some rough edges and won’t work perfectly first time out. But that’s okay, you’re not building a business-critical system, you’re building a ✨ fun web thing.

    With this project, I got a bit carried away and added a ‘copy the code’ feature. It ran the renderSignature method through Preact’s render to string library, before copying it to your clipboard with execCommand. There were some interesting success/error states to consider and I had to use refs to select DOM nodes within the application.

    The only problem was, the feature wasn’t needed.

    Gmail and Apple mail both work from the default browser selection and clipboard, and don’t allow you to paste HTML. So the feature was swiftly removed. It could’ve been avoided with some basic specifications, but it also wasn’t a big deal. The feature took about 30 minutes to add, and was a nice problem to solve.

    The fact that it didn’t make it to launch matters little, it was still useful to learn and code, even if I was the only beneficiary.

    This was originally posted on my website.

    ]]>
    Applying a system mindset at both a service and product level https://clearleft.com/posts/applying-a-system-mindset-at-both-a-service-and-product-level Thu, 19 Sep 2019 09:10:44 +0000 https://clearleft.com/posts/applying-a-system-mindset-at-both-a-service-and-product-level Over my career, I’ve applied my design skills in a number of different realms.

    I’ve been called an ‘Ergonomist’ designing a national transport system, I’ve been called a ‘Service Designer’ designing an end-to-end airport passenger experience and more recently at Clearleft I’ve been called a ‘UX Designer’ designing a relationship management product for a large investment bank. However, the one thing that has stood me in good stead throughout my whole career is taking a systems approach to all problems - no matter whether they are at a service or product level.

    I'm not a fan of the phrase 'digital service design'

    The benefits of taking a systems approach at a service design level are well understood. The whole premise behind service design is that an experience should be designed across all customer touchpoints - including understanding how the tangible and intangible components influence one another within the whole system. It’s one of the reasons the new term of ‘digital service design’ has never sat well with me as, unless the service is 100% digital, dividing the service design between the digital and non-digital parts defeats the object.

    At a product level, your brief is typically more focused on an individual/subset of product(s) or service ’touchpoints’. However, it’s absolutely critical not to lose sight of where this component sits within the broader service strategy - what is your role, what is your relationship with other touchpoints and how might your design decisions influence other areas of the service.

    Ultimately, the experience is defined by all touch-points, not just the ones you are focused on.

    The risk of not maintaining a systems mindset

    In my experience, not maintaining this systems mindset at all levels is one of the reasons it’s all too easy for the original service strategy to get lost at execution or take on a life of its own. One of my favourite quotes of all time is this one by Steve Jobs, which highlights that the journey from idea to product is rarely a straight one:

    You know, one of the things that really hurt Apple was after I left John Sculley got a very serious disease. It’s the disease of thinking that a really great idea is 90% of the work. And if you just tell all these other people ‘here’s this great idea’ then of course they can go off and make it happen. And the problem with that is that there’s just a tremendous amount of craftsmanship in between a great idea and a great product. And as you evolve that great idea, it changes and grows. It never comes out like it starts because you learn a lot more as you get into the subtleties of it. And you also find there are tremendous tradeoffs that you have to make.

    Steve Jobs

    Successful innovation relies on both service and product-level design working hand-in-hand. Importantly, this is a two-way relationship that requires everyone to understand that, executed well, the whole can actually be greater than the sum of the parts.

    ]]>
    A new way to talk about content strategy https://clearleft.com/posts/a-new-way-to-talk-about-content-strategy Fri, 13 Sep 2019 09:03:00 +0000 https://clearleft.com/posts/a-new-way-to-talk-about-content-strategy I don’t know about you, but I need methodology and theory to be really simple, and also visual. ‘Draw me a diagram’ is often my go-to line. I find it very hard to make the abstract tangible and practical without clear diagrams and examples.

    For this reason I sometimes struggle to articulate content strategy to clients and colleagues. It’s essentially the substance, tools, people and process that get you to your outcomes — or how to focus your efforts to achieve your goals. And that’s important, because without focus you’re just building ‘stuff’ without purpose.

    But when it comes to breaking that down into tangible things that a business can do, I just couldn’t find a diagram that said what I wanted it to. So I’ve created my own.

    Content strategy as a process

    It made sense for me to think of a strategy in the form of a process – the steps to get from here to success. So the steps I’ve come up with are:

    1. Focus

    What are you trying to achieve as a business, and what do your users need? Thinking about your user goals in terms of business value can help to align these.

    For example, if customers are seeking more information on a particular topic, by providing that info you can keep customers onsite for longer and increase the chances of them buying. You meet their needs and by doing so, sell more.

    While the overall goal you are trying to achieve through content is probably sales or loyalty, think about the more granular metrics that contribute towards this and might be more relevant to user needs.

    Once you’ve drawn out the areas to focus on, you can define a content mission statement.

    2. Foundations

    In order to create effective content you need to lay the foundations. This consists of:

    • your values (which are aligned to your content mission)
    • voice and tone
    • the substance and structure of your content, ie. WHAT will you be creating or refining?

    Once you know what you need to create, it’s easier to work out how to get there.

    3. People

    It goes without saying you’ll need someone to create or refine your content. Perhaps multiple teams and disciplines need to be involved? Setting out the roles and responsibilities is particularly important when you don’t have a defined content team. Who needs to provide product information, and who has overall accountability for the content once it’s live? No accountability isn’t just dangerous from a quality point of view. If no one’s reviewing the existing content once it’s live then you’re accumulating content debt.

    4. Process

    The next thing to focus on is your production and build process. This could be very simple if you’re a team of one or embedded in a product team. But if you’re in a large, fragmented organisation it’ll be more complex. Creating briefing templates or setting SLAs (service level agreements) might even be necessary if you’re managing a vast number of stakeholder requests. The great thing about defining metrics and KPIs is that you now have criteria to prioritise content against. If it’s not contributing to business goals or metrics then is it really a priority, or do you need to include other objective options in your brief such as ‘legal requirement’?

    If you have a CMS it’s best practice to document the workflow and list out creators, editors etc. Even if this is just to keep track of who has access. You’ll need to make sure there’s a process for removing users when they leave the business or adding new users when they join too.

    Under process I’d also include style guides and QA checklists. Part of any content production is ensuring it’s governed in such a way that whoever produces it achieves a high quality and consistent piece of content. When multiple content creators exist you’ll need guidelines to make sure this happens.

    5. Measurement

    Much like the agile process of test and learn, we must make sure we’re tracking against our targets. Whether this is through analytics and data, or more qualitative feedback such as usability testing, we need to revisit what’s gone live. In the case of a large website with multiple content producers, it’s advisable to review the content at regular intervals and check it’s still accurate and fit for purpose.

    6. Maintenance

    Once our content is live our work isn’t done. Iterating content, testing new versions (through AB testing) and optimising for usability isn’t just advised, it’s essential if we want our site to be the best it can be. We all like to think that when we hit publish that it’s the best work we’ve ever done. But the chances are that looking at your site with a critical eye will highlight lots of room for improvement.

    Content strategies are only achievable with buy-in from senior stakeholders, which is sometimes tricky. My top tip is to include them in the strategy creation process. Start with stakeholder interviews to understand what they think the company should be trying to achieve through content, and bring them into any workshops you run. A recent presentation by Gather Content confirmed that the appetite for better content is there. But better content doesn’t just happen — it starts with a better strategy.

    This post was originally published on Medium

    ]]>
    Tiny Lesson: 3 useful Figma features https://clearleft.com/posts/tiny-lesson-3-useful-figma-features Thu, 12 Sep 2019 10:47:00 +0000 https://clearleft.com/posts/tiny-lesson-3-useful-figma-features I recently started designing and prototyping in Figma and I want to quickly show you three useful features I’ve discovered. Those are smart selection, colour styles, and prototype master connections.

    Smart selection

    When you select elements which are evenly spaced, Figma recognises that these are related and allows you to adjust the spacing for all of them at once. You can also swap the position of these elements without disturbing the spacing. You can read more about smart selection on Figma here.

    Colour Styles

    In Figma you can save any colour as a style and apply it to fills, strokes, and text to ensure consistency throughout your designs. Any updates you make to your colour styles will be immediately reflected in your file or project, anywhere you’ve applied that style. The real-time updating means you can tweak colours and test colour combinations really fast with your actual components. This is the way that colour styles should work in a design tool, and it could also help with some specific tasks, like adjusting brand colours to meet accessibility requirements. I could see it working really well when designing a themeable template, too.

    Prototype connections

    If you have a component which appears on multiple pages, like a website header or footer, connections added to the master instance of that component will work in all of its instances. I think at the moment this only works if the master component is on the same page as its instances but it’s still really useful and can save a lot of time. Here’s some more information on creating prototypes in Figma.

    Those are just a few of the nice touches Figma has introduced that I’ve found useful. I’m really enjoying using the app – it’s a pretty exciting tool and I’m learning something new about it every day. It’s definitely my first choice design app at the moment. It’s cross-platform and there’s a free tier, so have a look, and let us know what you think.

    ]]>
    Getting started https://clearleft.com/posts/getting-started Mon, 09 Sep 2019 10:53:23 +0000 https://clearleft.com/posts/getting-started I got an email recently from a young person looking to get into web development. They wanted to know what languages they should start with, whether they should get a Mac or a Windows PC, and what some good places to learn from might be.

    I wrote back, saying this about languages:

    For web development, start with HTML, then CSS, then JavaScript (and don’t move on to JavaScript too quickly—really get to grips with HTML and CSS first).

    And this is what I said about hardware and software:

    It doesn’t matter whether you use a Mac or a Windows PC, as long as you’ve got an internet connection, some web browsers (Chrome, Firefox, for example) and a text editor. There are some very good free text editors available for Mac and PC.

    For resources, I had a trawl through links I’ve tagged with “learning” and “html” and sent along some links to free online tutorials.

    After sending that email, I figured that this list might be useful to anyone else looking to start out in web development. If you know of anyone in that situation, I hope this list might help.

    This was originally posted on my own site.

    ]]>
    Hello from the Interns https://clearleft.com/posts/a-quick-hello Thu, 05 Sep 2019 08:30:00 +0000 https://clearleft.com/posts/a-quick-hello We’re pretty excited to be part of the Clearleft 2019 Internship Programme. We’ll be sharing our design process with you throughout our journey – but before we do, let’s introduce ourselves and the project.

    Clearleft has seen past interns produce some great solutions, such as a connected audio player and a product to empower citizens in planning applications. This year we’re exploring the opportunities for user-centred design and technology to enhance primary care in the NHS (GP services).

    The NHS is undergoing both a digital and an operational transformation with the recent grouping of surgeries into Primary Care Networks (PCNs). This will enable efficiencies and greater resilience, but it’s early days. Alongside this the NHS continues to feel pressures from growing patient demand and falling GP numbers.

    This project brings doctors and designers to the table with a shared purpose of uncovering opportunities and designing for good to support our much loved NHS.

    A bit about us

    Beyza has a background in industrial design and recently finished her master’s degree in Entrepreneurship, Innovation, and Technologies from Strathclyde Business School. She has expertise in medical device design and UX, and has founded a start-up in oral health technologies. She was also the design manager at Ege University design centre, supporting innovation in the medical sector. She’s excited to join this programme to improve her UX & UI skills in healthcare technologies.

    Holly is a UX Designer with an interest in health and nutrition, in particular in the growing research area of the human microbiome and disease. She has trained in biomedicine and volunteered for the mental health charity Mind. After 12 years crafting solutions and engaging users in reducing carbon emissions she successfully completed a 3 month immersive programme in UX Design. She’s excited to have joined Clearleft in her home city where she can build her skills alongside a team of talented designers.

    Lacin is a researcher and multidisciplinary designer. Having completed an interior architecture degree she continued her studies by doing a masters degree in Design where she started to become interested in design research. Her research focuses on sensory design, biophilia and immersive environments within the healthcare industry. She is very excited to collaborate with talented people from various backgrounds and learn as much as she can about the design and research process in an agile environment.

    Discovery to implementation

    This is a three month project. We’re lucky to have a project manager, helping us to keep to our trajectory. Five days in we are deep in the discovery phase of this vast problem area, speaking with local doctors. During the second month we will be exploring different designs for our target problem before the final stage of implementation in which we will be bringing that solution to life.

    You can follow us on twitter @clearleftintern and we will be posting regular blog posts here on the Clearleft site. If you have any comments or want to share any experiences or ideas in this area you can email us.

    ]]>
    Debunking some design sprints myths https://clearleft.com/posts/debunking-some-design-sprints-myths Thu, 29 Aug 2019 14:33:00 +0000 https://clearleft.com/posts/debunking-some-design-sprints-myths We regularly use design sprints to help clients to accelerate design, unblock problems and investigate new ideas. We’re big fans of design sprints when done well. However …

    … we also find there are some common misunderstandings about the technique pioneered by Jake Knapp and the team at Google Ventures.

    During UX London Jerlyn and I ran a Design Sprint 102 workshop. As part of it, we tested 5 myths we often hear by getting people to run to different sides of a room to show if they thought a statement was true or false.

    We discovered there was little consensus in the attendees’ answers.

    So which side of the room would you go to? For each of the five myths below do you think the statement is true or false?

    Chris and Jerlyn running the Design Sprint 102 workshop at UX London 2019

    Myth 1: Design sprints only work for digital products

    Answer… False

    Design sprints are delivery medium agnostic. If you have a business challenge that you want to give focus to by exploring, creating and testing possible solutions then a design sprint can be a valuable approach.

    We’ve used design sprints to reimagine a Council’s omnichannel service delivery, to redesign billing information (including the paper version) for a utility company, as well as on many digital products.

    In his Sprint book, Jake Knapp talks about using the process for making and testing a new chocolate bar. Ex-Clearleftie Cennydd Bowles has an ethical design sprint (https://www.cennydd.com/ethical-design-sprints) to shape policy and procedures.

    The size and nature of the design problem are more important than if the solution is digital, physical or a mix of both.

    Myth 2: Design sprints are a cheap way to do quality design work

    Answer… False

    It’s easy to see the appeal to business decision-makers in the strapline from the Sprint Book. Who doesn’t want to ‘Solve Big Problems and Test New Ideas in Just Five Days’?

    In organisations where there is a perception that design takes too long and innovation is costly then a design sprint may appear to be a silver bullet.

    However, design sprints can be incredibly wasteful if you don’t focus on the right problem. After all, they involve a lot of concentrated time from a team of skilled people.

    The true value in a design sprint is in exploring new possibilities and quickly learning which ones offer value to pursue further and which ones to let go.

    Myth 3: You can test your prototypes with whoever you can find

    Answer… True but …

    The final day of the design sprint is set aside to test your ideas. This is where you learn what is useful and desirable for the intended audience.

    Our top tip is to build as little as you can to learn as much as you can. Aim to be prototype ready not production ready. It’s okay if your artefact for testing is held together with sticky tape and string.

    When it comes to who to evaluate your ideas with always test with people who belong in the space you are exploring. Shortcuts in recruiting lead to a shortfall in insight.

    When testing for usability you can get away with a less strict recruit. When testing for desirability and validating user needs then make sure you research with people who’ll use your product or service.

    Myth 4: Design sprints are a great way to show your organisation the value and benefit of design

    Answer… True but …

    As fans of design sprints, we certainly advocate using them as a low-risk and relatively low-cost method for exploring potentially high-value ideas.

    We have many examples of design sprint successes both in developing new products and services but also in engaging stakeholders in the value design and design thinking offers.

    However, it is easy for teams and stakeholders to become addicted to the energy and excitement a design sprint generates. There is often a danger that this leads to becoming blinkered to other design techniques.

    Design sprints are best as a kick-starter but lack the rigour to deliver fully considered products or services.

    Myth 5: A design sprint is a five-day-long process

    Answer: False

    The length of a design sprint is not fixed. We have taken the principles of a structured rapid design process and applied them to projects ranging from four days to three weeks. The semi-official design sprint 2.0 outlines, as a headline at least, a four-day process, although it shrinks the week by a day by moving some activities into pre and post phases of the design sprint.

    There seems to be an arms race going on to see how quickly you can run a design sprint. If you search on Medium you’ll find articles on running a one-day design sprint, topped by the five-hour design sprint that gets superseded by the three-hour version before another article trims this to a two-hour process. At some point, you don’t have space and time to explore possibilities but merely to badly execute your first obvious ideas.

    Be careful of speed design. Design sprints are best to address tricky challenges with a degree of divergent thinking. If the business challenge you face is worth investigating then it needs enough time to figure out and to play around with some design alternatives that move away from just obvious and safe solutions.

    Interested in design sprints?

    Find out more about Design Sprints at Clearleft with a collection of our thoughts and resources to help you get more from this valuable and often misunderstood design technique.

    ]]>
    Linear Interpolation Functions https://clearleft.com/posts/linear-interpolation-functions Wed, 21 Aug 2019 11:00:00 +0000 https://clearleft.com/posts/linear-interpolation-functions Linear Interpolation isn’t just for animation, it’s amazing for data manipulation and a worthwhile tool to have in your coding arsenal.

    I wrote a blog post some months back on Linear Interpolation. It was a subject I knew very little about at the time, having not done a great deal of animation work. But now I know a little more, I’ve found it’s been one of those techniques I keep coming back to for most projects.

    What I’ve learned is that interpolation isn’t just about animation, or even about visual things—it’s about data conversion.

    Aside: that might sound a bit heavy or dry, but it’s how my brain works! I love how different coding concepts ‘click’ for different people in different ways.

    Among more traditional animation-y things, I’ve used these techniques to calculate rotary dial positions on the guitar pedalboard, mapped usernames to fallback avatars on Daisie and plotted typographic graphs on a side project I’m currently building.

    The four functions

    const lerp = (x, y, a) => x * (1 - a) + y * a;
    const clamp = (a, min = 0, max = 1) => Math.min(max, Math.max(min, a));
    const invlerp = (x, y, a) => clamp((a - x) / (y - x));
    const range = (x1, y1, x2, y2, a) => lerp(x2, y2, invlerp(x1, y1, a));

    There’s a TypeScript version at the bottom of the page, if you’re that way inclined.

    Lerp

    A lerp returns the value between two numbers at a specified, decimal midpoint:

    lerp(20, 80, 0)   // 20
    lerp(20, 80, 1)   // 80
    lerp(20, 80, 0.5) // 50

    It’s great for answering gnarly maths questions like: “What number is 35% between 56 and 132?” with elegance: lerp(56, 132, 0.35). My maths skills aren’t all that, so it’s great to have these up my sleeve.
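Working that question through by hand, as a quick sanity check:

```javascript
const lerp = (x, y, a) => x * (1 - a) + y * a;

// "What number is 35% between 56 and 132?"
// 56 + (132 - 56) * 0.35 = 56 + 26.6
lerp(56, 132, 0.35); // ≈ 82.6
```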

    Here’s an example that converts a range slider set between 0 and 1, to a hsl() colour with hue degrees of 11 through 60.

    See the Pen Lerp by Trys Mudford (@trys) on CodePen.
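In code, that demo boils down to something like this (sliderToHue is an illustrative name, not the Codepen’s exact code):

```javascript
const lerp = (x, y, a) => x * (1 - a) + y * a;

// Map a 0–1 slider value onto a hue between 11 and 60 degrees
const sliderToHue = (value) => `hsl(${lerp(11, 60, value)}, 100%, 50%)`;

sliderToHue(0); // "hsl(11, 100%, 50%)"
sliderToHue(1); // "hsl(60, 100%, 50%)"
```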

    Clamp

    The clamp method is wonderfully dull. You give it a number, a minimum and a maximum. If your number falls within the bounds of the min & max, it’ll return it. If not, it’ll return either the minimum if it’s smaller, or the maximum if it’s bigger.

    clamp(24, 20, 30) // 24
    clamp(12, 20, 30) // 20
    clamp(32, 20, 30) // 30

    It’s really handy for preventing absurd numbers from entering a calculation, stopping an element from rendering off screen, or controlling the edges of a <canvas>.

    Here’s an example that lets you add or subtract 10 from the current number, but clamped between 0 and 100.

    See the Pen Clamp by Trys Mudford (@trys) on CodePen.
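A minimal sketch of that demo (nudge is an illustrative name): adjust a counter by any delta, but keep the result pinned between 0 and 100.

```javascript
const clamp = (a, min = 0, max = 1) => Math.min(max, Math.max(min, a));

// Nudge a counter up or down, but never past the 0–100 bounds
const nudge = (current, delta) => clamp(current + delta, 0, 100);

nudge(95, 10); // 100
nudge(5, -10); // 0
nudge(40, 10); // 50
```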

    Inverse Lerp

    This works in the opposite way to the lerp. Instead of passing a decimal point between two numbers, you pass any value, and it’ll return the decimal representing where that value falls on the spectrum. Internally it also uses a clamp, so you never get unwieldy values back.

    invlerp(50, 100, 75)  // 0.5
    invlerp(50, 100, 25)  // 0
    invlerp(50, 100, 125) // 1

    This is great for scroll animations. Questions like “How far through this section has the user scrolled?” can be neatly answered with code like:

    const position = el.getBoundingClientRect();
    // getBoundingClientRect() is viewport-relative, so add the scroll
    // offset to put the section's edges in the same document-relative
    // coordinate space as window.scrollY
    const howFarThrough = invlerp(
      position.top + window.scrollY,
      position.bottom + window.scrollY,
      window.scrollY
    );

    Here’s an example that tracks the percentage scroll position of a target slab against the viewport.

    See the Pen Inverse Lerp by Trys Mudford (@trys) on CodePen.

    Range

    This final method is ace. It’s a one-liner that converts a value from one data range to another. That might sound a bit arbitrary, but it’s surprisingly useful. We pass in two data ranges and a value that sits within data range one (values outside the first range are still clamped).

    //    Range 1    Range 2    Value
    range(10, 100, 2000, 20000, 50) // 10000
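Reading that call from the inside out: 50 sits 4⁄9 of the way through 10–100, so range lands 4⁄9 of the way through 2000–20000, which is 10000.

```javascript
const lerp = (x, y, a) => x * (1 - a) + y * a;
const clamp = (a, min = 0, max = 1) => Math.min(max, Math.max(min, a));
const invlerp = (x, y, a) => clamp((a - x) / (y - x));
const range = (x1, y1, x2, y2, a) => lerp(x2, y2, invlerp(x1, y1, a));

// 50 is 4/9 of the way through the first range…
const t = invlerp(10, 100, 50); // ≈ 0.444

// …so range returns the point 4/9 of the way through the second
const result = range(10, 100, 2000, 20000, 50); // 10000, give or take rounding
```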

    Taking the previous example up a notch, let’s say that as the user scrolls through a section, we want to subtly move an element down the page by 150px. The section is in the middle of the document, starting at 3214px and ending at 3892px, and we want to convert window.scrollY from the big range down to a value between 0px and 150px. That’s a pretty nasty calculation to make, but range() makes it nice and clean.

    const position = el.getBoundingClientRect();
    // As before, convert the viewport-relative edges to document
    // coordinates so they can be compared with window.scrollY
    const transformY = range(
      position.top + window.scrollY,
      position.bottom + window.scrollY,
      0,
      150,
      window.scrollY
    );

    If the user is above the section, it’ll be clamped to 0px. If they’re below, it’ll be clamped to 150px. And in all positions in between, it’ll evenly interpolate between the values.

    The final example takes the previous Codepen and maps the result against a transform: translateY range of -20% to 20%. Parallax, eat your heart out.
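That mapping can be sketched like this (parallaxY and the section offsets are illustrative; the -20 to 20 numbers mirror the Codepen):

```javascript
const lerp = (x, y, a) => x * (1 - a) + y * a;
const clamp = (a, min = 0, max = 1) => Math.min(max, Math.max(min, a));
const invlerp = (x, y, a) => clamp((a - x) / (y - x));
const range = (x1, y1, x2, y2, a) => lerp(x2, y2, invlerp(x1, y1, a));

// Map a document-relative scroll position through the section's
// start and end offsets onto a -20% to 20% translateY
const parallaxY = (scrollY, sectionTop, sectionBottom) =>
  `translateY(${range(sectionTop, sectionBottom, -20, 20, scrollY)}%)`;

parallaxY(3214, 3214, 3892); // "translateY(-20%)" — at the section top
parallaxY(3553, 3214, 3892); // "translateY(0%)"  — halfway through
```

Thanks to the internal clamp, scrolling above or below the section simply pins the transform at -20% or 20%.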

    See the Pen Range by Trys Mudford (@trys) on CodePen.

    TypeScript version

    const lerp = (x: number, y: number, a: number) => x * (1 - a) + y * a;
    const clamp = (a: number, min = 0, max = 1) => Math.min(max, Math.max(min, a));
    const invlerp = (x: number, y: number, a: number) => clamp((a - x) / (y - x));
    const range = (
      x1: number,
      y1: number,
      x2: number,
      y2: number,
      a: number
    ) => lerp(x2, y2, invlerp(x1, y1, a));

    This was originally posted on my website.
